<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Innovation Process Technology AG (ipt)</title>
    <description>The latest articles on Forem by Innovation Process Technology AG (ipt) (@ipt).</description>
    <link>https://forem.com/ipt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F9906%2F5d6310d9-6d88-4bf1-a012-dd8f8b7f8ac7.png</url>
      <title>Forem: Innovation Process Technology AG (ipt)</title>
      <link>https://forem.com/ipt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ipt"/>
    <language>en</language>
    <item>
      <title>Beyond the Pod: Why wasmCloud and WebAssembly Might Be the Next Evolution of the Platform</title>
      <dc:creator>Jakob Beckmann</dc:creator>
      <pubDate>Tue, 21 Oct 2025 05:08:44 +0000</pubDate>
      <link>https://forem.com/ipt/beyond-the-pod-why-wasmcloud-and-webassembly-might-be-the-next-evolution-of-the-platform-1i3e</link>
      <guid>https://forem.com/ipt/beyond-the-pod-why-wasmcloud-and-webassembly-might-be-the-next-evolution-of-the-platform-1i3e</guid>
      <description>&lt;p&gt;Over the past few months I have invested some time to contribute to an open source project I find fascinating: &lt;a href="https://wasmcloud.com/" rel="noopener noreferrer"&gt;wasmCloud&lt;/a&gt;. As a platform engineer and architect, I am very familiar with how software platforms are typically built in practice. However, with the ubiquity of Kubernetes, you run the risk to being stuck in the "doing it the Kubernetes way" line of thinking. But then again, are there any better ways? This is where wasmCloud caught my attention. A modern platform building on proven concepts from Kubernetes, but with some significant differences. In this article I want to introduce wasmCloud, how it compares to Kubernetes, what its internal architecture looks like, and what ideas are, in my humble opinion, a step up from "the Kubernetes way of things".&lt;/p&gt;

&lt;p&gt;Before getting started, I need to get some things out of the way. This article will make quite a few comparisons to Kubernetes and bytecode interpreters like the JVM. If you are unfamiliar with these technologies, it might make sense to have a short look at what they are. Considering you clicked on this article, however, I am guessing that you are familiar with them and have some experience in platform engineering practices, either as a power user of a platform, or as a designer and developer of one.&lt;/p&gt;

&lt;p&gt;Moreover, I want to thank the company I work for, &lt;a href="https://ipt.ch/en/" rel="noopener noreferrer"&gt;ipt&lt;/a&gt;, for allowing me to invest time in learning about new technologies such as wasmCloud. Not only is contributing to open source a great way to give back to a community powering the modern world, it is also a huge passion of mine. Being able to help the development of such projects during paid worktime enables me to learn a great deal about emerging technologies, and maybe help build the revolutionary tools of tomorrow.&lt;/p&gt;

&lt;p&gt;So... wasmCloud!? I have been interested in WebAssembly ever since it promised to replace JavaScript, a language I personally consider extremely poorly designed (someone once told me it was designed in a matter of days, so no wonder there). While WebAssembly is very far from replacing JavaScript in the browser, it has evolved into something else: an application runtime and a potential replacement for containers.&lt;/p&gt;

&lt;h1&gt;
  
  
  WebAssembly as a Platform Foundation
&lt;/h1&gt;

&lt;p&gt;Modern platforms nearly all build on top of containers as their foundational element to run executable code. This is a logical evolution from Docker's meteoric growth, and the ecosystem that grew around its open standards (such as the &lt;a href="https://opencontainers.org/" rel="noopener noreferrer"&gt;OCI - Open Container Initiative&lt;/a&gt;). While containers provide a huge step in terms of ease of use, standardization, and security compared to shipping raw artefacts to virtual machines, as was the case before them, they do have some shortcomings.&lt;/p&gt;

&lt;p&gt;First and foremost, containers are not composable. In part due to their flexibility, they do not offer standard ways of expressing how the world should interact with them at runtime, or what they rely on to perform their functionality. This means that containers are typically deployed as REST-based microservices, communicating with one another over a network using APIs agreed upon outside of the container standards. This lack of standardization makes building reusable components more challenging than it has to be. Moreover, each container essentially needs its own server, authentication, authorization, and more to run. This hurts the compute density of the platform, with significant resources spent on boilerplate.&lt;/p&gt;

&lt;p&gt;Moreover, while containers are a huge step in the right direction in terms of security, they are not quite as secure as most people are led to believe. Containers are "allow by default" constructs, which take considerable work to harden properly.&lt;/p&gt;

&lt;p&gt;Finally, due to how containers are typically built, their startup times are not great. It is not abnormal to see container start times in the tens of seconds. This does not bother people much because containers are mostly used to run long-running processes (since we need these REST APIs everywhere). However, many containers are mostly idle, waiting for some API request to come in. If one considers that workloads could be started only when actually called, startup times over 100ms are considered slow.&lt;/p&gt;

&lt;p&gt;This is where WebAssembly comes in: it addresses each of these challenges. Composability is addressed by the component model.&lt;/p&gt;

&lt;h2&gt;
  
  
  WebAssembly: The Component Model
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://component-model.bytecodealliance.org/" rel="noopener noreferrer"&gt;component model&lt;/a&gt; is a way that WebAssembly modules can be built with metadata attached to them which describe their imports and exports based on a rich type system. Moreover, they are composable such that a new component can be built from existing components as long as the imports of one are satisfied by the exports of another. This means that components can interact with one another via direct method/function calls, whose specification is fully standardized. This interface specification is declared in a language known as the WebAssembly Interface Types (WIT) language. An example of a WIT specification of a component relying on a system clock can be seen below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package wasi-example:clocks;

world mycomponent {
    import wall-clock;
}

interface wall-clock {
    record datetime {
        seconds: u64,
        nanoseconds: u32,
    }

    now: func() -&amp;gt; datetime;

    resolution: func() -&amp;gt; datetime;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;WIT can be compared to an &lt;a href="https://en.wikipedia.org/wiki/Interface_description_language" rel="noopener noreferrer"&gt;Interface Definition Language (IDL)&lt;/a&gt; such as gRPC's Protocol Buffers, but for wasm components.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This declaration says that the component relies on an interface &lt;code&gt;wall-clock&lt;/code&gt; (it &lt;code&gt;import&lt;/code&gt;s the interface) which defines two functions: &lt;code&gt;now&lt;/code&gt; and &lt;code&gt;resolution&lt;/code&gt;. Both take no arguments and return a &lt;code&gt;datetime&lt;/code&gt; object consisting of a &lt;code&gt;seconds&lt;/code&gt; and &lt;code&gt;nanoseconds&lt;/code&gt; field. This component could then be composed with any other component which exports this &lt;code&gt;wall-clock&lt;/code&gt; interface.&lt;/p&gt;

&lt;p&gt;If this were a container relying on some API, we would need to read the container image's non-standardized documentation, and then read up on other containers to ensure they provide APIs matching the ones called by the first container.&lt;/p&gt;

&lt;p&gt;The WebAssembly component model can essentially be seen as a form of contract-based programming to formalize interfaces between WebAssembly core modules.&lt;/p&gt;
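&lt;p&gt;To make the contract analogy concrete: the counterpart to the component above would be one that &lt;code&gt;export&lt;/code&gt;s the same interface. A hypothetical WIT sketch (the world name is invented, and it assumes the &lt;code&gt;wall-clock&lt;/code&gt; interface from the listing above):&lt;/p&gt;

```wit
package wasi-example:clocks;

// A provider-side world: a component built against this world supplies
// the wall-clock implementation instead of consuming it, so it can be
// composed directly with the earlier component's import.
world clock-provider {
    export wall-clock;
}
```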

&lt;h2&gt;
  
  
  WebAssembly: Secure by Default
&lt;/h2&gt;

&lt;p&gt;Whereas containers provide some form of security by namespacing processes and filesystems, WebAssembly actually sandboxes modules such that they cannot affect one another, or the host they run on. By default a WebAssembly module cannot perform any privileged action and needs to be granted explicit permission. I will not dive deeper into the details of this or I might lose myself in a rant about how abysmal software security is in this day and age.&lt;/p&gt;

&lt;h2&gt;
  
  
  WebAssembly: Performance
&lt;/h2&gt;

&lt;p&gt;One of WebAssembly's main goals is performance. This means that WebAssembly modules not only run fast, but also load and start much faster than containers. This has already proven very useful, for instance in serverless computing, where hyperscalers rely on WebAssembly runtimes to reduce cold start times and the latency of function calls.&lt;/p&gt;

&lt;p&gt;Considering the goal of avoiding long-running servers providing REST APIs in favour of raw function calls on short-lived modules, extremely short start times are imperative.&lt;/p&gt;

&lt;p&gt;Alright, so we can see that WebAssembly can be a great choice as the foundational runtime of a platform. So where are platforms leveraging this? Well, actually, quite a few "platforms" leverage this idea already. For instance, &lt;a href="https://www.spinkube.dev/" rel="noopener noreferrer"&gt;SpinKube&lt;/a&gt; does exactly this, enabling you to run WebAssembly functions on Kubernetes. However, you still interact with these functions via a REST call. Another example is &lt;a href="https://www.kubewarden.io/" rel="noopener noreferrer"&gt;Kubewarden&lt;/a&gt;, which leverages WebAssembly modules to evaluate policies. While some might argue that this is not a platform, Kubewarden provides a runtime for arbitrary programs, including their scheduling and deployment. Sounds like a platform to me.&lt;/p&gt;

&lt;p&gt;Finally: wasmCloud! wasmCloud is probably the closest thing to a full-blown platform for running WebAssembly modules. In other words, what Kubernetes is to containers, wasmCloud is to WebAssembly components. It provides a way to deploy, schedule, link, and manage the lifecycle of WebAssembly components on a distributed platform.&lt;/p&gt;

&lt;h1&gt;
  
  
  wasmCloud Architecture
&lt;/h1&gt;

&lt;p&gt;Let us look at the wasmCloud architecture a little.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This section will contain quite a few comparisons to Kubernetes concepts.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Generally, the wasmCloud architecture can be seen as quite similar to the Kubernetes architecture, with the difference being that wasmCloud does not provide as much flexibility in swapping out building blocks as Kubernetes does. This makes sense as it is a more nascent technology and is currently more opinionated.&lt;/p&gt;

&lt;p&gt;As a reference, here is the diagram wasmCloud uses to provide an overview of the platform:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn21migl5ljd0uj9zvg5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn21migl5ljd0uj9zvg5z.png" alt=" " width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As one can see, the architecture is essentially a set of hosts connected via a so-called "lattice". Thus, the architecture distributes the runtime over a set of compute instances in order to achieve resilience against hardware/compute failures. The principle is identical to Kubernetes, which provides a cluster so that workloads can quickly be shifted to different nodes in case of node failures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hosts
&lt;/h2&gt;

&lt;p&gt;wasmCloud hosts are the foundation of the compute platform that is provided. They are the equivalent of Kubernetes nodes and provide a WebAssembly runtime for components to run on. Just as with Kubernetes nodes, application developers will rarely need to worry about the hosts other than for deployment affinities and the like.&lt;/p&gt;

&lt;p&gt;In practice, hosts can be anything from a virtual machine, to an IoT device, or even a pod running on Kubernetes. In fact, hosting wasmCloud on Kubernetes is a relatively straightforward way to get started with the technology, using wasmCloud as the application runtime while Kubernetes provides the supporting services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lattice
&lt;/h2&gt;

&lt;p&gt;The wasmCloud lattice is its networking layer. This can seem a bit strange considering that the lattice is, in essence, a &lt;a href="https://nats.io/" rel="noopener noreferrer"&gt;NATS&lt;/a&gt; instance.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For those unfamiliar with NATS: it is a messaging and event streaming system comparable to Kafka, but it additionally provides a key-value store, an object store, and lightweight publish-subscribe and request-reply messaging.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Having a NATS instance as the "networking layer" confused me quite a lot at first. However, one has to remember that thanks to the component model, we no longer require HTTP/TCP network calls for our components to interact with one another. Thus we don't necessarily need an IP address to reach a component. Of course NATS itself requires a physical network in order to distribute events across its instances, but wasmCloud itself then only needs to talk to NATS.&lt;/p&gt;

&lt;p&gt;Essentially, every component exposing a function becomes a subscriber to a NATS subject for that function. Other components can then call the function via wRPC (an RPC protocol for WebAssembly components, analogous to gRPC) by publishing a call to that subject. This is quite different from Kubernetes networking, where callers need to know the location of the callee in the network. Using a subject-based addressing model simplifies deployment and improves scaling and resilience.&lt;/p&gt;

&lt;p&gt;As a user of wasmCloud, you do not need to worry about this though. How function calls are performed under the hood is abstracted away from the user.&lt;/p&gt;
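&lt;p&gt;To build an intuition for subject-based addressing, here is a deliberately simplified, pure-Go analogy. This is not wasmCloud's actual implementation (real calls go through NATS and wRPC), and the subject names are invented:&lt;/p&gt;

```go
package main

import "fmt"

// handler stands in for a function exported by a component.
type handler func(payload string) string

// lattice maps NATS-style subjects to the handler subscribed to them.
// This is purely a conceptual sketch of subject-based addressing.
type lattice map[string]handler

// subscribe registers a component's exported function under a subject.
func (l lattice) subscribe(subject string, h handler) {
	l[subject] = h
}

// request calls by subject: the caller never needs to know which host
// the callee runs on, only the subject name.
func (l lattice) request(subject, payload string) (string, error) {
	h, ok := l[subject]
	if !ok {
		return "", fmt.Errorf("no subscriber for subject %q", subject)
	}
	return h(payload), nil
}

func main() {
	l := lattice{}
	l.subscribe("default.greeter.hello", func(name string) string {
		return "hello, " + name
	})
	reply, _ := l.request("default.greeter.hello", "wasmCloud")
	fmt.Println(reply) // prints "hello, wasmCloud"
}
```

&lt;p&gt;The caller addresses a name, not a network location; the "lattice" decides where the handler actually lives.&lt;/p&gt;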

&lt;p&gt;This distributed networking aspect is one of the superpowers of wasmCloud, as one does not need to worry about how to address a component on the platform. However, it can also introduce strange behaviour in some cases. For instance, on Kubernetes it is common sense that an HTTP call to a different pod running on the cluster can fail. On wasmCloud, however, if an interface we are calling on a different component returns some type, we invoke it like a plain function call in our component's code. What if that call fails, not because of the called component, but due to a networking issue? In the current implementation of wasmCloud this leads to a panic in the caller. As this is typically not the desired outcome, efforts are underway to rework how interfaces should be designed to handle failures in the transport layer. On top of that, function calls might change to avoid using NATS as a transport when the called component lives on the same host as the caller.&lt;/p&gt;

&lt;h2&gt;
  
  
  Capabilities
&lt;/h2&gt;

&lt;p&gt;This is where Kubernetes and wasmCloud start differing in their philosophy. Thanks to the standardized way interfaces can be declared in the component model, one can describe an abstract interface which provides some functionality, without providing an implementation. This is what capabilities are. They are abstract interfaces that describe some useful functionality, such as reading and writing to a key value store, or retrieving some sensitive information from a secured environment. These capabilities are published on wasmCloud for applications to use.&lt;/p&gt;

&lt;p&gt;An application developer can then write a component that makes use of that interface whenever they need the functionality. They do not need to worry about how the capability is implemented; they simply rely on the "contract" the capability provides.&lt;/p&gt;

&lt;p&gt;In my opinion, while this is quite challenging to grasp initially, this is what makes wasmCloud so promising. Having worked on many platforms in the past, the main challenge is always how additional services can be provided on top of raw platforms such as Kubernetes in a way that makes them highly standardized yet easily consumable. In the current state of platform engineering, this quickly becomes a question of good product management. Unfortunately, doing this correctly is surprisingly difficult. Capabilities provide a technical solution to this, with the main limitation being their complete incompatibility with existing software.&lt;/p&gt;

&lt;h2&gt;
  
  
  Providers
&lt;/h2&gt;

&lt;p&gt;A provider is a concrete implementation of a capability. For instance, taking the example of the capability for reading and writing to a key-value store, a provider might implement it with a &lt;a href="https://valkey.io/" rel="noopener noreferrer"&gt;Valkey&lt;/a&gt; instance backing the capability. Another provider might implement the very same capability using NATS, Redis, or even an in-memory key-value store.&lt;/p&gt;

&lt;p&gt;Abstracting the provider away from the consumer via a capability enables the platform to swap providers based on needs. Of course performing such a swap might be quite complex, for instance involving a data migration from NATS to ValKey. However, the beauty is that the applications do not require any changes as would be the case in traditional platforms.&lt;/p&gt;
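&lt;p&gt;The capability/provider split is essentially programming against an interface, lifted to the platform level. As a loose analogy (hypothetical names, plain Go rather than WebAssembly):&lt;/p&gt;

```go
package main

import "fmt"

// KeyValue plays the role of a capability: an abstract contract that
// application code programs against, with no mention of the backend.
type KeyValue interface {
	Get(key string) (string, bool)
	Set(key, value string)
}

// memoryKV plays the role of a provider: one concrete implementation.
// A Valkey- or NATS-backed provider would satisfy the same interface.
type memoryKV map[string]string

func (m memoryKV) Get(key string) (string, bool) {
	v, ok := m[key]
	return v, ok
}

func (m memoryKV) Set(key, value string) { m[key] = value }

// greet is "component" code: it only ever sees the capability, so the
// platform could swap the provider without this function changing.
func greet(kv KeyValue, user string) string {
	if _, seen := kv.Get(user); seen {
		return "welcome back, " + user
	}
	kv.Set(user, "seen")
	return "hello, " + user
}

func main() {
	kv := memoryKV{}
	fmt.Println(greet(kv, "alice")) // prints "hello, alice"
	fmt.Println(greet(kv, "alice")) // prints "welcome back, alice"
}
```

&lt;p&gt;Swapping &lt;code&gt;memoryKV&lt;/code&gt; for a Valkey-backed implementation would leave &lt;code&gt;greet&lt;/code&gt; untouched, which is exactly the property capabilities give applications on the platform.&lt;/p&gt;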

&lt;p&gt;It should be noted that a provider might run completely outside of wasmCloud itself. However, wasmCloud also ships internal providers that are baked into the hosts themselves, offering functionality such as logging or randomness.&lt;/p&gt;

&lt;h2&gt;
  
  
  Components
&lt;/h2&gt;

&lt;p&gt;Components are the WebAssembly payloads that contain your business logic. In the traditional sense, this is your application. However, in wasmCloud lingo, an application is a set of interlinked components including all information about the capabilities they require.&lt;/p&gt;

&lt;h2&gt;
  
  
  Applications
&lt;/h2&gt;

&lt;p&gt;Applications are an abstraction that lets you declaratively combine components, capabilities, and providers into a deployable unit. Applications are based on the &lt;a href="https://oam.dev/" rel="noopener noreferrer"&gt;open application model (OAM)&lt;/a&gt; and should thus look quite familiar to people working with Kubernetes. In terms of definition, they are similar to a Kubernetes Deployment, describing not only the deployment unit (a component, or a pod in the Kubernetes context), but also its replication, affinities, links to capabilities, etc.&lt;/p&gt;
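&lt;p&gt;As a rough illustration, an OAM-style application manifest might look like the sketch below. The images are placeholders, and the exact trait names and fields should be checked against the wasmCloud documentation for the version you run:&lt;/p&gt;

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: hello-world
spec:
  components:
    - name: hello
      type: component
      properties:
        image: ghcr.io/example/hello-component:0.1.0  # placeholder image
      traits:
        - type: spreadscaler  # replication, akin to Deployment replicas
          properties:
            instances: 3
    - name: httpserver
      type: capability
      properties:
        image: ghcr.io/example/http-server:0.1.0  # placeholder provider
      traits:
        - type: link  # wires the provider to the component
          properties:
            target: hello
            namespace: wasi
            package: http
            interfaces: [incoming-handler]
```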

&lt;blockquote&gt;
&lt;p&gt;It should be noted that in wasmCloud v2, applications are reworked to be much more closely modelled after Kubernetes Deployments and ReplicaSets. Version 2 drops the idea of Applications altogether and uses &lt;code&gt;Workload&lt;/code&gt;, &lt;code&gt;WorkloadReplicaSets&lt;/code&gt;, and &lt;code&gt;WorkloadDeployments&lt;/code&gt; objects, which are no longer tied to the OAM. In all likelihood we will write another blog post showcasing the composition capabilities of version 2 in the future.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  wadm
&lt;/h2&gt;

&lt;p&gt;The wasmCloud Application Deployment Manager (wadm) manages Applications. It can be seen as the wasmCloud counterpart of Kubernetes' Deployment controller. It essentially orchestrates the deployment of components, capabilities, their links, etc. on the platform. This construct will also be dropped in wasmCloud version 2.&lt;/p&gt;

&lt;h1&gt;
  
  
  Verdict
&lt;/h1&gt;

&lt;p&gt;With a decent understanding of the architecture, we can now get an idea of how wasmCloud fares in the real world. While I have not yet run anything production-grade on wasmCloud, I have played with the platform a lot over the past few months, and have come to really appreciate some of its innovative ideas.&lt;/p&gt;

&lt;p&gt;Thus, to summarise my experience: wasmCloud is a relatively new platform that provides interesting new approaches to modelling inter-component communication. On top of that, it does so while building on open standards such as WebAssembly and the component model, so that the business logic of your application remains portable. While these new concepts are very promising, wasmCloud still suffers from a couple of drawbacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For people unfamiliar with WebAssembly, it has quite a steep learning curve. This is even more pronounced for people unfamiliar with existing platforms such as Kubernetes.&lt;/li&gt;
&lt;li&gt;The set of supported providers and capabilities is extremely small to date. This will of course grow as adoption increases, but currently early adopters will have to write their own providers most of the time and will not be able to rely on third-party components.&lt;/li&gt;
&lt;li&gt;As wasmCloud shifts more responsibility to the platform level, it will require a strong platform team to operate this with low developer friction. This can be an issue as finding highly skilled platform engineers is quite difficult at the moment. However, the team behind wasmCloud is focused on making application delivery as frictionless as possible.&lt;/li&gt;
&lt;li&gt;Finally, I am not sure I currently understand the security model wasmCloud uses to authenticate and authorize calls between components. While I am not sure this is a drawback, it does not yet feel as intuitive as Kubernetes' simple yet relatively powerful RBAC. I will have to dive deeper into this to form a final opinion on it though (another blog post might follow).&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devex</category>
      <category>webassembly</category>
      <category>cloudnative</category>
      <category>serverless</category>
    </item>
    <item>
      <title>That Time We Found a Service Account Token in my Log Files</title>
      <dc:creator>Vincent von Büren</dc:creator>
      <pubDate>Thu, 04 Sep 2025 07:59:04 +0000</pubDate>
      <link>https://forem.com/ipt/that-time-i-found-a-service-account-token-in-my-log-files-4d00</link>
      <guid>https://forem.com/ipt/that-time-i-found-a-service-account-token-in-my-log-files-4d00</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;Disclaimer&lt;/strong&gt;&lt;br&gt;
This article assumes you're already somewhat familiar with Kubernetes concepts (Pods, ServiceAccounts) and the basics of JSON Web Tokens (JWTs).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It was a &lt;strong&gt;Tuesday&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Nothing special - just your average day as a platform engineer. My team's notifications were mercifully quiet, and I thought, "Perfect, I can finally clean up that old Helm chart that's been bothering me."&lt;/p&gt;

&lt;p&gt;I opened the repo of the underlying image written in Go to double-check the config before merging.&lt;/p&gt;

&lt;p&gt;Before I even got far, a colleague — Martin Odermatt — pinged me:&lt;/p&gt;

&lt;p&gt;“You might want to take a look at this…”&lt;/p&gt;

&lt;p&gt;He had spotted something concerning in the code:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;log.Println("SA Token:", token)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Wait. What?&lt;/p&gt;

&lt;p&gt;A debug statement. Still in production code. Logging an actual Kubernetes ServiceAccount token. Not cool...&lt;/p&gt;

&lt;p&gt;I paused. My heart rate didn’t. Curious but mostly horrified, I took the token Martin had found and decoded the payload in my shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"iss"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://kubernetes.default.svc.cluster.local"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"kubernetes.io/serviceaccount/namespace"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"payments"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"kubernetes.io/serviceaccount/secret.name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"payments-token-6gh49"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"kubernetes.io/serviceaccount/service-account.name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"payments-sa"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"kubernetes.io/serviceaccount/service-account.uid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"f9a2c144-11b3-4eb0-9f30-3c2a5063e2e7"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"aud"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://kubernetes.default.svc.cluster.local"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"sub"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"system:serviceaccount:payments:payments-sa"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"exp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1788201600&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Sat&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Aug&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2026&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;GMT&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"iat"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1756665600&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Fri&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Aug&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2025&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;GMT&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Default audience claim. A 1-year expiry.&lt;/p&gt;

&lt;p&gt;As we dug deeper, Martin Odermatt pointed out the underlying issue: this was a legacy ServiceAccount token, and we should be using projected tokens instead.&lt;/p&gt;

&lt;p&gt;This "bad boy" wasn't just a dev leftover - it was a high-privilege token with zero constraints floating around in plaintext logs!&lt;/p&gt;




&lt;h3&gt;
  
  
  What This Article Covers
&lt;/h3&gt;

&lt;p&gt;In this post, I'll guide you through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The inner workings of Vault authentication with JWT and Kubernetes methods&lt;/li&gt;
&lt;li&gt;What Kubernetes ServiceAccounts and their tokens are, and how they’re (mis)used&lt;/li&gt;
&lt;li&gt;How projected ServiceAccount tokens fix many of the hidden dangers of older token behavior&lt;/li&gt;
&lt;li&gt;Why you should start adopting token projection and Vault integration today&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We'll cover real-world use cases, implementation tips, and common pitfalls - so you don't end up like I did, staring at a:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"SA token:"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;...and wondering how close you just came to a security incident.&lt;/p&gt;




&lt;h3&gt;
  
  
  Why This Matters
&lt;/h3&gt;

&lt;p&gt;To really understand why that log statement gave me chills, we need to unpack a few core concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is a JWT?&lt;/li&gt;
&lt;li&gt;How do Kubernetes ServiceAccounts and their tokens work?&lt;/li&gt;
&lt;li&gt;And what role do these tokens play in authenticating to systems like Vault?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's start with the fundamentals.&lt;/p&gt;




&lt;h3&gt;
  
  
  What Is a JWT?
&lt;/h3&gt;

&lt;p&gt;If you've been around authentication systems long enough, you've probably seen one of these beasts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;eyJhbGciOiJSUzI&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="err"&gt;NiIsInR&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="err"&gt;cCI&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="err"&gt;IkpXVCJ&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;This is a JSON Web Token (short: JWT).&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It's a compact, URL-safe format for representing claims between two parties. They're used everywhere: web apps, APIs, and yes — inside your Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;A JWT consists of three parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Header&lt;/strong&gt; – declares the algorithm used to sign the token (e.g. RS256)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Payload&lt;/strong&gt; – contains the claims (who you are, what you're allowed to do, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Signature&lt;/strong&gt; – a cryptographic seal that verifies the payload hasn't been tampered with&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Claims are the heart of a JWT — key-value pairs that describe who the token refers to and what it can be used for. &lt;/p&gt;

&lt;p&gt;They can be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Standard claims defined by the spec (e.g., &lt;code&gt;iss&lt;/code&gt;, &lt;code&gt;sub&lt;/code&gt;, &lt;code&gt;exp&lt;/code&gt;, &lt;code&gt;aud&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Custom claims added by the issuer for domain-specific needs&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Closer Look at &lt;code&gt;aud&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;audience&lt;/strong&gt; (&lt;code&gt;aud&lt;/code&gt;) claim tells &lt;strong&gt;who the token is meant for&lt;/strong&gt;. Think of it as the intended recipient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; Imagine a Coldplay concert ticket. It says &lt;em&gt;valid for Stadium X on 01-09-2025&lt;/em&gt;. You can't take the same ticket and use it at Stadium Y — they'll reject it (...trust me, I tried).&lt;/p&gt;

&lt;p&gt;A JWT works the same way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the token has &lt;code&gt;"aud": "https://kubernetes.default.svc"&lt;/code&gt;, then only the Kubernetes API server should accept it.&lt;/li&gt;
&lt;li&gt;If some other service receives that token, the &lt;code&gt;aud&lt;/code&gt; won't match and the token must be rejected.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without this check, a token could be misused anywhere that trusts the signing key. With &lt;code&gt;aud&lt;/code&gt;, it's scoped to the right system.&lt;/p&gt;




&lt;h3&gt;
  
  
  Kubernetes and ServiceAccounts
&lt;/h3&gt;

&lt;p&gt;Kubernetes is an open-source platform that orchestrates containers at scale. At its heart is the &lt;strong&gt;Pod&lt;/strong&gt; — the smallest deployable unit.&lt;/p&gt;

&lt;p&gt;But every pod needs an identity. That's where &lt;strong&gt;ServiceAccounts&lt;/strong&gt; come in.&lt;/p&gt;

&lt;h4&gt;
  
  
  ServiceAccounts 101
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Every Pod references a ServiceAccount (&lt;code&gt;default&lt;/code&gt; if none is set); its token is mounted automatically unless automounting is disabled&lt;/li&gt;
&lt;li&gt;Kubernetes mounts the identity at:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/var/run/secrets/kubernetes.io/serviceaccount/token
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;That token is a &lt;strong&gt;JWT&lt;/strong&gt;, signed by the Kubernetes control plane&lt;/li&gt;
&lt;li&gt;It lets the pod authenticate with the API server — and sometimes even external systems like Vault&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  The Catch
&lt;/h4&gt;

&lt;p&gt;Until recently, these tokens came with dangerous defaults:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Long-lived (often valid for a year)&lt;/li&gt;
&lt;li&gt;Prior to Kubernetes v1.24, bound to no audience beyond the generic API server one (&lt;code&gt;https://kubernetes.default.svc&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Automatically mounted into every pod, even if unused&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Enter Vault: The Gatekeeper of Secrets
&lt;/h3&gt;

&lt;p&gt;HashiCorp Vault is your cluster’s paranoid librarian:&lt;br&gt;
it stores API keys, certs, passwords — and only hands them out when it's sure you should have them.&lt;/p&gt;

&lt;p&gt;How? &lt;strong&gt;Authentication methods.&lt;/strong&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Vault Authentication Methods
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Username &amp;amp; password&lt;/li&gt;
&lt;li&gt;AppRole&lt;/li&gt;
&lt;li&gt;LDAP&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Kubernetes&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;JWT&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's zoom into the last two.&lt;/p&gt;


&lt;h4&gt;
  
  
  Kubernetes Auth Method
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Pod sends its mounted ServiceAccount token to Vault&lt;/li&gt;
&lt;li&gt;Vault validates it against the Kubernetes API&lt;/li&gt;
&lt;li&gt;If valid, Vault maps it to a policy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is simple and works well when Vault runs inside the cluster.&lt;/p&gt;
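&lt;p&gt;The login step boils down to a single POST against Vault's &lt;code&gt;/v1/auth/kubernetes/login&lt;/code&gt; endpoint. The sketch below only builds the request body; the role name and the commented-out Vault address are placeholders for your own setup:&lt;/p&gt;

```go
package main

import (
	"encoding/json"
	"fmt"
)

// loginPayload builds the request body for Vault's Kubernetes auth
// endpoint (POST {vault-addr}/v1/auth/kubernetes/login).
func loginPayload(role, jwt string) ([]byte, error) {
	return json.Marshal(map[string]string{
		"role": role, // Vault role name; "demo-role" below is a placeholder
		"jwt":  jwt,  // the mounted ServiceAccount token
	})
}

func main() {
	body, err := loginPayload("demo-role", "eyJhbGciOi...") // truncated token for illustration
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
	// To actually log in (requires a reachable Vault):
	//   resp, err := http.Post(vaultAddr+"/v1/auth/kubernetes/login",
	//       "application/json", bytes.NewReader(body))
}
```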


&lt;h4&gt;
  
  
  JWT Auth Method
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Vault verifies the JWT itself (signature, claims, expiration)&lt;/li&gt;
&lt;li&gt;No need for Kubernetes API access&lt;/li&gt;
&lt;li&gt;More portable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Rule of thumb:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;strong&gt;Kubernetes&lt;/strong&gt; if Vault runs inside your cluster and simplicity matters&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;JWT&lt;/strong&gt; if you want portability, stronger boundaries, and flexibility&lt;/li&gt;
&lt;/ul&gt;


&lt;h3&gt;
  
  
  Projected Tokens: Because It's 2025
&lt;/h3&gt;

&lt;p&gt;Old tokens were static and long-lived — exactly what we were looking at here. As Martin pointed out during the investigation, projected tokens are designed to fix this entire class of problems.&lt;/p&gt;

&lt;p&gt;Instead of mounting a one-year token into every pod, Kubernetes can now generate &lt;strong&gt;short-lived, audience-bound tokens on demand.&lt;/strong&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  What You Get
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Short TTL (e.g. 10 minutes)&lt;/li&gt;
&lt;li&gt;Audience restrictions (&lt;code&gt;aud: vault&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Automatic rotation by &lt;code&gt;kubelet&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;No automatic mounting into pods&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Example Pod with Projected Token
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;projected-token-test-pod&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;serviceAccountName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;projected-auth-sa&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;projected-auth-test&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo/vault-curl:latest&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sleep"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3600"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;token&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/run/secrets/projected&lt;/span&gt;
          &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;token&lt;/span&gt;
      &lt;span class="na"&gt;projected&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;sources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;serviceAccountToken&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;token&lt;/span&gt;
              &lt;span class="na"&gt;expirationSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;600&lt;/span&gt;
              &lt;span class="na"&gt;audience&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vault&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Why Vault Loves This
&lt;/h3&gt;

&lt;p&gt;Vault's JWT auth method is tailor-made for projected tokens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It parses and verifies the JWT signature (via a configured PEM key or JWKS endpoint)&lt;/li&gt;
&lt;li&gt;Validates all claims (&lt;code&gt;aud&lt;/code&gt;, &lt;code&gt;sub&lt;/code&gt;, &lt;code&gt;exp&lt;/code&gt;, &lt;code&gt;iss&lt;/code&gt;) locally&lt;/li&gt;
&lt;li&gt;Issues secrets only if every check passes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Minimal dependencies. Strong claim validation. Secure, verifiable checks.&lt;/p&gt;


&lt;h3&gt;
  
  
  Back to the Log
&lt;/h3&gt;

&lt;p&gt;Imagine you stumble upon this in a Go app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Auth Token:"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Old world:&lt;/strong&gt; a one-year, cluster-wide token with no audience. A time bomb.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;New world:&lt;/strong&gt; a 10-minute token, scoped to Vault, rotating automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's still bad to log tokens — but at least it's not catastrophic.&lt;/p&gt;
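&lt;p&gt;And if you genuinely need to correlate requests by token in your logs, a hashed fingerprint is a safer pattern. The helper below is a hypothetical sketch, not part of any particular library:&lt;/p&gt;

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"log"
)

// fingerprint returns a short, non-reversible identifier for a token.
// It lets you correlate log lines without ever exposing the credential.
func fingerprint(token string) string {
	sum := sha256.Sum256([]byte(token))
	return fmt.Sprintf("sha256:%x", sum[:6])
}

func main() {
	token := "eyJhbGciOiJSUzI1NiJ9.payload.signature" // placeholder token
	// Instead of log.Println("Auth Token:", token):
	log.Println("auth token fingerprint:", fingerprint(token))
}
```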




&lt;h3&gt;
  
  
  Try It Yourself: Vault + K8s AuthN Lab
&lt;/h3&gt;

&lt;p&gt;I've built a hands-on demo repo where you can test this locally with KIND (Kubernetes in Docker) and Vault Helm charts:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/VincentvonBueren/erfa-projected-sa-token" rel="noopener noreferrer"&gt;GitHub: VincentvonBueren/erfa-projected-sa-token&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  What's Inside
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;KIND cluster with Vault&lt;/li&gt;
&lt;li&gt;Both Kubernetes and JWT auth methods enabled&lt;/li&gt;
&lt;li&gt;Vault policies and roles&lt;/li&gt;
&lt;li&gt;Four demo pods:

&lt;ul&gt;
&lt;li&gt;Kubernetes auth method&lt;/li&gt;
&lt;li&gt;JWT with static token&lt;/li&gt;
&lt;li&gt;JWT with projected token&lt;/li&gt;
&lt;li&gt;JWT with wrong audience (failure demo)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h3&gt;
  
  
  Final Drop 🎤
&lt;/h3&gt;

&lt;p&gt;If your pods still run with default, long-lived tokens:&lt;br&gt;
you’re one debug log away from giving away the keys to your cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Projected tokens aren't optional. They're essential.&lt;/strong&gt;&lt;br&gt;
Adopt them today — and stop shipping security disasters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Acknowledgment
&lt;/h3&gt;

&lt;p&gt;The discovery of the exposed ServiceAccount token — and the push towards using projected tokens — came from my dear fellow engineer Martin Odermatt, whose input significantly shaped this investigation and motivated me to tell this story.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>serviceaccount</category>
      <category>security</category>
      <category>jwt</category>
    </item>
    <item>
      <title>Dissecting Kubewarden: Internals, How It's Built, and Its Place Among Policy Engines</title>
      <dc:creator>Jakob Beckmann</dc:creator>
      <pubDate>Mon, 07 Jul 2025 05:31:30 +0000</pubDate>
      <link>https://forem.com/ipt/dissecting-kubewarden-internals-how-its-built-and-its-place-among-policy-engines-57gf</link>
      <guid>https://forem.com/ipt/dissecting-kubewarden-internals-how-its-built-and-its-place-among-policy-engines-57gf</guid>
      <description>&lt;p&gt;Kubernetes offers amazing capabilities to improve compute density compared to older runtimes such as virtual machines. However, in oder to leverage the capabilities of the platform, these tend to host applications from various tenants. This introduces a strong need for properly crafted controls and well-defined compliance to ensure the tenants use the platform correctly and do not affect one another. The RBAC capabilities provided out of the box by Kubernetes are quickly insufficient to address this need. This is where policy engines such as &lt;a href="https://www.kubewarden.io/" rel="noopener noreferrer"&gt;Kubewarden&lt;/a&gt; come into play. In this post we will look at how Kubewarden can be leveraged to ensure correct usage of a platform, how it compares to other policy engines, and how to best adopt it.&lt;/p&gt;

&lt;h1&gt;
  
  
  Policy Engines
&lt;/h1&gt;

&lt;p&gt;Kubernetes provides role-based access control (RBAC) out of the box to control what actions can be performed against the Kubernetes API. Generally, RBAC works by assigning sets of roles to users or groups of users. Capabilities are attached to these roles, and users having a role obtain these capabilities. This simple mechanism is very powerful, mostly because it is quite flexible while allowing a simple overview of a user's capabilities. However, in the case of Kubernetes, the definition of capabilities is very restricted. Roles only allow or deny access to Kubernetes API endpoints, but do not allow control based on payload content. This means that these capabilities are mostly restricted to CRUD operations on Kubernetes primitives (e.g. &lt;code&gt;Deployments&lt;/code&gt;, &lt;code&gt;Ingresses&lt;/code&gt;, or custom resources). Unfortunately, this is often not enough.&lt;/p&gt;

&lt;p&gt;For instance, it is quite common to allow users to perform actions on some primitives under specific conditions. An example would be that creating &lt;code&gt;Deployment&lt;/code&gt;s is only allowed as long as their names follow some convention and the pods they create are not privileged and set proper resource requests/limits. The naming convention cannot be enforced by standard RBAC controls as these cannot represent more complex logic. Controlling the configuration of the pods created by a &lt;code&gt;Deployment&lt;/code&gt; is a validation of the payload pushed to the API, and is thus not supported either.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Security contexts and resources on pods can be controlled via methods such as Security Context Constraints or Pod Security Policies and ResourceQuotas. However, these do not reject the creation of the deployment, but will only block the creation of the pods themselves. It is therefore possible to apply a Deployment that is known to not allow the creation of pods. In my personal opinion this is not ideal, as it does not fail early.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;These scenarios are where policy engines come into play. They utilise Kubernetes' Dynamic Access Control mechanisms to enable cluster administrators to manage permissions using more complex logic. The exact capabilities of policy engines can vary greatly as these are essentially arbitrary software that validates or mutates Kubernetes requests. However, the majority of major policy engines work similarly. They tend to implement the operator pattern, enabling the configuration of policies using Kubernetes custom resources. In this blog post we will have a look at Kubewarden in more detail, and how it compares to other engines.&lt;/p&gt;

&lt;h1&gt;
  
  
  Kubewarden Architecture
&lt;/h1&gt;

&lt;p&gt;Kubewarden leverages &lt;a href="https://webassembly.org/" rel="noopener noreferrer"&gt;WebAssembly (WASM)&lt;/a&gt; to enable extremely flexible policy evaluation. Essentially, Kubewarden can be seen as a WASM module orchestrator where policies are deployed as serverless functions that get called when necessary. The result of these WASM functions then determines whether an API request against Kubernetes is allowed, denied, or altered (mutated).&lt;/p&gt;

&lt;p&gt;This simile can also help explain Kubewarden's architecture. Essentially, the Kubewarden controller (operator) manages policy servers and admission policies. Policy servers can be seen as hosts for the serverless execution of functions, whereas admission policies are the functions themselves. Therefore, in order to perform policy validation, one needs at least one policy server running to host the policies one wants to enforce. The controller then takes care of configuring the runtime (policy server) to properly run the adequate policy executable with the appropriate inputs when a policy needs to be evaluated. The diagram below illustrates this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7x0xmdqg4ql6gmp3te9h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7x0xmdqg4ql6gmp3te9h.png" alt="A policy server's internal architecture" width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As policies are WASM modules, they can themselves support configuration. This makes policy reuse a major feature of Kubewarden. Complex logic can be contained in the WASM module while exposing some tuning as configuration, allowing a policy to perform a relatively generic task. To understand this better, let us have a look at such a policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;policies.kubewarden.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterAdmissionPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cel-policy-replica-example"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;module&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry://ghcr.io/kubewarden/policies/cel-policy:v1.0.0&lt;/span&gt;
  &lt;span class="na"&gt;backgroundAudit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;protect&lt;/span&gt;
  &lt;span class="na"&gt;mutating&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;policyServer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;apps"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;apiVersions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;v1"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;operations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CREATE"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;UPDATE"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;deployments"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;settings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;replicas"&lt;/span&gt;
        &lt;span class="na"&gt;expression&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;object.spec.replicas"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;maxreplicas&lt;/span&gt;
        &lt;span class="na"&gt;expression&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;int(5)&lt;/span&gt;
    &lt;span class="na"&gt;validations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;expression&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;variables.replicas&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;lt;=&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;variables.maxreplicas"&lt;/span&gt;
        &lt;span class="na"&gt;messageExpression&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'the&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;number&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;of&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;replicas&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;must&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;be&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;less&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;than&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;or&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;equal&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;+&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;string(variables.maxreplicas)"&lt;/span&gt;
  &lt;span class="na"&gt;namespaceSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, we are using a WASM module which evaluates a &lt;a href="https://cel.dev/" rel="noopener noreferrer"&gt;Common Expression Language (CEL)&lt;/a&gt; expression to define our policy. Evaluating a CEL expression is not something we want to implement every time ourselves. Thankfully, Kubewarden provides this as a WASM module on their &lt;a href="https://artifacthub.io/packages/search?kind=13&amp;amp;sort=relevance&amp;amp;page=1" rel="noopener noreferrer"&gt;Artifact Hub&lt;/a&gt;. Thus we do not need to implement anything and can reuse that module. It is referenced on the &lt;code&gt;module&lt;/code&gt; line above. Of course we also need to actually define the CEL expression that should be the heart of the policy rule. This is done within the &lt;code&gt;settings&lt;/code&gt; block. Note how we can use object internals (such as replicas defined in a &lt;code&gt;Deployment&lt;/code&gt;) in the validation expression. Finally, we need to define on what objects this policy should be evaluated. In order to do this, we provide &lt;code&gt;rules&lt;/code&gt; that tell Kubewarden on what Kubernetes API endpoints to trigger the policy, and additionally provide information about which namespaces should be affected by the policy with a &lt;code&gt;namespaceSelector&lt;/code&gt;. The remaining options configure the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;backgroundAudit&lt;/code&gt;: informs Kubewarden to report on this policy for objects that are already deployed. In this case, we validate the replicas on created or updated &lt;code&gt;Deployment&lt;/code&gt; objects. However, there might already be &lt;code&gt;Deployments&lt;/code&gt; on the cluster that violate the policy before we start enforcing it. This option will tell Kubewarden to provide reports on such violations.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;mode&lt;/code&gt;: Kubewarden supports enforcing policies (in &lt;code&gt;protect&lt;/code&gt; mode), or monitoring the cluster (in &lt;code&gt;monitor&lt;/code&gt; mode). Using the &lt;code&gt;monitor&lt;/code&gt; mode can be interesting when investigating how people use the Kubernetes cluster or providing them with warnings before enforcing policies.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;mutating&lt;/code&gt;: policies can also mutate (change) requests. In this case we are only performing validation to potentially reject requests. Thus we set &lt;code&gt;mutating&lt;/code&gt; to &lt;code&gt;false&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;policyServer&lt;/code&gt;: as explained above, Kubewarden can manage many policy servers. This simply informs
the controller on which policy server this specific policy should be deployed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As one can see based on the sample policy above, while Kubewarden technically uses programs as policies, it is usually not necessary to write any code to use Kubewarden. This is thanks to its strong focus on module configuration and reusability. The above CEL module alone already enables the configuration of a very wide range of policies. On top of that, other modules shared on Artifact Hub provide more specific validations or mutations that might incorporate more complex logic. If this is not enough, policy groups (a feature we will not cover in this post) can be utilised to combine other policies and express more complex logic as well. Finally, if one has very specific needs that cannot be addressed by any of the publicly shared modules, one can still fall back to writing code and building one's own module with fully arbitrary logic. How such policies can be written, in actual code, might follow in a separate blog post.&lt;/p&gt;

&lt;p&gt;The above architecture of Kubewarden is what makes it stand apart from most other policy engines. Generally policy engines contain the logic fully in the controller, only exposing configuration via the custom resource. Since Kubewarden can essentially execute arbitrary WASM bytecode, it is not bound by the expressiveness of the custom resource declaration.&lt;/p&gt;

&lt;p&gt;All this considered, is Kubewarden the best choice for a policy engine and should be used in all scenarios?&lt;/p&gt;

&lt;h1&gt;
  
  
  Comparison
&lt;/h1&gt;

&lt;p&gt;There are many other policy engines out there, such as &lt;a href="https://kyverno.io/" rel="noopener noreferrer"&gt;Kyverno&lt;/a&gt;, &lt;a href="https://open-policy-agent.github.io/gatekeeper/website/" rel="noopener noreferrer"&gt;Gatekeeper&lt;/a&gt;, or &lt;a href="https://www.fairwinds.com/polaris" rel="noopener noreferrer"&gt;Polaris&lt;/a&gt;. So why would you choose Kubewarden over any other?&lt;/p&gt;

&lt;p&gt;As explained above, Kubewarden provides unprecedented flexibility, thanks to the way it evaluates its policies. This has the massive advantage that you will never reach a point where a policy you would like to enforce is beyond the engine's capabilities. However, it also has some drawbacks, the primary one being complexity. Writing WASM modules is not for the faint-hearted, as WebAssembly is not yet particularly mature, and most developers will not be familiar with it. The complexity issue can, however, be sidestepped, as the vast majority of policies can be expressed using off-the-shelf WASM modules provided by Kubewarden.&lt;/p&gt;

&lt;p&gt;Another aspect that often needs to be considered in enterprise contexts is support. Kubewarden is an open source project that is loosely backed by SUSE (as it was originally developed for its Rancher offering). Thus enterprise support is only available via a SUSE Rancher Prime subscription. Other tools such as Kyverno are not only more mature, but offer more flexible enterprise support (via Nirmata).&lt;/p&gt;

&lt;p&gt;Finally, another aspect to consider is the feature set of a policy engine. Not all policy engines support mutating requests, and those that do not are much more restricted in their use. However, in this category Kubewarden offers all the features typically desired from policy engines. Some engines such as Kyverno support additional features such as synchronizing &lt;code&gt;Secret&lt;/code&gt; objects. While this can be useful, it is, in my humble opinion, not a feature for a policy engine.&lt;/p&gt;

&lt;p&gt;Of course, there are also personal preference aspects to consider. As an example, Kubewarden and Kyverno handle policy exceptions very differently. Kubewarden has matchers that can be defined as part of the policy itself, which allow excluding some resources from validation. Kyverno, on the other hand, uses a separate CRD called &lt;a href="https://kyverno.io/docs/exceptions/" rel="noopener noreferrer"&gt;&lt;code&gt;PolicyException&lt;/code&gt;&lt;/a&gt;. Both have advantages and disadvantages.&lt;/p&gt;

&lt;h1&gt;
  
  
  Verdict
&lt;/h1&gt;

&lt;p&gt;Kubewarden is a very interesting piece of software. Its internal architecture enables it to be incredibly flexible, at the cost of complexity. However, due to a smart concept of WebAssembly module re-use, that complexity is mostly under the hood, unless one wants or needs to dive deep. In my opinion, Kubewarden can be a great choice when one operates very large Kubernetes clusters that might have quite exceptional requirements. However, even in these cases, I would recommend starting small and gradually building up to the complexity Kubewarden holds in store.&lt;/p&gt;

&lt;p&gt;If you do not operate a large Kubernetes fleet, or expect to have rather standard requirements in terms of how you want to restrict access to your cluster(s), you might be better off with more mature and simpler tools like Kyverno. Getting support for these tools is likely to also be much simpler.&lt;/p&gt;

&lt;p&gt;A large part of the complexity of Kubewarden also comes from all that is required to even run it in an enterprise context. Unless you allow pulling WASM modules directly from the internet, you will also need a registry to host OCI-packaged modules. On top of that, should you decide to write your own modules, you will need a process to do this, and to build know-how in that area. These are some of the aspects I hope to cover in a follow-up post.&lt;/p&gt;

</description>
      <category>sre</category>
      <category>kubernetes</category>
      <category>security</category>
    </item>
    <item>
      <title>Your Cluster Deserves Better Traffic Management. Enter: Gateway API.</title>
      <dc:creator>Ignacio de los Rios</dc:creator>
      <pubDate>Thu, 05 Jun 2025 09:07:17 +0000</pubDate>
      <link>https://forem.com/ipt/your-cluster-deserves-better-traffic-management-enter-gateway-api-3ago</link>
      <guid>https://forem.com/ipt/your-cluster-deserves-better-traffic-management-enter-gateway-api-3ago</guid>
      <description>&lt;p&gt;A Kubernetes cluster is like a beautifully designed city—with excellent plumbing, stable buildings, and all essential services—but no roads leading in or out. No visitors, no deliveries, no exits. If you want your application or services to interact with the outside world (say, users), you’ll need to pave some highways. Kubernetes has traditionally offered a few ways to do this—some better than others. Now, there’s a new infrastructure project in town: the Gateway API—and it's here to stay.&lt;br&gt;
In this post, we’ll explore Kubernetes’ latest external access mechanism and show how it integrates with Cert-Manager to handle certificates. But before diving into this new world, let’s briefly review the most common ways to expose services in Kubernetes.&lt;/p&gt;
&lt;h2&gt;
  
  
  Current Alternatives
&lt;/h2&gt;
&lt;h3&gt;
  
  
  NodePort: Exposing a Service on a Node’s Port
&lt;/h3&gt;

&lt;p&gt;A NodePort service is the most basic way to expose a service externally. Kubernetes opens a specific port on each node, and any traffic to a node’s IP on that port is forwarded to the service inside the cluster. Simple, but with notable limitations: the port must come from a restricted range (30000-32767 by default), clients need to know individual node IPs, and nothing balances traffic across nodes for you.&lt;/p&gt;
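&lt;p&gt;As a minimal sketch (the service name, selector, and ports are illustrative), a NodePort service looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
    - port: 80          # port exposed inside the cluster
      targetPort: 8080  # port the Pods listen on
      nodePort: 30080   # port opened on every node (default range 30000-32767)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;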
&lt;h3&gt;
  
  
  LoadBalancer: External Load Balancers for Services
&lt;/h3&gt;

&lt;p&gt;A LoadBalancer service goes a step further by integrating with external load balancing infrastructure. When you create a Service of type LoadBalancer, Kubernetes asks the cloud provider to provision an external load balancer (e.g., AWS ELB, GCP Load Balancer, Azure Load Balancer). The service receives an external IP or hostname that forwards traffic to the backing Pods.&lt;br&gt;
This works well in managed cloud environments, but each Service gets its own load balancer, which becomes expensive at scale, and it doesn’t work out of the box for bare-metal or on-prem clusters unless you add a software load balancer like MetalLB.&lt;/p&gt;
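&lt;p&gt;The manifest is nearly identical to a NodePort service (again, names and ports are illustrative); only the type changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
spec:
  type: LoadBalancer   # asks the cloud provider for an external load balancer
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;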
&lt;h3&gt;
  
  
  Ingress/Routes: Layer-7 Routing for HTTP/S
&lt;/h3&gt;

&lt;p&gt;Ingress (or Route in OpenShift) is a separate API object used for external access at Layer 7 (HTTP/S). With Ingress, you define routing rules—for example, "send requests for api.example.com to Service A, and &lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt; to Service B."&lt;br&gt;
An Ingress Controller implements these rules, typically using a proxy or cloud load balancer. However, Ingress only supports HTTP(S) and lacks native support for TCP or other protocols.&lt;/p&gt;
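&lt;p&gt;A minimal Ingress implementing the rules above could look like the following sketch (the class and service names are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx   # which Ingress Controller handles this object
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-a
                port:
                  number: 80
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-b
                port:
                  number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;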
&lt;h2&gt;
  
  
  From Ingress to Gateway API: Why a New API?
&lt;/h2&gt;

&lt;p&gt;The Gateway API is an evolving standard—now an official Kubernetes SIG project—designed to address the limitations of Ingress and support more advanced traffic management. If you're unfamiliar with the term “SIG”, it stands for Special Interest Group: volunteer teams that manage and maintain specific areas of the Kubernetes ecosystem (see the project home &lt;a href="https://github.com/kubernetes-sigs" rel="noopener noreferrer"&gt;here&lt;/a&gt;). Think of the Gateway API as a modern urban plan for our Kubernetes city — with zoning laws, dedicated roads, and clearly defined roles.&lt;/p&gt;

&lt;p&gt;Unlike Ingress, Gateway API is more than a single traffic cop signaling HTTP cars through an intersection. It is an expressway system that speaks multiple protocols—HTTP, HTTPS, TCP, TLS, even UDP—while offering built-in traffic tricks such as path and header rewrites, query-parameter matching, and weighted routing for canary releases, all without vendor-specific annotations. Crucially, it separates concerns: platform teams lay down GatewayClasses and Gateways (the asphalt), while application teams own their Routes (the traffic signs). Because the spec is vendor-neutral, the very same YAML can drive Istio, NGINX, or Cilium, meaning your road network stays portable as your underlying architecture evolves.&lt;/p&gt;

&lt;p&gt;Three of the most important resources introduced by the Gateway API are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GatewayClass – urban planners&lt;/li&gt;
&lt;li&gt;Gateways – the roads and entry points&lt;/li&gt;
&lt;li&gt;Routes – detailed traffic control rules&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are implemented as Kubernetes Custom Resource Definitions (CRDs), providing flexibility, scalability, and separation of concerns. Let's dive into each of them.&lt;/p&gt;
&lt;h3&gt;
  
  
  GatewayClass
&lt;/h3&gt;

&lt;p&gt;A GatewayClass defines a category of Gateways and is managed cluster-wide. It encapsulates the controller responsible for implementing the associated gateways.&lt;br&gt;
This is similar in concept to StorageClass in Kubernetes: administrators define them, and users reference them.&lt;br&gt;
Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GatewayClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;istio&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;controllerName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;istio.io/gateway-controller&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, any Gateway with &lt;code&gt;gatewayClassName: istio&lt;/code&gt; will be handled by the Istio controller. The same applies to other controllers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Gateway
&lt;/h3&gt;

&lt;p&gt;A Gateway is an instance of ingress infrastructure—an actual network entry point. It references a GatewayClass and defines one or more listeners, each specifying a protocol (HTTP, HTTPS, TCP), port, and optional hostname.&lt;br&gt;
Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Gateway&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-gateway&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;istio-ingress&lt;/span&gt;  &lt;span class="c1"&gt;# Gateways are namespaced&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;gatewayClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;istio&lt;/span&gt;   &lt;span class="c1"&gt;# must match a GatewayClass&lt;/span&gt;
  &lt;span class="na"&gt;listeners&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;            &lt;span class="c1"&gt;# an arbitrary name for the listener&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*.example.com"&lt;/span&gt;
      &lt;span class="na"&gt;allowedRoutes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;namespaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;All&lt;/span&gt;         &lt;span class="c1"&gt;# allow Routes from any namespace to bind (could be restricted in production)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Routes: The Core of Gateway API’s Flexibility
&lt;/h3&gt;

&lt;p&gt;Here’s where the Gateway API truly shines—dedicated, protocol-specific route types like HTTPRoute, TCPRoute, and TLSRoute. These give you granular control over traffic, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Matching by host, path, headers, or query parameters&lt;/li&gt;
&lt;li&gt;Routing to multiple services with weighted distribution (for canary or A/B testing)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of Routes as traffic signs guiding vehicles through the city. They keep everything flowing smoothly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTPRoute&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp-route&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hostnames&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;myapp.example.com"&lt;/span&gt;
  &lt;span class="na"&gt;parentRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-gateway&lt;/span&gt;
      &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;istio-ingress&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PathPrefix&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/"&lt;/span&gt;
      &lt;span class="na"&gt;backendRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp-service&lt;/span&gt;
          &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key fields of this HTTPRoute:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;parentRefs: Attach the route to a specific Gateway and listener&lt;/li&gt;
&lt;li&gt;hostnames: Match specific domains (e.g., myapp.example.com)&lt;/li&gt;
&lt;li&gt;rules: Define matching logic and forwarding behavior&lt;/li&gt;
&lt;li&gt;matches: Conditions like path prefix or headers&lt;/li&gt;
&lt;li&gt;backendRefs: Destination services, possibly weighted&lt;/li&gt;
&lt;/ul&gt;
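&lt;p&gt;As a sketch of the weighted distribution mentioned above (service names and weights are illustrative), the following rule sends roughly 90% of requests to the stable service and 10% to a canary:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;spec:
  rules:
    - backendRefs:
        - name: myapp-stable
          port: 80
          weight: 90   # ~90% of traffic
        - name: myapp-canary
          port: 80
          weight: 10   # ~10% of traffic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;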

&lt;p&gt;Earlier, we talked about one of the key advantages of the Gateway API over Ingress: native support for advanced traffic management—or what we called “traffic tricks.” A concrete example of this comes in the form of on-object filters. For instance, if you need to strip /v1 from all incoming URLs, here’s how you can do it using Gateway’s built-in URL rewrite filter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PathPrefix&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/v1&lt;/span&gt;
    &lt;span class="na"&gt;filters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;URLRewrite&lt;/span&gt;
      &lt;span class="na"&gt;urlRewrite&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;replacePrefixMatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
    &lt;span class="na"&gt;backendRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp-service&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No annotations, no sidecars—just one CRD doing the job.&lt;/p&gt;

&lt;h3&gt;
  
  
  Gateway Controllers and Implementations
&lt;/h3&gt;

&lt;p&gt;Gateway API resources (like Gateway, HTTPRoute, and GatewayClass) are declarative. To actually route traffic, you need a Gateway Controller—a component that reads these resources and configures the data plane (Envoy, NGINX, etc.).&lt;br&gt;
Supported controllers include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service meshes like Istio and Linkerd (Istio is moving to Gateway API by default)&lt;/li&gt;
&lt;li&gt;Ingress controllers such as Contour, NGINX, Traefik, Kong (adding Gateway support)&lt;/li&gt;
&lt;li&gt;Cloud providers (GKE, AWS, Azure) that map Gateway resources to native load balancers&lt;/li&gt;
&lt;li&gt;Other projects like Cilium and Gloo offering enhanced networking&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Managing TLS Certificates with Gateway API
&lt;/h2&gt;

&lt;p&gt;Let’s be honest: rotating and renewing TLS certificates remains one of the most thankless tasks in day-to-day ops. Exposing HTTP apps to the internet requires HTTPS, and thus, TLS certificates. In the Ingress world, cert-manager automates this via annotations and certificate blocks. Fortunately, Gateway API is supported too.&lt;br&gt;
To automate TLS certificate issuance with cert-manager and Gateway API:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ensure cert-manager and Gateway API CRDs are installed&lt;/li&gt;
&lt;li&gt;Configure a ClusterIssuer or Issuer (e.g., Let’s Encrypt)&lt;/li&gt;
&lt;li&gt;Annotate your Gateway to trigger cert-manager
Example:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Gateway&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-gateway&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;istio-ingress&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cert-manager.io/issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;letsencrypt-prod"&lt;/span&gt;  &lt;span class="c1"&gt;# Reference to a cert-manager Issuer or ClusterIssuer&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;gatewayClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;istio&lt;/span&gt;
  &lt;span class="na"&gt;listeners&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTPS&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt;
      &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;myapp.example.com"&lt;/span&gt;
      &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terminate&lt;/span&gt;
        &lt;span class="na"&gt;certificateRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp-example-com-tls&lt;/span&gt;  &lt;span class="c1"&gt;# Secret that will hold the TLS cert and key&lt;/span&gt;
      &lt;span class="na"&gt;allowedRoutes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;namespaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;All&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
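&lt;p&gt;For completeness, the referenced issuer could look like the following sketch (the email and secret names are placeholders; note that, depending on the version, cert-manager's Gateway API support may need to be enabled explicitly via a feature gate):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
  namespace: istio-ingress
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-account-key   # ACME account key, not the TLS certificate
    solvers:
      - http01:
          gatewayHTTPRoute:           # solve HTTP-01 challenges via the Gateway
            parentRefs:
              - name: example-gateway
                namespace: istio-ingress
                kind: Gateway
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;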



&lt;p&gt;Cert-manager will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Detect the need for a certificate&lt;/li&gt;
&lt;li&gt;Create a Certificate resource behind the scenes&lt;/li&gt;
&lt;li&gt;Complete the ACME challenge (if using Let’s Encrypt)&lt;/li&gt;
&lt;li&gt;Store the key and cert in the referenced Secret&lt;/li&gt;
&lt;li&gt;Keep the certificate renewed automatically&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This works much like how cert-manager integrates with Ingress—only now, it's applied to Gateway. This seamless integration with cert-manager is key to automating secrets management for external access and eliminates the need for hacky workarounds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Kubernetes offers several ways to expose services—from the basic NodePort to cloud-managed LoadBalancer and flexible Ingress. The Gateway API builds on those lessons and provides a modern, scalable way to manage external access with better separation of infrastructure and application concerns.&lt;br&gt;
We explored how GatewayClass, Gateway, and Routes work together, and how Gateway controllers like Istio implement them. On the secrets side, tools like cert-manager integrate seamlessly to automate HTTPS. For broader secrets management, tools like HashiCorp Vault are an excellent addition — perhaps a topic for another blog. 😊&lt;br&gt;
Kubernetes has evolved from a small village into a thriving metropolis. And just like any modern city, it needs robust, scalable infrastructure to manage how traffic flows and how identities are verified at its gates. The Gateway API is that next-generation road system—built with urban planning in mind. The pavement is poured; now it’s time to open the on-ramps and let your applications cruise.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Tortoise and the Hare: do AI Agents Really Help for Software Development?</title>
      <dc:creator>Jakob Beckmann</dc:creator>
      <pubDate>Wed, 23 Apr 2025 05:37:56 +0000</pubDate>
      <link>https://forem.com/ipt/the-tortoise-and-the-hare-do-ai-agents-really-help-for-software-development-3fo4</link>
      <guid>https://forem.com/ipt/the-tortoise-and-the-hare-do-ai-agents-really-help-for-software-development-3fo4</guid>
      <description>&lt;p&gt;Making my development workflow as fast as possible is a big passion of mine. From customizing my development setup to get the last inkling of efficiency out of it, to thinking how to manage notes and knowledge resources to access them as quickly as possible. With the sudden ubiquity of AI in development tools, I came to wonder how AI could help me write code faster. Being quite the skeptic when it comes to AI actually generating code for me (using tools such as Cursor or GitHub Copilot), I came to investigate AI agents which specialise in code reviews. In this blog post I will share my experience using such an agent on a real world case. I will explore where such agents shine and where they are severely lacking.&lt;/p&gt;

&lt;h1&gt;
  
  
  I am an AI Skeptic
&lt;/h1&gt;

&lt;p&gt;Generally I am not fond of using AI to develop software. My background is mostly in systems software, where correctness of the software can be critical. This means that using tooling that is non-deterministic and might not produce adequate results makes me uneasy. Furthermore, even if AI were to produce amazing results, a developer relying on it could quickly lose understanding of the code. This results in skill atrophy and large risks if the AI reaches the limits of its capabilities. In other words, I am not keen on having any AI generating code for me on a large scale for anything more than a proof of concept or low risk project.&lt;/p&gt;

&lt;p&gt;Nonetheless, one would be foolish to ignore AI's capabilities when it comes to developer tooling.&lt;/p&gt;

&lt;h1&gt;
  
  
  AI Support Agents
&lt;/h1&gt;

&lt;p&gt;Thus starts my journey investigating AI agents that can support me in the software development lifecycle, but whose main use is &lt;em&gt;not&lt;/em&gt; to generate code. Many such agents exist, mostly focusing on reviewing code. I am quite the fan of such a use case, as the AI essentially plays the role of another developer I might work with. It reviews my code, provides feedback, suggestions, and potentially even improvements. It does this, however, immediately after I open a pull request, rather than making me wait days or weeks for a human review.&lt;/p&gt;

&lt;p&gt;How is this different from using an AI that generates code, you might ask? The main difference lies in the fact that I still have to think about how to solve the problem I am working on, and provide a base solution. This forces me to understand the issue at hand. Thus, I am much better prepared to accept or reject any suggestions from an AI than if the AI just generated a first solution for me. Moreover, people (myself included) tend to be slightly defensive about the code they write. Thus I will, in all likelihood, only accept AI-generated code improvements if they offer a real improvement, rather than blindly incorporating them into the codebase.&lt;/p&gt;

&lt;p&gt;All in all, it is extremely unlikely that I will lose understanding of the codebase or have my problem solving skills atrophy, but I can iterate on reviews much faster.&lt;/p&gt;

&lt;h1&gt;
  
  
  CodeRabbitAI
&lt;/h1&gt;

&lt;p&gt;In order to gain first experiences with such an AI agent, I chose to try out &lt;a href="https://www.coderabbit.ai/" rel="noopener noreferrer"&gt;CodeRabbitAI&lt;/a&gt;. This was not a thoroughly researched decision. The main reasons I chose CodeRabbitAI are that I could try it out for free for 14 days and that it integrates well with GitHub. I am aware that performance between AI models varies greatly. However, CodeRabbitAI uses Claude under the hood, a model typically known to perform surprisingly well on programming tasks. I thus expect it not to perform significantly worse than any other state-of-the-art model out there.&lt;/p&gt;

&lt;h1&gt;
  
  
  Starting Small
&lt;/h1&gt;

&lt;p&gt;In my opinion, such agents need to be tested on real world examples. One can see demos using AI to generate a dummy web app all over the place. However, common software projects are significantly larger, contain more complex logic, and are less standardized than these demos. Unfortunately, most software I work on professionally is not publicly available, so I cannot use CodeRabbitAI on these. I therefore picked two (still very small) personal projects of mine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;a href="https://github.com/f4z3r/gruvbox-material.nvim" rel="noopener noreferrer"&gt;NeoVim plugin&lt;/a&gt; providing a colour scheme.&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://github.com/f4z3r/sofa" rel="noopener noreferrer"&gt;command execution engine&lt;/a&gt; to run templated commands.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both projects are extremely small, with under two thousand lines of code. Both are written in Lua, a quite uncommon language; I wanted to see how the AI fares against something it is unlikely to have seen much of during its training.&lt;/p&gt;

&lt;p&gt;With that in mind, I wrote a &lt;a href="https://github.com/f4z3r/gruvbox-material.nvim/pull/40" rel="noopener noreferrer"&gt;first pull request&lt;/a&gt; implementing a fix in highlight groups for pop-up menus in NeoVim. I enabled CodeRabbitAI to summarize the PR for me.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53fewpgxrfwwxvqln031.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53fewpgxrfwwxvqln031.png" alt="Summary provided by CodeRabbitAI on my first PR"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The summary looks good, even though it somehow marks some fixes as features. This is especially intriguing as I use &lt;a href="https://www.conventionalcommits.org/en/v1.0.0/" rel="noopener noreferrer"&gt;conventional commits&lt;/a&gt; and explicitly marked these changes as fixes. Additionally, CodeRabbitAI offers a "walkthrough" of the changes made in the PR. In the case of such a simple PR, I found the walkthrough to be mostly confusing. In the case of larger PRs I can however see how this may be appealing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1i5blq1cnvnuqus2w729.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1i5blq1cnvnuqus2w729.png" alt="A walkthrough of the changes in the first PR"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In reality, I initially opened the PR with only the fixes for the pop-up menus. I then pushed commits introducing the support for additional plugins later on. I would have expected CodeRabbitAI to complain that the new commits introduce changes unrelated to the PR, which is not considered best practice. It did nothing of the sort.&lt;/p&gt;

&lt;p&gt;While the summary, walkthrough, and disregard for best practices were unsatisfying, one unexpected benefit emerged: the integration of linting feedback directly within the pull request comments. It provided nitpicks from linting tools (in this case &lt;a href="https://github.com/DavidAnson/markdownlint" rel="noopener noreferrer"&gt;&lt;code&gt;markdownlint&lt;/code&gt;&lt;/a&gt;). On one hand, it is very disappointing to see that the AI agent did nothing more than lint the code and generate a nice comment out of the output. On the other hand, it is quite nice that it introduces "quality gates" such as linting without me having to write a pipeline for it. Moreover, producing easily digestible output from a linter is not to be underestimated. Having this directly as a comment, rather than having to dig through pipeline logs to read the raw linter output, is a real quality-of-life gain. Is it worth two dozen USD per month? No, definitely not!&lt;/p&gt;

&lt;p&gt;On the upside, it did update the summary of the PR to reflect the other changes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jmypo9fwel98n9x97qp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jmypo9fwel98n9x97qp.png" alt="Updated summary provided by CodeRabbitAI on my first PR"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first PR was extremely trivial. It did not introduce any code containing logic. Other than not pointing out that it should probably have been two separate PRs, CodeRabbitAI reviewed the PR much as I would have expected another developer to. With two small differences:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The CodeRabbitAI review was close to &lt;strong&gt;immediate&lt;/strong&gt; (it took around 30-60 seconds to run). This is amazing for iterating quickly.&lt;/li&gt;
&lt;li&gt;Where I would have expected a human reviewer to point out the nitpick or simply approve, CodeRabbitAI is extremely &lt;strong&gt;verbose&lt;/strong&gt; with explanations, walkthroughs, and so on. This in turn wastes the author's time, as they need to read through all of it. The verbosity might pay off on larger PRs, but for small, concise PRs this is massive overkill and borderline annoying.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To further evaluate CodeRabbitAI's capabilities, I decided to test it on a pull request with more substantial changes...&lt;/p&gt;

&lt;h1&gt;
  
  
  A More Complex PR
&lt;/h1&gt;

&lt;p&gt;Armed with dampened expectations from my first PR, I opened &lt;a href="https://github.com/f4z3r/sofa/pull/3" rel="noopener noreferrer"&gt;another PR&lt;/a&gt; in the command execution repository implementing a feature affecting multiple files. These changes also update existing logic.&lt;/p&gt;

&lt;p&gt;In this second PR, CodeRabbitAI went above and beyond, and generated a walkthrough containing two sequence diagrams showcasing the control flow of the code that was modified! I was actually quite impressed by this. While probably not necessary for the author of a PR, this is great if only for documentation purposes. New team members with less experience may benefit from such visual aids to understand complex logic within the code. Unfortunately, the diagrams didn't highlight the &lt;em&gt;specific modifications&lt;/em&gt; introduced by the pull request.&lt;/p&gt;

&lt;p&gt;However, the supporting text suddenly becomes more relevant when considering such PRs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvx0eypb4e1v5syzg0gre.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvx0eypb4e1v5syzg0gre.png" alt="One of the sequence diagrams generated by CodeRabbitAI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On top of that, CodeRabbitAI actually posted interesting comments. It found the odd nitpick here and there, but also surfaced more meaningful potential issues. For instance, I modified a test configuration to use a different shell. CodeRabbitAI identified that this shell is not listed as a dependency anywhere in the repository and that the change would thus not work off the shelf. In this case it was only a test file used to parse the configuration, and the configured shell did not affect anything, but in general this is a great finding.&lt;/p&gt;

&lt;p&gt;I also started conversing with CodeRabbitAI about some changes, requesting suggestions on some configurations. It managed just fine, but provided them as code blocks in comments rather than as code suggestions that can be applied directly, which was a bit disappointing.&lt;/p&gt;

&lt;p&gt;Additionally, I decided to try CodeRabbitAI's commands feature, which enables controlling its actions via ChatOps. I generated the PR title using one such command. The title turned out generic and not very informative, though in CodeRabbitAI's defense, I am quite unsure how I would have named that PR myself.&lt;/p&gt;

&lt;p&gt;I then tried to get it to write docstrings for the new functions introduced in the PR. It massively misunderstood the request and created &lt;a href="https://github.com/f4z3r/sofa/pull/4" rel="noopener noreferrer"&gt;a PR adding docstrings to all functions&lt;/a&gt; in the affected files, even ones that already had docstrings... This goes to show that in some cases it cannot even do what the most junior of engineers could manage with even a tiny dose of common sense. Moreover, it started adding commits with emojis in the title, which suggests that these AIs are probably not trained much on professional projects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kcqsm6xjkflyk5xpunq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kcqsm6xjkflyk5xpunq.png" alt="CodeRabbitAI not only breaking conventional commits but introducing emojis..."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that first disaster, with significantly less ambition, I requested that it create a PR to fix a small typo. CodeRabbitAI informed me that it had created a branch with the changes included, but that it was not capable of creating pull requests. This shocked me, considering it had created its first disaster PR not 10 minutes before.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwbeqdfikxqu0c5xndu3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwbeqdfikxqu0c5xndu3.png" alt="Fighting with CodeRabbitAI to fix my typo."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After another nudge, however, CodeRabbitAI did &lt;a href="https://github.com/f4z3r/sofa/pull/5" rel="noopener noreferrer"&gt;create a PR&lt;/a&gt;. It targeted &lt;code&gt;main&lt;/code&gt; instead of the branch I was initially working on, though I guess that is my own fault for not being specific enough.&lt;/p&gt;

&lt;p&gt;Finally, I also tried to get it to reword one of its commits to follow conventional commits. Unfortunately, it seems that it only has access to the GitHub API and cannot execute any local &lt;code&gt;git&lt;/code&gt; commands. It is therefore unable to perform some relatively common operations in the SDLC that are not part of the GitHub API. However, I am guessing this is subject to change relatively soon with the emergence of technologies such as the &lt;a href="https://modelcontextprotocol.io/introduction" rel="noopener noreferrer"&gt;model context protocol&lt;/a&gt;, which would enable it to control external tools such as &lt;code&gt;git&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;All in all, I would say CodeRabbitAI did as I expected after the first PR. It corrected nitpicks and allowed me to perform some simple actions. Did it deliver a review of the same quality as a senior engineer familiar with the project would have? No. In fact, to test this, I intentionally implemented a feature that was already present in the repository, while making a couple of design decisions that go against most of what the rest of the repository does. CodeRabbitAI neither detected that the logic I was introducing already existed in the codebase, nor did it complain about the sub-optimal design decisions. This shows that such agents are still not capable of replacing humans with a nuanced understanding of the project's history and architectural principles, and that relying on them alone may lead to redundant or suboptimal solutions.&lt;/p&gt;

&lt;h1&gt;
  
  
  Dashboards!
&lt;/h1&gt;

&lt;p&gt;Beyond reviews, such AI agents also ship with analytics capabilities. In my personal opinion, analytics are important to measure the impact that introducing such tooling has on software delivery. CodeRabbitAI provides a couple of nice dashboards on how much it is being used and what kind of errors it helped uncover.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F855vwq57tx742mawk65e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F855vwq57tx742mawk65e.png" alt="Activity dashboard showing engagment with CodeRabbitAI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flx3jz0kgur7y6iksl0yq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flx3jz0kgur7y6iksl0yq.png" alt="Dashboard showing overall adoption of CodeRabbitAI on the projects"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lay5x98wr9gyulpf91g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lay5x98wr9gyulpf91g.png" alt="Findings dashboard showing errors and suggestions by type"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I did not try out CodeRabbitAI for long enough to have any meaningful metrics, but I am confident that the capabilities provided are enough to get a decent understanding of the quality of adoption.&lt;/p&gt;

&lt;p&gt;Moreover, CodeRabbitAI supports reporting. This allows generating reports from natural language prompts, which could be useful for product owners to gain insights into the changes made to the software over the course of a sprint.&lt;/p&gt;

&lt;h1&gt;
  
  
  Verdict
&lt;/h1&gt;

&lt;p&gt;While this whole article might read like a slight rant against such tools, I do in fact wish I could use them at work. Not as a replacement for human reviewers, but as an addition to them. For instance, the quite verbose walkthroughs CodeRabbitAI provides can be a very helpful entry point for a human reviewer on larger PRs. Moreover, while the quality of the review is insufficient for projects where quality matters, having near-instant feedback is amazing.&lt;/p&gt;

&lt;p&gt;Finally, as mentioned above, I believe one major selling point of such agents is the way we humans interact with them. Even if an agent does little more than execute linters or similar tools in the background, having their output in natural language directly as comments in the PR is not to be underestimated. This is especially true in an age where more and more responsibility is being shifted to developers. With DevSecOps, developers have to understand and act upon the output of all kinds of tools. Presenting this output in a more understandable format, potentially enriched with explanations, can have a significant impact.&lt;/p&gt;

&lt;p&gt;Therefore, as a final word, I would actually encourage people to explore such agents to augment their workflow &lt;strong&gt;safely&lt;/strong&gt;, albeit with caution and a clear understanding of their &lt;strong&gt;limitations&lt;/strong&gt;.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/f4z3r" rel="noopener noreferrer"&gt;
        f4z3r
      &lt;/a&gt; / &lt;a href="https://github.com/f4z3r/gruvbox-material.nvim" rel="noopener noreferrer"&gt;
        gruvbox-material.nvim
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Material Gruvbox colorscheme for Neovim written in Lua
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div&gt;
&lt;p&gt;&lt;a href="https://github.com/f4z3r/gruvbox-material.nvim/archive/master.zip" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Ff4z3r%2Fgruvbox-material.nvim%2F.%2Fassets%2Flogo.png" alt="Gruvbox Material" width="25%"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Gruvbox Material&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/01033da302ce50a77b87423bdc412176b379a892529b9971db88153719c23fd4/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f636f6e7472696275746f72732d616e6f6e2f66347a33722f67727576626f782d6d6174657269616c2e6e76696d"&gt;&lt;img src="https://camo.githubusercontent.com/01033da302ce50a77b87423bdc412176b379a892529b9971db88153719c23fd4/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f636f6e7472696275746f72732d616e6f6e2f66347a33722f67727576626f782d6d6174657269616c2e6e76696d" alt="GitHub contributors"&gt;&lt;/a&gt;
&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/a19974e4ca72dabfea4e7782cf9b91efc495c71ad9b4d00890f76e2b28cd6cfe/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f6c6173742d636f6d6d69742f66347a33722f67727576626f782d6d6174657269616c2e6e76696d"&gt;&lt;img src="https://camo.githubusercontent.com/a19974e4ca72dabfea4e7782cf9b91efc495c71ad9b4d00890f76e2b28cd6cfe/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f6c6173742d636f6d6d69742f66347a33722f67727576626f782d6d6174657269616c2e6e76696d" alt="GitHub last commit"&gt;&lt;/a&gt;
&lt;a href="https://repology.org/project/vim%3Agruvbox-material.nvim/versions" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/e3fe60bcdb4a06eff8f6d203c67e4bd018cc7834c2e73bc75f6f1326c5434e05/68747470733a2f2f7265706f6c6f67792e6f72672f62616467652f76657273696f6e2d666f722d7265706f2f6e69785f737461626c655f32355f30352f76696d25334167727576626f782d6d6174657269616c2e6e76696d2e737667" alt="nixpkgs stable 25.05 package"&gt;&lt;/a&gt;
&lt;a href="https://repology.org/project/vim%3Agruvbox-material.nvim/versions" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/6ba0ae57d5825b36c63ac82e6c84b146e3b921f3004db6bdc7e14882b1a106ca/68747470733a2f2f7265706f6c6f67792e6f72672f62616467652f76657273696f6e2d666f722d7265706f2f6e69785f756e737461626c652f76696d25334167727576626f782d6d6174657269616c2e6e76696d2e737667" alt="nixpkgs unstable package"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;A NeoVim colour scheme in pure Lua allowing for highly flexible configuration and customization.&lt;/h3&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="https://github.com/f4z3r/gruvbox-material.nvim#features" rel="noopener noreferrer"&gt;Features&lt;/a&gt; |
&lt;a href="https://github.com/f4z3r/gruvbox-material.nvim#installation" rel="noopener noreferrer"&gt;Installation&lt;/a&gt; |
&lt;a href="https://github.com/f4z3r/gruvbox-material.nvim#usage-and-configuration" rel="noopener noreferrer"&gt;Usage and Configuration&lt;/a&gt; |
&lt;a href="https://github.com/f4z3r/gruvbox-material.nvim/./docs/api.md" rel="noopener noreferrer"&gt;API Reference&lt;/a&gt;&lt;/p&gt;

&lt;/div&gt;
&lt;div class="markdown-alert markdown-alert-note"&gt;
&lt;p class="markdown-alert-title"&gt;Note&lt;/p&gt;
&lt;p&gt;This is a continuation of the original work from WittyJudge
&lt;a href="https://github.com/WIttyJudge/gruvbox-material.nvim" rel="noopener noreferrer"&gt;https://github.com/WIttyJudge/gruvbox-material.nvim&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;A port of &lt;a href="https://github.com/sainnhe/gruvbox-material" rel="noopener noreferrer"&gt;gruvbox-material&lt;/a&gt; colorscheme for Neovim
written in Lua. It does not aim to be 100% compatible with the mentioned repository, but rather
focuses on keeping the existing scheme stable and to support popular plugins. This colorscheme
supports both &lt;code&gt;dark&lt;/code&gt; and &lt;code&gt;light&lt;/code&gt; themes, based on configured background, and harder or softer
contrasts.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Dark theme:&lt;/strong&gt;
&lt;a rel="noopener noreferrer" href="https://github.com/f4z3r/gruvbox-material.nvim/./assets/dark-medium.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Ff4z3r%2Fgruvbox-material.nvim%2F.%2Fassets%2Fdark-medium.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Light theme:&lt;/strong&gt;
&lt;a rel="noopener noreferrer" href="https://github.com/f4z3r/gruvbox-material.nvim/./assets/light-medium.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Ff4z3r%2Fgruvbox-material.nvim%2F.%2Fassets%2Flight-medium.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

    Different contrasts
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Contrast&lt;/th&gt;
&lt;th&gt;Dark&lt;/th&gt;
&lt;th&gt;Light&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Hard&lt;/td&gt;
&lt;td&gt;&lt;a rel="noopener noreferrer" href="https://github.com/f4z3r/gruvbox-material.nvim/./assets/dark-hard.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Ff4z3r%2Fgruvbox-material.nvim%2F.%2Fassets%2Fdark-hard.png" alt=""&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a rel="noopener noreferrer" href="https://github.com/f4z3r/gruvbox-material.nvim/./assets/light-hard.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Ff4z3r%2Fgruvbox-material.nvim%2F.%2Fassets%2Flight-hard.png" alt=""&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;&lt;a rel="noopener noreferrer" href="https://github.com/f4z3r/gruvbox-material.nvim/./assets/dark-medium.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Ff4z3r%2Fgruvbox-material.nvim%2F.%2Fassets%2Fdark-medium.png" alt=""&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a rel="noopener noreferrer" href="https://github.com/f4z3r/gruvbox-material.nvim/./assets/light-medium.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Ff4z3r%2Fgruvbox-material.nvim%2F.%2Fassets%2Flight-medium.png" alt=""&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Soft&lt;/td&gt;
&lt;td&gt;&lt;a rel="noopener noreferrer" href="https://github.com/f4z3r/gruvbox-material.nvim/./assets/dark-soft.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Ff4z3r%2Fgruvbox-material.nvim%2F.%2Fassets%2Fdark-soft.png" alt=""&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a rel="noopener noreferrer" href="https://github.com/f4z3r/gruvbox-material.nvim/./assets/light-soft.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Ff4z3r%2Fgruvbox-material.nvim%2F.%2Fassets%2Flight-soft.png" alt=""&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;


&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Features&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Supported Plugins:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/nvim-treesitter/nvim-treesitter" rel="noopener noreferrer"&gt;Treesitter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/nvim-telescope/telescope.nvim" rel="noopener noreferrer"&gt;Telescope&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://neovim.io/doc/user/lsp.html" rel="nofollow noopener noreferrer"&gt;LSP Diagnostics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kyazdani42/nvim-tree.lua" rel="noopener noreferrer"&gt;Nvim Tree&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/preservim/nerdtree" rel="noopener noreferrer"&gt;NERDTree&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mhinz/vim-startify" rel="noopener noreferrer"&gt;Startify&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/airblade/vim-gitgutter" rel="noopener noreferrer"&gt;vim-gitgutter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mbbill/undotree" rel="noopener noreferrer"&gt;undotree&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/liuchengxu/vista.vim" rel="noopener noreferrer"&gt;Vista.vim&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/phaazon/hop.nvim" rel="noopener noreferrer"&gt;Hop&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/liuchengxu/vim-which-key" rel="noopener noreferrer"&gt;WhichKey&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Yggdroot/indentLine" rel="noopener noreferrer"&gt;indentLine&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/lukas-reineke/indent-blankline.nvim" rel="noopener noreferrer"&gt;Indent Blankline&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/rcarriga/nvim-notify" rel="noopener noreferrer"&gt;nvim-notify&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/RRethy/vim-illuminate" rel="noopener noreferrer"&gt;vim-illuminate&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/hrsh7th/nvim-cmp" rel="noopener noreferrer"&gt;nvim-cmp&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/nvim-neorg/neorg" rel="noopener noreferrer"&gt;neorg&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/lukas-reineke/headlines.nvim/" rel="noopener noreferrer"&gt;headlines.nvim&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/nvim-lualine/lualine.nvim/tree/master" rel="noopener noreferrer"&gt;lualine&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;and many more ...&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Please feel free to open an issue if you want some features or other plugins to be included.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;…&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/f4z3r/gruvbox-material.nvim" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;



&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/f4z3r" rel="noopener noreferrer"&gt;
        f4z3r
      &lt;/a&gt; / &lt;a href="https://github.com/f4z3r/sofa" rel="noopener noreferrer"&gt;
        sofa
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A command execution engine powered by rofi.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div&gt;
&lt;a rel="noopener noreferrer" href="https://github.com/f4z3r/sofa/./assets/logo.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Ff4z3r%2Fsofa%2F.%2Fassets%2Flogo.png" alt="Sofa" width="35%"&gt;&lt;/a&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Sofa&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/53bf0c03371f1fadda6fe1189a8aa602a39aff60609ed2572c7c4cb35c410169/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f6c6963656e73652f66347a33722f736f66613f6c696e6b3d68747470732533412532462532466769746875622e636f6d25324666347a3372253246736f6661253246626c6f622532466d61696e2532464c4943454e5345"&gt;&lt;img src="https://camo.githubusercontent.com/53bf0c03371f1fadda6fe1189a8aa602a39aff60609ed2572c7c4cb35c410169/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f6c6963656e73652f66347a33722f736f66613f6c696e6b3d68747470732533412532462532466769746875622e636f6d25324666347a3372253246736f6661253246626c6f622532466d61696e2532464c4943454e5345" alt="GitHub License"&gt;&lt;/a&gt;
&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/61c8296308ca8f5d73cb187c82008aa09dc5ac3edb5fa8c42d45187e2b974ea1/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f762f72656c656173652f66347a33722f736f66613f6c6f676f3d676974687562266c696e6b3d68747470732533412532462532466769746875622e636f6d25324666347a3372253246736f666125324672656c6561736573"&gt;&lt;img src="https://camo.githubusercontent.com/61c8296308ca8f5d73cb187c82008aa09dc5ac3edb5fa8c42d45187e2b974ea1/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f762f72656c656173652f66347a33722f736f66613f6c6f676f3d676974687562266c696e6b3d68747470732533412532462532466769746875622e636f6d25324666347a3372253246736f666125324672656c6561736573" alt="GitHub Release"&gt;&lt;/a&gt;
&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/8cf94ab4ef3a06a8418879f24d914bbf59f0f2af92d9ff45be926cb4ff4ef543/68747470733a2f2f696d672e736869656c64732e696f2f6c7561726f636b732f762f66347a33722f736f66613f6c6f676f3d6c7561266c696e6b3d68747470732533412532462532466c7561726f636b732e6f72672532466d6f64756c657325324666347a3372253246736f6661"&gt;&lt;img src="https://camo.githubusercontent.com/8cf94ab4ef3a06a8418879f24d914bbf59f0f2af92d9ff45be926cb4ff4ef543/68747470733a2f2f696d672e736869656c64732e696f2f6c7561726f636b732f762f66347a33722f736f66613f6c6f676f3d6c7561266c696e6b3d68747470732533412532462532466c7561726f636b732e6f72672532466d6f64756c657325324666347a3372253246736f6661" alt="LuaRocks"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;A command execution engine powered by &lt;a href="https://github.com/davatorium/rofi" rel="noopener noreferrer"&gt;&lt;code&gt;rofi&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://github.com/junegunn/fzf" rel="noopener noreferrer"&gt;&lt;code&gt;fzf&lt;/code&gt;&lt;/a&gt;.&lt;/h3&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="https://github.com/f4z3r/sofa#about" rel="noopener noreferrer"&gt;About&lt;/a&gt; |
&lt;a href="https://github.com/f4z3r/sofa#examples" rel="noopener noreferrer"&gt;Examples&lt;/a&gt; |
&lt;a href="https://github.com/f4z3r/sofa#installation" rel="noopener noreferrer"&gt;Installation&lt;/a&gt; |
&lt;a href="https://github.com/f4z3r/sofa#integration" rel="noopener noreferrer"&gt;Integration&lt;/a&gt; |
&lt;a href="https://github.com/f4z3r/sofa#configuration" rel="noopener noreferrer"&gt;Configuration&lt;/a&gt; |
&lt;a href="https://github.com/f4z3r/sofa#development" rel="noopener noreferrer"&gt;Development&lt;/a&gt; |
&lt;a href="https://github.com/f4z3r/sofa#roadmap" rel="noopener noreferrer"&gt;Roadmap&lt;/a&gt; |
&lt;a href="https://github.com/f4z3r/sofa#license" rel="noopener noreferrer"&gt;License&lt;/a&gt;&lt;/p&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;About&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;&lt;code&gt;sofa&lt;/code&gt; is a small utility to enable easy execution of templated commands. It can be used to store
snippets that you often rely on, or fully template complex commands. It is meant to be used with a
shortcut manager to enable launching from anywhere, but can also inject commands into your current
shell session for commands that make more sense to run there (see &lt;a href="https://github.com/f4z3r/sofa#integration" rel="noopener noreferrer"&gt;Integration&lt;/a&gt;).&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Examples&lt;/h2&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;For Snippets Management&lt;/h3&gt;

&lt;/div&gt;
&lt;p&gt;You can use &lt;code&gt;sofa&lt;/code&gt; for standard snippets management. Use the &lt;a href="https://github.com/f4z3r/sofa#integration" rel="noopener noreferrer"&gt;integration&lt;/a&gt; described
below, and have configuration such as:&lt;/p&gt;

Configuration
&lt;div class="highlight highlight-source-yaml notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-ent"&gt;namespaces&lt;/span&gt;
  &lt;span class="pl-ent"&gt;lua&lt;/span&gt;:
    &lt;span class="pl-ent"&gt;commands&lt;/span&gt;:
      &lt;span class="pl-ent"&gt;install-local&lt;/span&gt;:
        &lt;span class="pl-ent"&gt;command&lt;/span&gt;: &lt;span class="pl-s"&gt;luarocks --local make --deps-mode {{ deps_mode }} {{ rockspec }}&lt;/span&gt;
        &lt;span class="pl-ent"&gt;description&lt;/span&gt;: &lt;span class="pl-s"&gt;Install rock locally&lt;/span&gt;
        &lt;span class="pl-ent"&gt;tags&lt;/span&gt;:
        - &lt;span class="pl-s"&gt;local&lt;/span&gt;
        - &lt;span class="pl-s"&gt;luarocks&lt;/span&gt;
        &lt;span class="pl-ent"&gt;interactive&lt;/span&gt;: &lt;span class="pl-c1"&gt;true&lt;/span&gt;&lt;/pre&gt;…
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/f4z3r/sofa" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


</description>
      <category>ai</category>
      <category>devops</category>
      <category>devex</category>
      <category>programming</category>
    </item>
    <item>
      <title>A Comprehensive Guide to Managing Large Scale Infrastructure with GitOps</title>
      <dc:creator>Jakob Beckmann</dc:creator>
      <pubDate>Tue, 08 Apr 2025 04:31:02 +0000</pubDate>
      <link>https://forem.com/ipt/a-comprehensive-guide-to-managing-large-scale-infrastructure-with-gitops-460c</link>
      <guid>https://forem.com/ipt/a-comprehensive-guide-to-managing-large-scale-infrastructure-with-gitops-460c</guid>
      <description>&lt;p&gt;GitOps is getting adopted more and more. However, there still seems to be some confusion as to what GitOps is, how it differs from regular CI/CD pipelines, and how to best adopt it. In this post we&lt;br&gt;
will quickly cover what GitOps is, and the three main lessons learned from using GitOps to manage infrastructure at scale both on premise and in the cloud.&lt;/p&gt;
&lt;h2&gt;
  
  
  GitOps Overview
&lt;/h2&gt;

&lt;p&gt;GitOps is a set of principles enabling the operation of a system via version controlled, declarative configuration. More specifically, the &lt;a href="https://opengitops.dev/" rel="noopener noreferrer"&gt;OpenGitOps&lt;/a&gt; project defines four principles which define whether a system or set of systems is managed via GitOps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Declarative: A system managed by GitOps must have its desired state expressed declaratively.&lt;/li&gt;
&lt;li&gt;Versioned and Immutable: Desired state is stored in a way that enforces immutability, versioning and retains a complete version history.&lt;/li&gt;
&lt;li&gt;Pulled Automatically: Software agents automatically pull the desired state declarations from the source.&lt;/li&gt;
&lt;li&gt;Continuously Reconciled: Software agents continuously observe actual system state and attempt to apply the desired state.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note that &lt;code&gt;git&lt;/code&gt; is not referenced anywhere, as GitOps is not bound to any tooling. However, in layman's terms, many consider any system operated via &lt;code&gt;git&lt;/code&gt; to be a GitOps system. This is not quite correct.&lt;/p&gt;
&lt;h2&gt;
  
  
  GitOps is More than CI/CD Pipelines
&lt;/h2&gt;

&lt;p&gt;Taking the "layman's definition" from above, any system that has CI/CD via pipelines triggered on repository changes would be a GitOps system. This is not accurate. Consider an IaC pipeline which applies declaratively defined infrastructure (such as a standard &lt;code&gt;opentofu apply&lt;/code&gt; in a pipeline, or a Docker build followed by a &lt;code&gt;kubectl apply&lt;/code&gt;). While such a system adheres to the first two principles, it does not adhere to the latter two. This implies that changes made to the target system are not corrected (reconciled) until the pipeline runs the next time. Similarly, if the pipeline fails for whatever reason, the desired state does not change the pipeline: a configuration drift is not detected, even if not reconciled.&lt;/p&gt;

&lt;p&gt;This is an important distinction when considering "standard CI/CD" and GitOps. Simply having something declared as code does not make it GitOps.&lt;/p&gt;
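&lt;p&gt;To make the distinction concrete, here is a minimal sketch of what a pull-based setup could look like with an ArgoCD &lt;code&gt;Application&lt;/code&gt; that satisfies the latter two principles: the controller itself pulls the desired state from the repository and continuously reconciles the cluster against it. The repository URL and path are placeholders.&lt;/p&gt;

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: infrastructure
  namespace: argocd
spec:
  project: default
  source:
    # placeholder repository holding the declarative desired state
    repoURL: https://example.com/org/infra.git
    targetRevision: main
    path: clusters/prod
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true     # delete resources removed from the desired state
      selfHeal: true  # revert manual changes made to the live system
```

&lt;p&gt;With &lt;code&gt;selfHeal&lt;/code&gt; enabled, a manual &lt;code&gt;kubectl edit&lt;/code&gt; against the cluster is detected as drift and reverted, which no pipeline-triggered &lt;code&gt;kubectl apply&lt;/code&gt; can do on its own.&lt;/p&gt;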
&lt;h2&gt;
  
  
  The Advantages of GitOps
&lt;/h2&gt;

&lt;p&gt;GitOps has many advantages over standard ways of managing systems. The benefits of having a declarative desired state, version controlling it, and interacting with the system only via &lt;code&gt;git&lt;/code&gt; (or whatever version control system you use) are tremendous, ranging from improved security and higher efficiency to better change visibility. These are well known to most people and will thus not be covered here.&lt;/p&gt;

&lt;p&gt;Drift detection and automatic reconciliation are the two other aspects that make GitOps absolutely amazing. This is especially true in the current day and age, with the proliferation of complex systems being worked on by many people concurrently. Being able to observe that the system is not in the desired state has massive advantages, such as for standard SRE operations. Continuous reconciliation ensures that manual operational tasks are kept to a minimum, and that systems cannot degrade over time as small undesired changes creep in.&lt;/p&gt;
&lt;h2&gt;
  
  
  Tooling
&lt;/h2&gt;

&lt;p&gt;In this post we will mostly focus on using GitOps to manage resources handled via the Kubernetes API, but it should be noted that GitOps as a concept is in no way restricted to Kubernetes. In the Kubernetes space there are two major players for GitOps: &lt;a href="https://argoproj.github.io/cd/" rel="noopener noreferrer"&gt;ArgoCD&lt;/a&gt; and &lt;a href="https://fluxcd.io/" rel="noopener noreferrer"&gt;FluxCD&lt;/a&gt;. We will not go into the details of each tool's advantages, other than saying that in our experience ArgoCD might be more developer focused, while FluxCD might suit platform engineers with more Kubernetes experience who want more flexibility.&lt;/p&gt;

&lt;p&gt;The rest of this post is tool agnostic and everything we are talking about can be done with either tool (but some aspects might be easier to do with one or the other).&lt;/p&gt;
&lt;h2&gt;
  
  
  Infrastructure: Disambiguation
&lt;/h2&gt;

&lt;p&gt;Before we dive into how to structure your GitOps configuration, it might make sense to draw a line as to where infrastructure starts and where it ends. We consider infrastructure everything that is part of the platform provided to an application team. Hence this line might vary depending on the maturity of the platform you provide your teams. If we consider a simple Kubernetes platform with little additional abstraction for its users, the infrastructure would contain the Kubernetes platform itself as well as all system components that are shared between the teams, such as a central monitoring stack, a central credential management solution, centralized policy enforcement of specific Kubernetes resources, and the like.&lt;/p&gt;

&lt;p&gt;The lower end of the spectrum will likely not be managed by GitOps. That is simply because the GitOps tooling itself typically needs to run somewhere, and also needs to be bootstrapped somehow. Some tools such as FluxCD allow the GitOps controller to manage itself, but even in these cases the runtime for the controller needs to exist when the controller is initially installed, and is thus typically not part of the GitOps configuration.&lt;/p&gt;

&lt;p&gt;Now that this is cleared up, let us consider how the configuration should be managed.&lt;/p&gt;
&lt;h2&gt;
  
  
  App-of-Apps
&lt;/h2&gt;

&lt;p&gt;A very popular pattern for managing configuration via GitOps is the "app-of-apps" pattern. This was popularized by ArgoCD, but is also applicable to other tooling. We will use ArgoCD in the example below, but the same can be implemented using FluxCD Kustomizations.&lt;/p&gt;

&lt;p&gt;Let us consider a component from our infrastructure that we want to manage via GitOps. Typically, we would need to tell the GitOps controller how to manage this component. For instance, let us assume the component is installed via a Helm chart. Then we would tell the GitOps controller which repository provides this chart and in which namespace to install it. Depending on the controller you are using, you might also configure additional parameters such as how often it should be reconciled, whether it depends on other components, and so on. In ArgoCD jargon this is an "Application" (the origin of the "app-of-apps" name), and would look as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sealed-secrets&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sealed-secrets&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://bitnami-labs.github.io/sealed-secrets&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.16.1&lt;/span&gt;
    &lt;span class="na"&gt;helm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;releaseName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sealed-secrets&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://kubernetes.default.svc"&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubeseal&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You would then apply this &lt;code&gt;Application&lt;/code&gt; resource to Kubernetes. From then on, your component is managed by GitOps, as any changes you push to the source repository are reflected on the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Then a second infrastructure component needs to be installed, and you repeat the process. The result is a second &lt;code&gt;Application&lt;/code&gt; which installs and manages that component. You will typically also pin your deployment to a version (such as version &lt;code&gt;1.16.1&lt;/code&gt; of the Helm chart above). This implies that lifecycle operations such as upgrades require a change to the &lt;code&gt;Application&lt;/code&gt; manifest, and thus a call against the Kubernetes API to edit it.&lt;/p&gt;

&lt;p&gt;The end result is a set of &lt;code&gt;Application&lt;/code&gt; resources, some of which you periodically modify when lifecycling a component. Now imagine you need to deploy your infrastructure elsewhere (for instance to a second Kubernetes cluster in our example), or maybe even a couple dozen times. Then you need to manage this entire set of &lt;code&gt;Application&lt;/code&gt; resources on every platform. A better approach is to add an abstraction layer, which itself deploys the &lt;code&gt;Application&lt;/code&gt; resources via GitOps. You put all your &lt;code&gt;Application&lt;/code&gt; resources into a repository, and define another, "higher level" &lt;code&gt;Application&lt;/code&gt; which deploys this repository. When deploying to new platforms, you then only need to deploy that one "higher level" &lt;code&gt;Application&lt;/code&gt;, and any changes to the component &lt;code&gt;Application&lt;/code&gt; resources can be made via Git, conforming to our GitOps approach. This "higher level" &lt;code&gt;Application&lt;/code&gt; exists only to deploy the component &lt;code&gt;Application&lt;/code&gt;s, hence the name "app-of-apps". Visually, you end up with the following structure:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9403giq52kn3ptnxfmlo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9403giq52kn3ptnxfmlo.png" alt="Visual representation of app-of-apps pattern"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It should be noted that this also massively helps when customizing platforms. Typically, components cannot be deployed truly one-to-one in several places, but require slight configuration differences. Consider for instance hostnames for UIs of your components. Two of these components deployed in different locations cannot share the same hostname and routing. Using an "app-of-apps" approach allows you to define variables on the top level application, and inject these into the downstream applications such that they can slightly adapt the way they are installed. We will not dive deeper into how this is done as it is highly dependent on the tooling you use (ArgoCD uses &lt;code&gt;ApplicationSet&lt;/code&gt;, FluxCD uses variable substitution), but know this is enabled by such an approach.&lt;/p&gt;
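&lt;p&gt;To give a rough idea of what this can look like with ArgoCD, the following &lt;code&gt;ApplicationSet&lt;/code&gt; sketch uses the cluster generator to stamp out one &lt;code&gt;Application&lt;/code&gt; per registered cluster; the repository URL and the &lt;code&gt;hostname-suffix&lt;/code&gt; annotation are hypothetical:&lt;/p&gt;

```yaml
# Hypothetical ApplicationSet: one Application per cluster matching the
# selector, with a per-cluster hostname injected as a Helm parameter.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: monitoring
  namespace: argocd
spec:
  generators:
    - clusters:
        selector:
          matchLabels:
            env: production
  template:
    metadata:
      name: "monitoring-{{name}}"
    spec:
      project: default
      source:
        repoURL: https://git.example.com/platform/monitoring.git
        targetRevision: main
        path: chart
        helm:
          parameters:
            # assumes a hostname-suffix annotation was set when the
            # cluster was registered with ArgoCD
            - name: hostname
              value: "grafana.{{metadata.annotations.hostname-suffix}}"
      destination:
        server: "{{server}}"
        namespace: monitoring
```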
&lt;h2&gt;
  
  
  Consolidating your Configuration
&lt;/h2&gt;

&lt;p&gt;In the organisation where I first used GitOps at scale, we deployed all our components as Helm charts to a Kubernetes cluster. Each component essentially lived in two different repositories in our version control system:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;the source code repository which typically built a Docker image as an artefact&lt;/li&gt;
&lt;li&gt;the Helm chart definition which referenced the Docker image from above&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When we then introduced GitOps, we decided to add a third repository containing the exact deployment definition (in our case the &lt;code&gt;Application&lt;/code&gt; declarations) for the component. Using the app-of-apps pattern from above, we could then reference each of these "GitOps repositories" and deploy specific overlays (customizations) of the &lt;code&gt;Application&lt;/code&gt; to specific platforms. This worked well for quite some time. However, over time the number of components we managed increased, and so did the number of target platforms to which these components needed to be deployed. This led to quite a few issues.&lt;/p&gt;

&lt;p&gt;When a new target platform was introduced, all such "GitOps repositories" needed to be updated to contain a new overlay customizing the &lt;code&gt;Application&lt;/code&gt; to the specific platform. This is very tedious when you have several dozen such repositories.&lt;/p&gt;

&lt;p&gt;Moreover, components had dependencies on other components. This meant that we were referencing components within a repository that were defined in another repository. While not problematic in itself, this can become very tricky when one component depends on a configuration value of another component. The configuration value is then duplicated in both repositories and becomes difficult to maintain. While this sounds like we did not properly separate the components, such cases are very common in infrastructure configurations. Consider for instance a deployment of an ingress controller which defines a hostname suffix for its routes. All components deployed on the same Kubernetes platform that deploy a route/ingress will need to use exactly that hostname suffix in order to have valid routing.&lt;/p&gt;

&lt;p&gt;The above issue also results in tricky situations when configurations need to be changed for components that are dependent on one another. If the deployment configuration is separated into different repositories, PRs to these repositories need to be synchronized to ensure the deployment occurs at the same time.&lt;/p&gt;

&lt;p&gt;Finally, distributing the deployment configuration over so many repositories meant that it became increasingly difficult to get an overview of what is deployed on a target platform. One would need to navigate through dozens of repositories to verify that everything is configured correctly.&lt;/p&gt;

&lt;p&gt;After identifying these issues we decided to move all our configuration into a single repository. This repository contains a templated definition of the entire set of components to be deployed. A set of platform definitions within the same repository then feeds values to the templates to ensure consistent configuration. This massively helped us address the issues mentioned above. On top of that, it allows versioning the "template" and thus enables rollouts of a versioned infrastructure layer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwt8td49qwxva9p5fnj5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwt8td49qwxva9p5fnj5w.png" alt="Grouping Application declarations into a single repository"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can find an example repository of such a structure&lt;br&gt;
designed with FluxCD here:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/f4z3r" rel="noopener noreferrer"&gt;
        f4z3r
      &lt;/a&gt; / &lt;a href="https://github.com/f4z3r/flux-demo" rel="noopener noreferrer"&gt;
        flux-demo
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      This repository shows an example how one can use a single mono-repository to manage multiple clusters' infrastructure in a controlled fashion.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div&gt;
&lt;a rel="noopener noreferrer" href="https://github.com/f4z3r/flux-demo/./assets/logo.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Ff4z3r%2Fflux-demo%2F.%2Fassets%2Flogo.png" alt="FluxCD" width="25%"&gt;&lt;/a&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Flux Demo&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/d49b85fee643a77a1b3e35707a223e5be358cd0ece3c999f218d7e15485d3531/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f6c6173742d636f6d6d69742f66347a33722f666c75782d64656d6f"&gt;&lt;img src="https://camo.githubusercontent.com/d49b85fee643a77a1b3e35707a223e5be358cd0ece3c999f218d7e15485d3531/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f6c6173742d636f6d6d69742f66347a33722f666c75782d64656d6f" alt="GitHub last commit"&gt;&lt;/a&gt;
&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/b726528050260dd1b632df604eb56c379d272e973d0007e2595dc75dd83d002d/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f6c6963656e73652f66347a33722f666c75782d64656d6f"&gt;&lt;img src="https://camo.githubusercontent.com/b726528050260dd1b632df604eb56c379d272e973d0007e2595dc75dd83d002d/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f6c6963656e73652f66347a33722f666c75782d64656d6f" alt="GitHub License"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;An example how one can use a mono-repo to manage large infrastructure in a controlled fashion using FluxCD.&lt;/h3&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="https://github.com/f4z3r/flux-demo#setup" rel="noopener noreferrer"&gt;Setup&lt;/a&gt; |
&lt;a href="https://github.com/f4z3r/flux-demo#structure-of-the-repo" rel="noopener noreferrer"&gt;Structure of the Repo&lt;/a&gt; |
&lt;a href="https://github.com/f4z3r/flux-demo#application-vs-infrastructure" rel="noopener noreferrer"&gt;Application vs Infrastructure&lt;/a&gt; |
&lt;a href="https://github.com/f4z3r/flux-demo#workflow" rel="noopener noreferrer"&gt;Workflow&lt;/a&gt;&lt;/p&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Setup&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Generate a GitHub PAT&lt;/h3&gt;

&lt;/div&gt;
&lt;p&gt;See the documentation: &lt;a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens" rel="noopener noreferrer"&gt;https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;For fine-grained control, grant the token Admin and content read/write permissions on the
repository.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Setup a Cluster with Flux&lt;/h3&gt;

&lt;/div&gt;
&lt;p&gt;Setup the required tooling with &lt;code&gt;devbox shell&lt;/code&gt;, then&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; install a cluster&lt;/span&gt;
kind create cluster -n demo
&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; set the token&lt;/span&gt;
&lt;span class="pl-k"&gt;export&lt;/span&gt; GITHUB_TOKEN=&lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;&amp;lt;redacted&amp;gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;
&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; onboard flux&lt;/span&gt;
flux bootstrap github \
  --token-auth \
  --owner=f4z3r \
  --repository=flux-demo \
  --branch=main \
  --path=clusters/demo \
  --personal&lt;/pre&gt;

&lt;/div&gt;

Sample output from Flux installation
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;pre class="notranslate"&gt;&lt;code&gt;► connecting to github.com
► cloning branch "main" from Git repository "https://github.com/f4z3r/flux-demo.git"
✔ cloned repository
► generating component manifests
✔ generated component manifests
✔ committed component manifests to "main" ("158753158f3c760f741f22ed7f68bdee1b66e475")
► pushing component manifests to "https://github.com/f4z3r/flux-demo.git"
► installing components in&lt;/code&gt;&lt;/pre&gt;…&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/f4z3r/flux-demo" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  GitOps Bridge
&lt;/h2&gt;

&lt;p&gt;The last challenge we want to address in this blog post is a concept called a "GitOps bridge". In public cloud environments, there is typically a relatively strong cut between infrastructure deployed via Terraform (or any similar tool) and infrastructure deployed via GitOps. For instance, one might deploy an Azure Kubernetes Service and some surrounding services (such as the required network, a container registry, etc.) via Terraform, and then deploy components and applications within the AKS cluster using GitOps. The issue we face here is that the GitOps configuration very often depends on the Terraform configuration. Consider for instance the container registry. Its address is set up by Terraform, but is used in every image declaration in the GitOps configuration. One option is to duplicate such values in the respective configurations; another is to use a GitOps bridge.&lt;/p&gt;

&lt;p&gt;The GitOps bridge is an abstract concept for passing configuration values from tooling such as Terraform as inputs to the GitOps configuration. How this is done in practice very much depends on which tools you use. With Terraform and FluxCD, for instance, a common way to achieve this is to have Terraform write a ConfigMap, containing all variables (and their values) required by the GitOps configuration, onto the cluster where the FluxCD controller will run. The FluxCD controller then supports injecting variables from a ConfigMap via &lt;a href="https://fluxcd.io/flux/components/kustomize/kustomizations/#post-build-variable-substitution" rel="noopener noreferrer"&gt;variable substitution&lt;/a&gt;.&lt;/p&gt;
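&lt;p&gt;As a sketch (all names below are placeholders): Terraform creates a ConfigMap like the first document on the cluster, and a FluxCD &lt;code&gt;Kustomization&lt;/code&gt; references it via &lt;code&gt;postBuild.substituteFrom&lt;/code&gt;, so that occurrences of &lt;code&gt;${registry_url}&lt;/code&gt; in the manifests are replaced at apply time:&lt;/p&gt;

```yaml
# Hypothetical "bridge" ConfigMap as Terraform would write it to the
# cluster (e.g. via a kubernetes_config_map resource).
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitops-bridge
  namespace: flux-system
data:
  registry_url: myregistry.azurecr.io
  cluster_name: aks-prod-westeurope
---
# Flux Kustomization consuming those values via post-build variable
# substitution; ${registry_url} etc. in the rendered manifests get replaced.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure
  prune: true
  postBuild:
    substituteFrom:
      - kind: ConfigMap
        name: gitops-bridge
```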

&lt;p&gt;Using a GitOps bridge has the advantage that changes in the Terraform configurations are much less likely to break the GitOps configuration that builds on top of it. Moreover, it allows Terraform to directly bootstrap the entire GitOps setup when creating new platforms without the need to manually redefine the required variables in the GitOps repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;So, to recap, we have looked at what GitOps really is (and isn't). Understanding these basics is critical to correctly implement GitOps in your projects. On top of that, we looked at three best practices:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use an app-of-apps pattern to make it easy to recreate or replicate platforms.&lt;/li&gt;
&lt;li&gt;Consider using a mono-repository for all your GitOps configuration as your setup grows.&lt;/li&gt;
&lt;li&gt;Have a look at GitOps bridges to improve automation when setting up platforms and to keep your Terraform and GitOps configurations consistent.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I hope this has helped you understand a bit better how to use GitOps at scale. If you have any questions or comments, feel free to let me know below.&lt;/p&gt;

</description>
      <category>sre</category>
      <category>platformengineering</category>
      <category>gitops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How to Join Lists in Kafka Streams Applications</title>
      <dc:creator>Jan Kleine</dc:creator>
      <pubDate>Mon, 17 Mar 2025 06:00:00 +0000</pubDate>
      <link>https://forem.com/ipt/how-to-join-lists-in-kafka-streams-applications-1h28</link>
      <guid>https://forem.com/ipt/how-to-join-lists-in-kafka-streams-applications-1h28</guid>
      <description>&lt;p&gt;I recently joined a project where we do data processing with Kafka Streams applications and came across an interesting problem: "list joins". There was already a working solution to the problem, but I was not quite satisfied, so I dug a little deeper. Since I didn't find much on the topic online, I wanted to share my findings here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt; The final topology is discussed here and you can find the code here:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/iptch" rel="noopener noreferrer"&gt;
        iptch
      &lt;/a&gt; / &lt;a href="https://github.com/iptch/kafka-list-join-demo" rel="noopener noreferrer"&gt;
        kafka-list-join-demo
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A demo showing how to do a list join in a kafka streams application.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Kafka List Join Demo&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;This project shows two ways to perform a list join in a Kafka streams application. A list join refers to joining a
record that contains a list with a KTable, such that each element in the list gets joined with the corresponding element
in the KTable.&lt;/p&gt;
&lt;p&gt;This image shows the high level idea, joining a person's address list with the corresponding addresses:&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/iptch/kafka-list-join-demo/ListJoin.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fiptch%2Fkafka-list-join-demo%2FListJoin.png" alt="List join overview"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Check out &lt;a href="https://dev.to/ipt/how-to-join-lists-in-kafka-streams-applications-1h28" rel="nofollow"&gt;this blog post&lt;/a&gt; for a discussion of
the approaches.&lt;/p&gt;
&lt;p&gt;The tests should cover all relevant cases of message ordering and updates.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Building&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;To build and test the project, run&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;./gradlew clean build&lt;/pre&gt;

&lt;/div&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/iptch/kafka-list-join-demo" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;p&gt;I assume you have some understanding of Kafka Streams, but I'll try to link to relevant documentation and resources where possible. It's worth checking out the &lt;a href="https://kafka.apache.org/documentation/streams/developer-guide/dsl-api.html" rel="noopener noreferrer"&gt;Kafka Streams DSL Developer Guide&lt;/a&gt; if you are unfamiliar with Kafka Streams.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;We have a topic with records (of type &lt;code&gt;Outer&lt;/code&gt;, e.g., a person) that contain (among other things) a list. The elements in the list reference records on another topic (of type &lt;code&gt;Inner&lt;/code&gt;, e.g., addresses). For every element in the outer topic we want to look up the corresponding record in our inner topic, and merge them to enhance our list.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zbrya8uumwcgsy375da.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zbrya8uumwcgsy375da.png" alt="A topic of persons with multiple address IDs, an address topic. After the list join the person is augmented with full addresses." width="781" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Additional Considerations
&lt;/h3&gt;

&lt;p&gt;There are a few additional constraints we need to keep in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Eventual Consistency:&lt;/strong&gt; The corresponding records on the inner topic may not be available right away, that is, records may come in after our outer record.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Updates:&lt;/strong&gt; Our outer record may be updated over time. This may include updates (additions/removals) of inner list elements, but also updates to fields other than the list.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duplicates:&lt;/strong&gt; In our specific use case we don't need duplicate list entries. However, all approaches discussed here can be modified to allow duplicates without too much effort.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Existing Solution
&lt;/h2&gt;

&lt;p&gt;The existing solution to the problem looks roughly like follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We set a timestamp on every record on our outer stream (needed later).&lt;/li&gt;
&lt;li&gt;We flat map our outer records so that each flat mapped record contains exactly one inner list element; to differentiate the flat records we change the keys to composite keys (of the form &lt;code&gt;&amp;lt;outer-key&amp;gt;$$&amp;lt;inner-key&amp;gt;&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;We interpret the resulting stream as a KTable and perform a foreign key left join with the other KTable. The &lt;a href="https://kafka.apache.org/38/documentation/streams/developer-guide/dsl-api.html#ktable-ktable-fk-join" rel="noopener noreferrer"&gt;KTable-KTable left join&lt;/a&gt; (as opposed to, e.g., a &lt;a href="https://kafka.apache.org/documentation/streams/developer-guide/dsl-api.html#kstream-ktable-join" rel="noopener noreferrer"&gt;KStream-KTable join&lt;/a&gt;) is needed to fulfill our eventual consistency constraint.&lt;/li&gt;
&lt;li&gt;We group the resulting records by the first part of the composite key.&lt;/li&gt;
&lt;li&gt;Finally, we reduce the records in each group by appending their lists, but we only consider records carrying the newest timestamp seen so far: whenever a newer timestamp arrives, we discard all older records. This ensures that we do not re-add stale elements that have been removed from the current list.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The code looks similar to the below, though I left some smaller details out, such as how we forward tombstones. The full code can be found &lt;a href="https://github.com/iptch/kafka-list-join-demo/blob/main/src/main/java/ch/ipt/jkl/listjoindemo/timestamp/TimestampListJoinTopology.java" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;outerKStream&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;innerKTable&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;outerKStream&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;mapValues&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;outerMapper&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;flatMap&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;outerFlatMapper&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toTable&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buildStore&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;outerSerde&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"listJoinFlatStore"&lt;/span&gt;&lt;span class="o"&gt;))&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;leftJoin&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;innerKTable&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;outer&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;outer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getInnerCount&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;outer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getInner&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;getId&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
            &lt;span class="n"&gt;outerInnerJoiner&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;buildStore&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;outerSerde&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"listJoinJoinerStore"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toStream&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;groupBy&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
            &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;split&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"\\$\\$"&lt;/span&gt;&lt;span class="o"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;],&lt;/span&gt;
            &lt;span class="nc"&gt;Grouped&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Serdes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;String&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;outerSerde&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;reduce&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;outerReducer&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;buildStore&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;outerSerde&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"listJoinReducerStore"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toStream&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Issues
&lt;/h3&gt;

&lt;p&gt;While this approach works for our use case, I don't like it for two reasons:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It feels hacky:&lt;/strong&gt; The timestamps force us to change our data model just for this one operation. Although we can (and in production do) clear the timestamps once the list join is complete, we use protobufs for our records, so the timestamp remains part of the protobuf definition even though downstream services don't need it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It is inefficient:&lt;/strong&gt; Whenever the original protobuf changes (be it a change to the list or to any other field), &lt;em&gt;all&lt;/em&gt; list elements go through the join again and are then all reduced again (including stale old records). That is simply wasteful.&lt;/p&gt;

&lt;p&gt;These issues prompted me to spend some time on the problem and see whether I could come up with something better.&lt;/p&gt;
&lt;h2&gt;
  
  
  Improvements
&lt;/h2&gt;

&lt;p&gt;We want to reduce the number of join and reduce operations and remove the need for a dedicated timestamp field.&lt;/p&gt;

&lt;p&gt;For this, we need a component that keeps track of the changes to a record's list, so that we only process relevant changes.&lt;/p&gt;
&lt;h3&gt;
  
  
  The List Join Pre-Processor
&lt;/h3&gt;

&lt;p&gt;The only way I found to accomplish this is to write a custom Processor using the &lt;a href="https://kafka.apache.org/documentation/streams/developer-guide/processor-api.html" rel="noopener noreferrer"&gt;Processor API&lt;/a&gt; provided by Kafka Streams.&lt;/p&gt;

&lt;p&gt;The list join pre-processor performs the task of the flat-map operation from before, but maintains internal state&lt;sup id="fnref1"&gt;1&lt;/sup&gt; to remember which list elements are currently in each record. When a record is updated, it compares the previous and new lists, and (1) only issues flat-mapped records for new list elements and (2) issues tombstones for elements removed from the list.&lt;/p&gt;

&lt;p&gt;(1) ensures that we only have to join new list elements, reducing duplicate work. (2) allows us to remove old list elements more efficiently, so we no longer need to rely on a hacky timestamp.&lt;/p&gt;

&lt;p&gt;The pre-processor also always forwards a copy of the current record with an empty list. We need this later to correctly propagate changes to fields other than the list.&lt;/p&gt;

&lt;p&gt;The logic of the pre-processor, which can also be found &lt;a href="https://github.com/iptch/kafka-list-join-demo/blob/7edae940d4a51003d74082d29e171798f1394a3a/src/main/java/ch/ipt/jkl/listjoindemo/current/operator/PreProcessorSupplier.java#L100-L140" rel="noopener noreferrer"&gt;here&lt;/a&gt;, looks as follows:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;process&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Record&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;TOuter&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="c1"&gt;// in case of tombstone or empty list, this map is empty&lt;/span&gt;
    &lt;span class="nc"&gt;Map&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;TOuter&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;flatValuesMap&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;flatMapper&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;apply&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="o"&gt;()).&lt;/span&gt;&lt;span class="na"&gt;stream&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;collect&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Collectors&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toMap&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
                    &lt;span class="n"&gt;innerIdStringExtractor&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                    &lt;span class="nc"&gt;Function&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;identity&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
                    &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;newValue&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;newValue&lt;/span&gt;
            &lt;span class="o"&gt;));&lt;/span&gt;

    &lt;span class="nc"&gt;Set&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;newIds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;flatValuesMap&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;keySet&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

    &lt;span class="nc"&gt;Set&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;oldIds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Optional&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ofNullable&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;listStore&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="o"&gt;()))&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;map&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;Set:&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="n"&gt;copyOf&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;orElse&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Set&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;of&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

    &lt;span class="c1"&gt;// if both new and old ids are empty we don't need to do anything and can short circuit&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;newIds&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;isEmpty&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;oldIds&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;isEmpty&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// forward flat mapped records for new list elements&lt;/span&gt;
    &lt;span class="nc"&gt;Sets&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;SetView&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;addedIds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Sets&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;difference&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;newIds&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;oldIds&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;newId&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;addedIds&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;forward&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;newId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;flatValuesMap&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;newId&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// send tombstones for removed list elements&lt;/span&gt;
    &lt;span class="nc"&gt;Sets&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;SetView&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;removedIds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Sets&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;difference&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;oldIds&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;newIds&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;removedId&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;removedIds&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;forward&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;removedId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// if the current list is empty delete the list from the store and send a tombstone&lt;/span&gt;
    &lt;span class="c1"&gt;// for the empty list record, otherwise save the current list and send an empty list&lt;/span&gt;
    &lt;span class="c1"&gt;// record to propagate changes to other fields&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;newIds&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;isEmpty&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;forward&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="n"&gt;listStore&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;put&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;forward&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;listCleaner&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;apply&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="o"&gt;()));&lt;/span&gt;
        &lt;span class="n"&gt;listStore&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;put&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;copyOf&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;newIds&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
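&lt;p&gt;Stripped of the Kafka Streams machinery, the diff step at the heart of the pre-processor is just two set differences. The following sketch uses only plain &lt;code&gt;java.util&lt;/code&gt; collections; the class and method names are hypothetical and not part of the demo repository:&lt;/p&gt;

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical, Kafka-free illustration of the pre-processor's diff logic:
// compare the previously stored inner ids with the ids of the incoming record.
public class ListDiffSketch {

    // ids in newIds but not in oldIds: a flat-mapped record is forwarded for each
    public static Set addedIds(Set oldIds, Set newIds) {
        var result = new HashSet(newIds);
        result.removeAll(oldIds);
        return result;
    }

    // ids in oldIds but not in newIds: a tombstone is forwarded for each
    public static Set removedIds(Set oldIds, Set newIds) {
        var result = new HashSet(oldIds);
        result.removeAll(newIds);
        return result;
    }
}
```

&lt;p&gt;After forwarding, storing the new id set under the record's key keeps the next invocation's diff correct.&lt;/p&gt;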

&lt;h3&gt;
  
  
  Better Reduce Operation
&lt;/h3&gt;

&lt;p&gt;Now that the flat-map and the subsequent join are improved, we can turn our attention to the reduce operation.&lt;/p&gt;

&lt;p&gt;Previously, the reduce operation was responsible for filtering out outdated records (based on the timestamp) and aggregating the list of inner elements. This was done on all flat-mapped values (including outdated values) and needed to happen every time anything in the original outer record changed.&lt;/p&gt;

&lt;p&gt;Now that we can issue tombstones for removed elements, we can transition from a KStream &lt;a href="https://kafka.apache.org/38/javadoc/org/apache/kafka/streams/kstream/KStream.html#groupBy(org.apache.kafka.streams.kstream.KeyValueMapper,org.apache.kafka.streams.kstream.Grouped)" rel="noopener noreferrer"&gt;group-by&lt;/a&gt; and &lt;a href="https://kafka.apache.org/38/javadoc/org/apache/kafka/streams/kstream/KGroupedStream.html#reduce(org.apache.kafka.streams.kstream.Reducer,org.apache.kafka.streams.kstream.Materialized)" rel="noopener noreferrer"&gt;reduce&lt;/a&gt; to a KTable &lt;a href="https://kafka.apache.org/38/javadoc/org/apache/kafka/streams/kstream/KTable.html#groupBy(org.apache.kafka.streams.kstream.KeyValueMapper,org.apache.kafka.streams.kstream.Grouped)" rel="noopener noreferrer"&gt;group-by&lt;/a&gt; and &lt;a href="https://kafka.apache.org/38/javadoc/org/apache/kafka/streams/kstream/KGroupedTable.html#reduce(org.apache.kafka.streams.kstream.Reducer,org.apache.kafka.streams.kstream.Reducer,org.apache.kafka.streams.kstream.Materialized)" rel="noopener noreferrer"&gt;reduce&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The KTable reduce works a bit differently from the KStream reduce. Where the KStream reduce has a single reducer that receives the current aggregate and each new record (be it a tombstone or a normal record), the KTable reduce has an adder reducer and a remover reducer.&lt;/p&gt;

&lt;p&gt;The adder reducer is called whenever an element is added to the KTable and receives the current aggregate and the newly added record.&lt;/p&gt;

&lt;p&gt;The remover reducer is called whenever an element is removed from the KTable (via a tombstone) and is provided with the current aggregate and the removed record. This allows us to remove the inner element from the current aggregate, as we know exactly which one was removed.&lt;/p&gt;

&lt;p&gt;Our adder looks as follows. Remember that our pre-processor also sends a copy of the original record with an empty list. This way we can update other fields as well.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;Outer&lt;/span&gt; &lt;span class="nf"&gt;apply&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Outer&lt;/span&gt; &lt;span class="n"&gt;currentValue&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Outer&lt;/span&gt; &lt;span class="n"&gt;newValue&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;newValue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getInnerList&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;isEmpty&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// if list is empty update all fields other than the list&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;newValue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toBuilder&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addAllInner&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;currentValue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getInnerList&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// otherwise add inner item to current list&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;currentValue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toBuilder&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addAllInner&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;newValue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getInnerList&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The full code can also be found &lt;a href="https://github.com/iptch/kafka-list-join-demo/blob/main/src/main/java/ch/ipt/jkl/listjoindemo/preprocessor/operator/OuterReducerAdder.java" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Our remover only has to worry about removing the list elements. In our case we do it based on the inner ID.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;Outer&lt;/span&gt; &lt;span class="nf"&gt;apply&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Outer&lt;/span&gt; &lt;span class="n"&gt;currentValue&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Outer&lt;/span&gt; &lt;span class="n"&gt;oldValue&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// if oldValue inner list is empty, there is nothing to do&lt;/span&gt;
    &lt;span class="c1"&gt;// the adder handles empty list updates&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;oldValue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getInnerList&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;isEmpty&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;currentValue&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;// remove the single inner value from the current list,&lt;/span&gt;
    &lt;span class="c1"&gt;// in this case we do it by id&lt;/span&gt;
    &lt;span class="nc"&gt;Inner&lt;/span&gt; &lt;span class="n"&gt;innerToRemove&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;oldValue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getInnerList&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;getFirst&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

    &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Inner&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;innerList&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;currentValue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getInnerList&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;stream&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inner&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;inner&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getId&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;equals&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;innerToRemove&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getId&lt;/span&gt;&lt;span class="o"&gt;()))&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toList&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;currentValue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toBuilder&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;clearInner&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addAllInner&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;innerList&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The full code can also be found &lt;a href="https://github.com/iptch/kafka-list-join-demo/blob/main/src/main/java/ch/ipt/jkl/listjoindemo/preprocessor/operator/OuterReducerRemover.java" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  New Topology
&lt;/h2&gt;

&lt;p&gt;Incorporating the two improvements from above, we end up with a topology that looks surprisingly similar to our initial topology.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;outerKStream&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;innerKtable&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;outerKStream&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;process&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;preProcessorSupplier&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toTable&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buildStore&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;outerSerde&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"listJoinFlatStore"&lt;/span&gt;&lt;span class="o"&gt;))&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;leftJoin&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;innerKTable&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="c1"&gt;// if the inner list is empty the foreign key extractor should&lt;/span&gt;
            &lt;span class="c1"&gt;// return null so the outer is joined with null&lt;/span&gt;
            &lt;span class="n"&gt;outer&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;outer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getInnerCount&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;outer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getInner&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;getId&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
            &lt;span class="n"&gt;outerInnerJoiner&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;buildStore&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;outerSerde&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"listJoinJoinerStore"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;groupBy&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
            &lt;span class="c1"&gt;// group by first part of composite key&lt;/span&gt;
            &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nc"&gt;KeyValue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;pair&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;split&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"\\$\\$"&lt;/span&gt;&lt;span class="o"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;],&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
            &lt;span class="nc"&gt;Grouped&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Serdes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;String&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;outerSerde&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;reduce&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;listAdder&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;listRemover&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;buildStore&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;outerSerde&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"listJoinReducerStore"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toStream&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
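&lt;p&gt;The &lt;code&gt;groupBy&lt;/code&gt; above assumes the pre-processor emits composite keys of the form &lt;code&gt;outerKey$$innerId&lt;/code&gt; and recovers the outer key from the first segment. A minimal sketch of that scheme (the helper names are hypothetical, not taken from the demo repository):&lt;/p&gt;

```java
// Hypothetical helpers for the composite-key scheme assumed by the group-by:
// the pre-processor concatenates outer key and inner id with a "$$" separator,
// and the group-by recovers the outer key from the first segment.
public class CompositeKeySketch {

    public static String compose(String outerKey, String innerId) {
        return outerKey + "$$" + innerId;
    }

    public static String outerPart(String compositeKey) {
        // the same split the topology uses; "$" is escaped for the regex engine
        return compositeKey.split("\\$\\$")[0];
    }
}
```

&lt;p&gt;Any separator works as long as it cannot appear inside the outer keys themselves.&lt;/p&gt;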


&lt;p&gt;Again, I skipped over some smaller details, like how we forward tombstones. The full topology can be found &lt;a href="https://github.com/iptch/kafka-list-join-demo/blob/main/src/main/java/ch/ipt/jkl/listjoindemo/preprocessor/operator/OuterReducerRemover.java" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Since copying this code to different topologies is a bit cumbersome, we decided to wrap this entire sub-topology in a &lt;code&gt;ListJoin&lt;/code&gt; utility. This makes the code easier to reuse, as the developer only has to provide a couple of key components, and it makes the resulting topologies easier to understand.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="c1"&gt;// setup&lt;/span&gt;
&lt;span class="nc"&gt;ListJoin&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;...&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;listJoin&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ListJoin&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;builder&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
        &lt;span class="c1"&gt;// specify joiner, reducers, etc.&lt;/span&gt;
        &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;// later in the topology&lt;/span&gt;
&lt;span class="nc"&gt;KStream&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;...&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;joinedKStream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;listJoin&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;apply&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;myLeftKStream&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;myRightKTable&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This topology is probably still not optimal. For one, it may make sense to deduplicate the empty list records to further reduce downstream work. However, in our case this is not necessary.&lt;/p&gt;

&lt;p&gt;In any case, this new approach is a clear improvement over the previous timestamp-based approach.&lt;/p&gt;

&lt;p&gt;Have you solved a similar, or even the same, problem before? If so, please consider leaving a comment as I'd very much like to hear what you did. Likewise, if you have found issues with the above code or have ideas for improvements, I'm keen to hear from you!&lt;/p&gt;
&lt;h2&gt;
  
  
  Acknowledgments
&lt;/h2&gt;

&lt;p&gt;Cover image: &lt;a href="https://www.pexels.com/photo/close-up-photo-of-blue-background-2441454/" rel="noopener noreferrer"&gt;"Close Up Photo of Blue Background" by Harrison Candlin&lt;/a&gt;&lt;/p&gt;


&lt;div class="ltag__user ltag__user__id__2732274"&gt;
    &lt;a href="/jankleine" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2732274%2Ff6769039-9e89-4990-b520-8f1f95714d22.jpg" alt="jankleine image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/jankleine"&gt;Jan Kleine&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/jankleine"&gt;&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;






&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;Maintaining state can be done via a &lt;a href="https://kafka.apache.org/documentation/streams/developer-guide/processor-api.html#state-stores" rel="noopener noreferrer"&gt;state store&lt;/a&gt;, which is automatically backed up to a changelog topic in Kafka, ensuring the processor can tolerate application scaling and recover in case of application failures. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>eventdriven</category>
      <category>kafka</category>
      <category>java</category>
    </item>
    <item>
      <title>Data Governance with dbt, Terraform, and Dataplex: A Practical Guide to BigQuery Policy Tags</title>
      <dc:creator>Diana Tahchieva</dc:creator>
      <pubDate>Mon, 24 Feb 2025 07:39:48 +0000</pubDate>
      <link>https://forem.com/ipt/data-governance-with-dbt-terraform-and-dataplex-a-practical-guide-to-bigquery-policy-tags-5f7d</link>
      <guid>https://forem.com/ipt/data-governance-with-dbt-terraform-and-dataplex-a-practical-guide-to-bigquery-policy-tags-5f7d</guid>
      <description>&lt;p&gt;Welcome to a hands-on guide for implementing BigQuery Policy Tags, an important feature for data governance. If you're new to Google Cloud Platform (GCP) and have heard of dbt, Terraform, and Data Catalog but aren't sure how they work together, this tutorial provides a simple, practical example. We'll apply policy tags to a sample clients table in BigQuery to enforce data governance. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Are Policy Tags?&lt;/strong&gt;&lt;br&gt;
Policy Tags are classification labels for data in &lt;a href="https://cloud.google.com/bigquery?hl=en" rel="noopener noreferrer"&gt;BigQuery&lt;/a&gt;, helping manage privacy, compliance, and access control. These tags are particularly important in industries like healthcare and finance, where data sensitivity is a key concern.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Use dbt, Terraform, and Dataplex for Policy Tag Management?&lt;/strong&gt;&lt;br&gt;
By defining Policy Tags as code with dbt and Terraform, and using Dataplex for governance, you can track changes, collaborate as a team, audit activity, and easily roll back to previous configurations.&lt;br&gt;
You can also manage Policy Tags consistently across multiple datasets and projects; automating them through these tools reduces manual labor and minimizes human error.&lt;br&gt;
While dbt integrates seamlessly into existing data pipelines, applying Policy Tags during data transformation and modeling, Dataplex unifies governance across various data stores.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding the Tools&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let us take a closer look at what dbt, Terraform, and Dataplex are.&lt;br&gt;
• &lt;a href="https://docs.getdbt.com/" rel="noopener noreferrer"&gt;dbt&lt;/a&gt; (Data Build Tool) is an open-source tool for transforming and modeling data within your data warehouse (like BigQuery). dbt enables you to write more maintainable SQL code and allows you to attach metadata, such as Policy Tags, to your data transformations. Additionally, because the objects in BigQuery can be referenced, dbt can build a directed acyclic graph (DAG) of the entire data platform, from which all dependencies among the data can be observed. &lt;br&gt;
• &lt;a href="https://developer.hashicorp.com/terraform" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; is an Infrastructure as Code (IaC) tool that lets you define and manage your cloud infrastructure using configuration files. Terraform automates the provisioning and management of resources including enabling APIs, managing permissions, and creating Policy Taxonomies and Tags.&lt;br&gt;
• &lt;a href="https://cloud.google.com/dataplex?hl=en" rel="noopener noreferrer"&gt;Dataplex&lt;/a&gt; is a Google Cloud service that provides unified data governance and management. It helps discover, organize, and manage data assets, ensuring consistent data handling and enforcement of Policy Tags.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How They Work Together&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;OK, now that we know what dbt, Terraform, and Dataplex are, let's explore how they work together.&lt;br&gt;
Terraform establishes the required infrastructure and permissions for managing Policy Tags by creating taxonomies and tags within Google Cloud Data Catalog. At the same time, dbt handles data transformation in BigQuery, applying Policy Tags to specific columns within your models. The meta section in the dbt model facilitates metadata association, ensuring proper organization and governance. Meanwhile, Dataplex functions as a centralized governance layer, maintaining consistency in the application and monitoring of Policy Tags across all data assets. Together, these tools create a seamless, scalable, and automated data governance system that enhances visibility, reduces manual effort, and minimizes the risk of human error.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjfjnb67mkktbqgsr8b7x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjfjnb67mkktbqgsr8b7x.png" alt="The figure illustrates how Terraform, Dataplex, BigQuery, and dbt work together for automated data governance. Terraform provisions infrastructure and Policy Tags, while dbt transforms data and applies tags in BigQuery. Dataplex enforces policies and monitors compliance, ensuring secure and consistent data management." width="376" height="618"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementing Policy Tags: Step-by-Step Guide&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, you need to create a GCP project and connect it to a billing account; otherwise, Terraform won’t be able to function. Don’t worry—this project is small and won’t incur any costs (my billing account is still at zero). Just remember to delete it after testing by running &lt;code&gt;terraform destroy&lt;/code&gt; in the command line—I’ll remind you at the end of the tutorial.&lt;br&gt;
For convenience, I’ve created a Git project that you can &lt;a href="https://github.com/dianaTahchieva/gcp-data-governance.git" rel="noopener noreferrer"&gt;clone&lt;/a&gt;. It contains two sub-projects: one for Terraform (gcp-data-catalog-terraform) and one for dbt (data_catalog_dbt_project). In a real-world scenario, these sub-projects would likely be managed as separate projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The terraform project has the following structure:&lt;/strong&gt;&lt;br&gt;
• &lt;strong&gt;variables.tf&lt;/strong&gt;: Defines the GCP project ID and region.&lt;br&gt;
• &lt;strong&gt;iam.tf&lt;/strong&gt;: Creates service accounts for Terraform and dbt, assigning necessary IAM roles.&lt;br&gt;
• &lt;strong&gt;datacatalog.tf&lt;/strong&gt;: Defines the taxonomy for organizing Policy Tags and creates tags for PII and non-PII data.&lt;br&gt;
• &lt;strong&gt;bigquery.tf&lt;/strong&gt;: Creates a BigQuery dataset and a table without predefined policy tags.&lt;br&gt;
• &lt;strong&gt;output.tf&lt;/strong&gt;: Outputs IDs for easy access to created resources.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;dbt project&lt;/strong&gt; in this example is a significantly simplified version of a standard dbt project and includes only the essential files necessary for our use case. Here’s a brief description of each file and its purpose:&lt;br&gt;
• &lt;strong&gt;dbt_project.yml&lt;/strong&gt;: This is the primary configuration file for your dbt project. It includes metadata about the project, such as the project name and version, paths to model files, and configurations for materializations and other project-wide settings.&lt;br&gt;
• &lt;strong&gt;profiles.yml&lt;/strong&gt;: This configuration file contains the connection details and credentials required for dbt to connect to your data warehouse (BigQuery in this case). It includes information such as the project ID, dataset, and authentication method (service account key file).&lt;br&gt;
• &lt;strong&gt;models/customers/&lt;/strong&gt;: This directory holds the models for the project. In dbt, a model is essentially a SQL file that transforms raw data into more refined tables. This directory contains our specific model for customers.&lt;br&gt;
o   &lt;strong&gt;customers.sql&lt;/strong&gt;: This SQL file represents the transformation logic for the customers’ data. It selects and processes the necessary columns from the raw data, applying the transformations required for our data analysis needs.&lt;br&gt;
o   &lt;strong&gt;customers.yml&lt;/strong&gt;: This YAML file provides additional metadata about the customers model. It includes descriptions of each column, tests to ensure data quality, and policy tags to enforce data governance rules.&lt;/p&gt;
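
&lt;p&gt;For orientation, the transformation in &lt;strong&gt;customers.sql&lt;/strong&gt; can be as simple as a select over the raw table provisioned by Terraform. A purely illustrative sketch (the source reference used here is hypothetical and depends on how the raw table is declared in the project):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- models/customers/customers.sql (illustrative sketch; the source name is hypothetical)
select
    customer_id,
    email,
    phone_number
from {{ source('raw', 'customers') }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;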

&lt;p&gt;&lt;strong&gt;Step 1: Enable APIs with Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, enable the necessary Google Cloud APIs for your project using Terraform. These are: &lt;br&gt;
• Identity and Access Management (IAM) API&lt;br&gt;
• BigQuery &lt;br&gt;
• Data Catalog API &lt;br&gt;
• Dataplex API&lt;/p&gt;
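
&lt;p&gt;In Terraform, each API can be enabled with a &lt;code&gt;google_project_service&lt;/code&gt; resource. A minimal sketch (the resource name here is illustrative; the cloned repository remains the authoritative configuration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;# Enable the APIs required by this tutorial
resource "google_project_service" "services" {
  for_each = toset([
    "iam.googleapis.com",
    "bigquery.googleapis.com",
    "datacatalog.googleapis.com",
    "dataplex.googleapis.com",
  ])
  project = var.project_id
  service = each.key
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;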

&lt;p&gt;&lt;strong&gt;Step 2: Grant Permissions with Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let us walk through how to create and configure a service account that Terraform and dbt can use to interact with Google Cloud resources.&lt;br&gt;
A service account in Google Cloud is like a robot user—it allows Terraform and dbt to authenticate and interact with GCP without needing a human user to log in.&lt;br&gt;
In our setup, Terraform will provision BigQuery datasets, tables, and policies, while dbt will query and transform the data. To ensure that the permissions for Terraform and dbt are properly separated, we will define two distinct service accounts.&lt;/p&gt;

&lt;p&gt;We can create the new service accounts (called &lt;strong&gt;terraform-sa&lt;/strong&gt; and &lt;strong&gt;dbt-sa&lt;/strong&gt;) by running:&lt;br&gt;
&lt;code&gt;gcloud iam service-accounts create terraform-sa --display-name "Terraform Service Account" --project gcp-data-governance&lt;/code&gt;&lt;br&gt;
&lt;code&gt;gcloud iam service-accounts create dbt-sa --display-name "dbt Service Account" --project gcp-data-governance&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, you can add the service accounts manually in the GCP console (IAM &amp;amp; Admin &amp;gt; Service Accounts). Then, in Terraform, you only manage the IAM roles in &lt;strong&gt;iam.tf&lt;/strong&gt;.&lt;/p&gt;
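
&lt;p&gt;If you would rather keep the service accounts in code as well, they can be declared in Terraform instead of through gcloud or the console. A minimal sketch using the account IDs from above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;# Declare the two service accounts in Terraform (alternative to gcloud/console)
resource "google_service_account" "terraform_sa" {
  account_id   = "terraform-sa"
  display_name = "Terraform Service Account"
  project      = var.project_id
}

resource "google_service_account" "dbt_sa" {
  account_id   = "dbt-sa"
  display_name = "dbt Service Account"
  project      = var.project_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;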

&lt;p&gt;The Terraform service account needs permissions to manage BigQuery, Data Catalog, and Dataplex. Add these IAM roles in your Terraform configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Give the Terraform user permissions to manage IAM, Dataplex, and BigQuery&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"google_project_iam_member"&lt;/span&gt; &lt;span class="s2"&gt;"terraform_bigquery_admin"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project_id&lt;/span&gt;
  &lt;span class="nx"&gt;role&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"roles/bigquery.admin"&lt;/span&gt;
  &lt;span class="nx"&gt;member&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt;  &lt;span class="s2"&gt;"serviceAccount:terraform-sa@gcp-data-governance.iam.gserviceaccount.com"&lt;/span&gt; 
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"google_project_iam_member"&lt;/span&gt; &lt;span class="s2"&gt;"terraform_datacatalog_admin"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project_id&lt;/span&gt;
  &lt;span class="nx"&gt;role&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"roles/datacatalog.admin"&lt;/span&gt;
  &lt;span class="nx"&gt;member&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"serviceAccount:terraform-sa@gcp-data-governance.iam.gserviceaccount.com"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"google_project_iam_member"&lt;/span&gt; &lt;span class="s2"&gt;"terraform_dataplex_admin"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project_id&lt;/span&gt;
  &lt;span class="nx"&gt;role&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"roles/dataplex.admin"&lt;/span&gt;
  &lt;span class="nx"&gt;member&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"serviceAccount:terraform-sa@gcp-data-governance.iam.gserviceaccount.com"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"google_project_iam_member"&lt;/span&gt; &lt;span class="s2"&gt;"terraform_sa_admin"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project_id&lt;/span&gt;
  &lt;span class="nx"&gt;role&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"roles/iam.serviceAccountAdmin"&lt;/span&gt;
  &lt;span class="nx"&gt;member&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"serviceAccount:terraform-sa@gcp-data-governance.iam.gserviceaccount.com"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: For more information on the available roles, refer to the &lt;a href="https://cloud.google.com/iam/docs/understanding-roles" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;. As a best practice, always assign the minimum permissions necessary. For instance, if a user only needs to view the data, provide read-only access.&lt;/p&gt;

&lt;p&gt;• To allow dbt to authenticate, we need to generate a JSON &lt;strong&gt;key file&lt;/strong&gt; for our service account, which we can do in two ways:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 1&lt;/strong&gt;: Using the Google Cloud Console&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Go to IAM &amp;amp; Admin → Service Accounts in the Google Cloud Console.&lt;/li&gt;
&lt;li&gt; Select the service account dbt-sa.&lt;/li&gt;
&lt;li&gt; Navigate to the Keys tab and click "Add Key".&lt;/li&gt;
&lt;li&gt; Choose "JSON", then click "Create".&lt;/li&gt;
&lt;li&gt; The key.json file will be downloaded to your computer.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Option 2&lt;/strong&gt;: Using the gcloud CLI&lt;/p&gt;

&lt;p&gt;If you prefer using the command line, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud iam service-accounts keys create dbt-sa-key.json  &lt;span class="nt"&gt;--iam-account&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dbt-sa@your-project-id.iam.gserviceaccount.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;• Now that we have our dbt-sa-key.json, we need to update dbt's configuration to use the service account. Open &lt;strong&gt;profiles.yml&lt;/strong&gt; and set the &lt;code&gt;keyfile&lt;/code&gt; value to the path of your key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;data_catalog_dbt&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;dev&lt;/span&gt;
  &lt;span class="nx"&gt;outputs&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;dev&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;bigquery&lt;/span&gt;
      &lt;span class="nx"&gt;method&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;service-account&lt;/span&gt;
      &lt;span class="nx"&gt;project&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"gcp-data-governance"&lt;/span&gt;  &lt;span class="c1"&gt;# project_id&lt;/span&gt;
      &lt;span class="nx"&gt;dataset&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;multiplexer_dataset&lt;/span&gt;
      &lt;span class="nx"&gt;threads&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;
      &lt;span class="nx"&gt;keyfile&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"/path/to/dbt-sa-key.json"&lt;/span&gt;  &lt;span class="c1"&gt;# Reference your environment variable&lt;/span&gt;
      &lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"europe-west6"&lt;/span&gt;  &lt;span class="c1"&gt;# Set this to your BigQuery region&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Create Data Policy Taxonomies and Tags with Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To allow Terraform to interact with Google Data Catalog, we need to specify providers. Providers allow us to manage resources on a specific cloud platform. We have defined them in &lt;strong&gt;datacatalog.tf&lt;/strong&gt; as they are relevant to the Data Catalog API; however, you can also create a separate providers.tf file and define them there. The google provider is the standard Terraform provider for managing GCP resources, while the google-beta provider gives access to Google Data Catalog, which is needed for policy tags.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Not Just Use google-beta?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Some resources (e.g., IAM, BigQuery datasets) don’t need google-beta, so we keep google for those. On the other hand, Data Catalog resources require google-beta, so we configure both providers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;google&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/google"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 4.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"google"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project_id&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;region&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"google-beta"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project_id&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;region&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A taxonomy is a container for data policy tags (a grouping mechanism). &lt;strong&gt;Fine-Grained Access Control&lt;/strong&gt;, activated on the taxonomy via its policy types, ensures that only authorized users can see or query certain columns.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Create a Data Catalog Taxonomy&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"google_data_catalog_taxonomy"&lt;/span&gt; &lt;span class="s2"&gt;"multiplexer_pii_taxonomy"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;google-beta&lt;/span&gt;
  &lt;span class="nx"&gt;display_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Multiplexer PII Taxonomy"&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Taxonomy for sensitive data classification"&lt;/span&gt;
  &lt;span class="nx"&gt;project&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project_id&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;region&lt;/span&gt;
  &lt;span class="nx"&gt;activated_policy_types&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"FINE_GRAINED_ACCESS_CONTROL"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In our test case, we define only two tags: sensitive (PII) and non-sensitive (Non-PII) data. The PII tag ensures proper access control. These tags will later be attached to BigQuery columns in dbt, and more tags can be added as needed for better governance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Create Policy Tags for PII and Non-PII&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"google_data_catalog_policy_tag"&lt;/span&gt; &lt;span class="s2"&gt;"pii_sensitive"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;google-beta&lt;/span&gt;
  &lt;span class="nx"&gt;taxonomy&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;google_data_catalog_taxonomy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;multiplexer_pii_taxonomy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;display_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"PII"&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Policy tag for Personally Identifiable Information"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"google_data_catalog_policy_tag"&lt;/span&gt; &lt;span class="s2"&gt;"non_pii_sensitive"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;google-beta&lt;/span&gt;
  &lt;span class="nx"&gt;taxonomy&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;google_data_catalog_taxonomy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;multiplexer_pii_taxonomy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;display_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Non-PII"&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Policy tag for non-sensitive data"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Apply Policy Tags with dbt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that the Policy Tags are defined, we will attach them to relevant columns in the dbt project:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Set up your dbt project in a directory parallel to your Terraform project.
Before running dbt, ensure that your service account key is correctly set (see the explanation in &lt;strong&gt;Step 2&lt;/strong&gt;) and that &lt;strong&gt;profiles.yml&lt;/strong&gt; is configured to use BigQuery and your service account key.
Then make sure all dependencies are installed:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dbt deps –upgrade
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Afterwards, check that dbt can connect to BigQuery by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dbt debug
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If successful, you’ll see the message: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;All checks passed!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To run all dbt models, use &lt;code&gt;dbt run&lt;/code&gt;; to run only specific models (e.g., customers), use &lt;code&gt;dbt run -s customers&lt;/code&gt;.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt; Define columns and attach Policy Tags in your dbt models.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Navigate to your dbt project and modify your customers.yml to attach Policy Tags:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;version&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;

&lt;span class="nx"&gt;models&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;customers&lt;/span&gt;
    &lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"Customers data with PII"&lt;/span&gt;
    &lt;span class="nx"&gt;meta&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;columns&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;customer_id&lt;/span&gt;
        &lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"Unique customer identifier"&lt;/span&gt;
        &lt;span class="nx"&gt;policy_tags&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
          &lt;span class="nx"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"{{ var('non_pii_sensitive_policy_tag_id') }}"&lt;/span&gt;
      &lt;span class="nx"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;email&lt;/span&gt;
        &lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"Customer email"&lt;/span&gt;
        &lt;span class="nx"&gt;policy_tags&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
          &lt;span class="nx"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"{{ var('pii_sensitive_policy_tag_id') }}"&lt;/span&gt;
      &lt;span class="nx"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;phone_number&lt;/span&gt;
        &lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"Customer phone number"&lt;/span&gt;
        &lt;span class="nx"&gt;policy_tags&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
          &lt;span class="nx"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"{{ var('pii_sensitive_policy_tag_id') }}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run dbt to apply the Policy Tags to BigQuery automatically: &lt;code&gt;dbt run -s customers&lt;/code&gt;&lt;/p&gt;
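
&lt;p&gt;To confirm that the tags were applied, you can inspect the table schema, for example with the &lt;code&gt;bq&lt;/code&gt; CLI (project, dataset, and table names as used in this tutorial); columns that carry a policy tag list it under &lt;code&gt;policyTags&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Show the customers schema, including any attached policy tags
bq show --schema --format=prettyjson gcp-data-governance:multiplexer_dataset.customers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;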

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Integrating dbt, Terraform, and Dataplex allows you to efficiently manage BigQuery Policy Tags, enforcing data governance policies in a scalable and automated way. This approach enhances security, compliance, and operational efficiency while reducing manual effort.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To avoid unnecessary charges, remember to destroy your project after testing by running &lt;code&gt;terraform destroy&lt;/code&gt; in the command line.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>datagovernance</category>
      <category>googlecloud</category>
      <category>terraform</category>
      <category>dbt</category>
    </item>
    <item>
      <title>Rusty Backends</title>
      <dc:creator>Jan Kleine</dc:creator>
      <pubDate>Tue, 21 Jan 2025 06:30:00 +0000</pubDate>
      <link>https://forem.com/ipt/rusty-backends-3551</link>
      <guid>https://forem.com/ipt/rusty-backends-3551</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This post was written together with &lt;a class="mentioned-user" href="https://dev.to/sekael"&gt;@sekael&lt;/a&gt; and &lt;a class="mentioned-user" href="https://dev.to/zkck"&gt;@zkck&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Rust has been a developer favorite for many years now and is consistently highly admired by developers&lt;a href="https://survey.stackoverflow.co/2023/#section-admired-and-desired-programming-scripting-and-markup-languages" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;. It made a buzz when it was introduced as a complementary language to C and Assembly in the Linux kernel with version 6.8&lt;a href="https://en.wikipedia.org/wiki/Rust_for_Linux" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;, and its speed and memory safety make it a popular choice for low-level and performance-critical applications. But with the whole world using the internet, we wanted to find out whether it can also handle the web and challenge the reigning backend champions like Java Spring or Go.&lt;/p&gt;

&lt;p&gt;To find answers, we wanted to get our hands dirty with three popular Rust web frameworks: &lt;a href="https://rocket.rs/" rel="noopener noreferrer"&gt;rocket&lt;/a&gt;, &lt;a href="https://github.com/tokio-rs/axum" rel="noopener noreferrer"&gt;axum&lt;/a&gt;, and &lt;a href="https://actix.rs/" rel="noopener noreferrer"&gt;actix&lt;/a&gt;, and get a feeling for their performance, features, and, most importantly, the developer experience.&lt;/p&gt;

&lt;p&gt;In this post, we will examine each of these frameworks by implementing the same example API endpoints in each one, connecting to a MongoDB database, and running the server in a Docker container.&lt;/p&gt;

&lt;p&gt;All three, rocket, axum, and actix, cover the full range of functionality you would expect from a web framework: routes, handlers, request and response parsing, middleware, state management, database interactions, logging, and testing. Furthermore, all of these frameworks perform well and will meet or exceed the needs of most web applications, so the choice really comes down to ergonomics and how it feels to develop within each framework.&lt;/p&gt;

&lt;p&gt;So let’s jump right in…&lt;/p&gt;

&lt;h2&gt;
  
  
  Rocket (&lt;a href="https://rocket.rs/" rel="noopener noreferrer"&gt;rocket.rs&lt;/a&gt;)
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/rwf2" rel="noopener noreferrer"&gt;
        rwf2
      &lt;/a&gt; / &lt;a href="https://github.com/rwf2/Rocket" rel="noopener noreferrer"&gt;
        Rocket
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A web framework for Rust.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Rocket&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="https://github.com/rwf2/Rocket/actions" rel="noopener noreferrer"&gt;&lt;img src="https://github.com/rwf2/Rocket/workflows/CI/badge.svg" alt="Build Status"&gt;&lt;/a&gt;
&lt;a href="https://rocket.rs" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/7d0b5bf90043737d85dec76c33fcefdc3d8b6f9d16c2c661c34bdb49f6f79cf9/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f7765622d726f636b65742e72732d7265642e7376673f7374796c653d666c6174266c6162656c3d687474707326636f6c6f72423d643333383437" alt="Rocket Homepage"&gt;&lt;/a&gt;
&lt;a href="https://crates.io/crates/rocket" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/acb274b13536de976ca8dcc0000e83d44bed4020529c7013bfbbc90636e5e7df/68747470733a2f2f696d672e736869656c64732e696f2f6372617465732f762f726f636b65742e737667" alt="Current Crates.io Version"&gt;&lt;/a&gt;
&lt;a href="https://chat.mozilla.org/#/room/#rocket:mozilla.org" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/2616d5193f1cd46c72cec148694e33d2bf8628dd926d1591f6b9730778adcd79/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f7374796c652d253233726f636b65743a6d6f7a696c6c612e6f72672d626c75652e7376673f7374796c653d666c6174266c6162656c3d2535426d253544" alt="Matrix: #rocket:mozilla.org"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Rocket is an async web framework for Rust with a focus on usability, security,
extensibility, and speed.&lt;/p&gt;
&lt;div class="highlight highlight-source-rust notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c1"&gt;#&lt;span class="pl-kos"&gt;[&lt;/span&gt;macro_use&lt;span class="pl-kos"&gt;]&lt;/span&gt;&lt;/span&gt; &lt;span class="pl-k"&gt;extern&lt;/span&gt; &lt;span class="pl-k"&gt;crate&lt;/span&gt; rocket&lt;span class="pl-kos"&gt;;&lt;/span&gt;

&lt;span class="pl-c1"&gt;#&lt;span class="pl-kos"&gt;[&lt;/span&gt;get&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-s"&gt;"/&amp;lt;name&amp;gt;/&amp;lt;age&amp;gt;"&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;]&lt;/span&gt;&lt;/span&gt;
&lt;span class="pl-k"&gt;fn&lt;/span&gt; &lt;span class="pl-en"&gt;hello&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-s1"&gt;name&lt;/span&gt;&lt;span class="pl-kos"&gt;:&lt;/span&gt; &lt;span class="pl-c1"&gt;&amp;amp;&lt;/span&gt;&lt;span class="pl-smi"&gt;str&lt;/span&gt;&lt;span class="pl-kos"&gt;,&lt;/span&gt; &lt;span class="pl-s1"&gt;age&lt;/span&gt;&lt;span class="pl-kos"&gt;:&lt;/span&gt; &lt;span class="pl-smi"&gt;u8&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt; -&amp;gt; &lt;span class="pl-smi"&gt;String&lt;/span&gt; &lt;span class="pl-kos"&gt;{&lt;/span&gt;
    &lt;span class="pl-en"&gt;format&lt;/span&gt;&lt;span class="pl-en"&gt;!&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-s"&gt;"Hello, {} year old named {}!"&lt;/span&gt;&lt;span class="pl-kos"&gt;,&lt;/span&gt; age&lt;span class="pl-kos"&gt;,&lt;/span&gt; name&lt;span class="pl-kos"&gt;)&lt;/span&gt;
&lt;span class="pl-kos"&gt;}&lt;/span&gt;

&lt;span class="pl-c1"&gt;#&lt;span class="pl-kos"&gt;[&lt;/span&gt;launch&lt;span class="pl-kos"&gt;]&lt;/span&gt;&lt;/span&gt;
&lt;span class="pl-k"&gt;fn&lt;/span&gt; &lt;span class="pl-en"&gt;rocket&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt; -&amp;gt; &lt;span class="pl-smi"&gt;_&lt;/span&gt; &lt;span class="pl-kos"&gt;{&lt;/span&gt;
    rocket&lt;span class="pl-kos"&gt;::&lt;/span&gt;&lt;span class="pl-en"&gt;build&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;.&lt;/span&gt;&lt;span class="pl-en"&gt;mount&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-s"&gt;"/hello"&lt;/span&gt;&lt;span class="pl-kos"&gt;,&lt;/span&gt; &lt;span class="pl-en"&gt;routes&lt;/span&gt;&lt;span class="pl-en"&gt;!&lt;/span&gt;&lt;span class="pl-kos"&gt;[&lt;/span&gt;hello&lt;span class="pl-kos"&gt;]&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt;
&lt;span class="pl-kos"&gt;}&lt;/span&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Visiting &lt;code&gt;localhost:8000/hello/John/58&lt;/code&gt;, for example, will trigger the &lt;code&gt;hello&lt;/code&gt;
route resulting in the string &lt;code&gt;Hello, 58 year old named John!&lt;/code&gt; being sent to the
browser. If an &lt;code&gt;&amp;lt;age&amp;gt;&lt;/code&gt; string was passed in that can't be parsed as a &lt;code&gt;u8&lt;/code&gt;, the
route won't get called, resulting in a 404 error.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Documentation&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Rocket is extensively documented:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://rocket.rs/overview/" rel="nofollow noopener noreferrer"&gt;Overview&lt;/a&gt;: A brief…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/rwf2/Rocket" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Rocket is a web framework for Rust that comes with the most batteries included of the three we looked at. It positions itself in the same league as Rails (Ruby) or Flask (Python) and aims to offer functionality for anything a web backend might need, including route handling, request and response parsing, database integrations, validation, and so on. One of its main goals is to minimize the amount of boilerplate code you have to write, and it does so through metaprogramming, i.e., heavy use of Rust macros.&lt;/p&gt;

&lt;p&gt;This makes it easy to integrate your actual logic with the rocket framework. Let’s take a quick look at how you might define a route handler with parameter guards and validation, a database connection pool, and a JSON payload.&lt;/p&gt;

&lt;p&gt;We can define a route handler simply by adding the appropriate macro to a function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="nd"&gt;#[get(&lt;/span&gt;&lt;span class="s"&gt;"/texts/&amp;lt;uuid&amp;gt;"&lt;/span&gt;&lt;span class="nd"&gt;)]&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;get_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Connection&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;TextsDatabase&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Uuid&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nd"&gt;json!&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="s"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;}))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If the request parameter cannot be parsed, the appropriate response code is sent instead of calling the function, so we have the usual comfort of type safety. Request bodies are handled very similarly, and of course, parsing integrates seamlessly with &lt;a href="https://serde.rs/" rel="noopener noreferrer"&gt;serde&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="nd"&gt;#[derive(Deserialize)]&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nv"&gt;'m&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt;'m&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nd"&gt;#[post(&lt;/span&gt;&lt;span class="s"&gt;"/texts"&lt;/span&gt;&lt;span class="nd"&gt;,&lt;/span&gt; &lt;span class="nd"&gt;format&lt;/span&gt; &lt;span class="nd"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"application/json"&lt;/span&gt;&lt;span class="nd"&gt;,&lt;/span&gt; &lt;span class="nd"&gt;data&lt;/span&gt; &lt;span class="nd"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"&amp;lt;msg&amp;gt;"&lt;/span&gt;&lt;span class="nd"&gt;)]&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;post_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Connection&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;TextsDatabase&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Json&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nv"&gt;'_&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Adding a database is as simple as annotating the appropriate struct and initializing it at startup. Of course, many popular DBMSs are supported.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="nd"&gt;#[derive(Database)]&lt;/span&gt;
&lt;span class="nd"&gt;#[database(&lt;/span&gt;&lt;span class="s"&gt;"texts"&lt;/span&gt;&lt;span class="nd"&gt;)]&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="nf"&gt;TextsDatabase&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;mongodb&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Rocket leans into nerdy space vocabulary, which adds to its charm. Thus, to get your web server off the ground, you simply launch it. Notice again how annotations are used to get the rocket going.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="nd"&gt;#[launch]&lt;/span&gt;
&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;rocket&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nn"&gt;rocket&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;build&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.attach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;TextsDatabase&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
        &lt;span class="nf"&gt;.mount&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nd"&gt;routes!&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;get_text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;post_text&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We have found rocket very enjoyable to work with, especially if you want to focus on the business logic of your server and feel safe trusting the framework to take the boilerplate off your hands. If you are rather new to Rust, or perhaps switching over from other frameworks that make heavy use of annotations, you will feel at home with Rocket.&lt;/p&gt;
&lt;h2&gt;
  
  
  Axum
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/tokio-rs" rel="noopener noreferrer"&gt;
        tokio-rs
      &lt;/a&gt; / &lt;a href="https://github.com/tokio-rs/axum" rel="noopener noreferrer"&gt;
        axum
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Ergonomic and modular web framework built with Tokio, Tower, and Hyper
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;axum&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;&lt;code&gt;axum&lt;/code&gt; is a web application framework that focuses on ergonomics and modularity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/tokio-rs/axum/actions/workflows/CI.yml" rel="noopener noreferrer"&gt;&lt;img src="https://github.com/tokio-rs/axum/actions/workflows/CI.yml/badge.svg?branch=main" alt="Build status"&gt;&lt;/a&gt;
&lt;a href="https://crates.io/crates/axum" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/0636f88c6507dba7ce401faa57637af8a9237dbdcad132c323579743d4091ab9/68747470733a2f2f696d672e736869656c64732e696f2f6372617465732f762f6178756d" alt="Crates.io"&gt;&lt;/a&gt;
&lt;a href="https://docs.rs/axum" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/194681819e1e75555dd9124488f6f2362c8a691049c849aa6d73b2de0a4ff0d3/68747470733a2f2f646f63732e72732f6178756d2f62616467652e737667" alt="Documentation"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;More information about this crate can be found in the &lt;a href="https://docs.rs/axum" rel="nofollow noopener noreferrer"&gt;crate documentation&lt;/a&gt;.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;High level features&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Route requests to handlers with a macro free API.&lt;/li&gt;
&lt;li&gt;Declaratively parse requests using extractors.&lt;/li&gt;
&lt;li&gt;Simple and predictable error handling model.&lt;/li&gt;
&lt;li&gt;Generate responses with minimal boilerplate.&lt;/li&gt;
&lt;li&gt;Take full advantage of the &lt;a href="https://crates.io/crates/tower" rel="nofollow noopener noreferrer"&gt;&lt;code&gt;tower&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://crates.io/crates/tower-http" rel="nofollow noopener noreferrer"&gt;&lt;code&gt;tower-http&lt;/code&gt;&lt;/a&gt; ecosystem of
middleware, services, and utilities.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In particular, the last point is what sets &lt;code&gt;axum&lt;/code&gt; apart from other frameworks.
&lt;code&gt;axum&lt;/code&gt; doesn't have its own middleware system but instead uses
&lt;a href="https://docs.rs/tower/latest/tower/trait.Service.html" rel="nofollow noopener noreferrer"&gt;&lt;code&gt;tower::Service&lt;/code&gt;&lt;/a&gt;. This means &lt;code&gt;axum&lt;/code&gt; gets timeouts, tracing, compression,
authorization, and more, for free. It also enables you to share middleware with
applications written using &lt;a href="https://crates.io/crates/hyper" rel="nofollow noopener noreferrer"&gt;&lt;code&gt;hyper&lt;/code&gt;&lt;/a&gt; or &lt;a href="https://crates.io/crates/tonic" rel="nofollow noopener noreferrer"&gt;&lt;code&gt;tonic&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Usage example&lt;/h2&gt;

&lt;/div&gt;
&lt;div class="highlight highlight-source-rust notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-k"&gt;use&lt;/span&gt; axum&lt;span class="pl-kos"&gt;::&lt;/span&gt;&lt;span class="pl-kos"&gt;{&lt;/span&gt;
    routing&lt;span class="pl-kos"&gt;::&lt;/span&gt;&lt;span class="pl-kos"&gt;{&lt;/span&gt;get&lt;span class="pl-kos"&gt;,&lt;/span&gt; post&lt;span class="pl-kos"&gt;}&lt;/span&gt;&lt;span class="pl-kos"&gt;,&lt;/span&gt;
    http&lt;span class="pl-kos"&gt;::&lt;/span&gt;&lt;span class="pl-v"&gt;StatusCode&lt;/span&gt;&lt;span class="pl-kos"&gt;,&lt;/span&gt;
    &lt;span class="pl-v"&gt;Json&lt;/span&gt;&lt;span class="pl-kos"&gt;,&lt;/span&gt; &lt;span class="pl-v"&gt;Router&lt;/span&gt;&lt;span class="pl-kos"&gt;,&lt;/span&gt;
&lt;span class="pl-kos"&gt;}&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;
&lt;span class="pl-k"&gt;use&lt;/span&gt; serde&lt;span class="pl-kos"&gt;::&lt;/span&gt;&lt;span class="pl-kos"&gt;{&lt;/span&gt;&lt;span class="pl-v"&gt;Deserialize&lt;/span&gt;&lt;span class="pl-kos"&gt;,&lt;/span&gt;&lt;/pre&gt;…
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/tokio-rs/axum" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;If you are looking for a framework that is closer to bare-metal Rust and comes with fewer batteries included, you may want to check out the axum framework developed by tokio-rs, the same team that maintains the tokio async runtime. Opting for axum means you will have to take care of more of the boilerplate yourself, but you are also not fighting a potentially opinionated framework to get your logic just the way you want it. Compared to rocket, axum does not depend heavily on macros or annotations, and it sets itself apart from its peers by not implementing its own middleware, relying on tower instead. Through the tower ecosystem, axum can offer timeouts, tracing, compression, authorization, and much more, while also enabling you to share your middleware with applications written with other web libraries like hyper.&lt;/p&gt;

&lt;p&gt;Looking at axum’s ergonomics, developers coming from Go will feel right at home. The syntax looks very similar to a web server written with, say, the gorilla/mux package in Go. So how would you implement your first route handler, connect to a database, and start the server? Let’s have a look.&lt;/p&gt;

&lt;p&gt;The same two functions from above look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;post_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nf"&gt;State&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="n"&gt;State&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;Arc&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nn"&gt;state&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;MongoAppState&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nf"&gt;Json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text_payload&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="n"&gt;Json&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nn"&gt;payloads&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;TextPayload&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;StatusCode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Json&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nn"&gt;payloads&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;InsertedResponse&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;StatusCode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Json&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nn"&gt;payloads&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ErrorResponse&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;get_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nf"&gt;State&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="n"&gt;State&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;Arc&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nn"&gt;state&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;MongoAppState&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nf"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text_id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="n"&gt;Path&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Json&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nn"&gt;payloads&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;TextPayload&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;StatusCode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Json&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nn"&gt;payloads&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ErrorResponse&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;No macros, but a lot more verbose. We see the same in the startup code: we have to do more manually, e.g., passing the DB connection along.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="nd"&gt;#[tokio::main]&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nn"&gt;anyhow&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;mongodb&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;with_uri_str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"mongodb://..."&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="nf"&gt;.unwrap&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;shared_state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;sync&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;Arc&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;state&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;MongoAppState&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

    &lt;span class="c1"&gt;// build our application with a single route&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/texts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;post_text&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="nf"&gt;.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/texts/:text_id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;get_text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.delete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;delete_text&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="nf"&gt;.with_state&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;shared_state&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// run our app with hyper, listening globally on port 3000&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;listener&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;tokio&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;net&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;TcpListener&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"0.0.0.0:3000"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="nf"&gt;.unwrap&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nn"&gt;axum&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;serve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="nf"&gt;.unwrap&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(())&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Axum is a very powerful web framework that benefits from the tokio-rs team’s expertise in building great Rust tooling. It is very actively developed and relies on other state-of-the-art crates for its middleware. You will most likely enjoy axum the most if you are already familiar with Rust, enjoy having fine-grained control over implementation details, or are switching from a similarly minimal web ecosystem like Go’s.&lt;/p&gt;
&lt;h2&gt;
  
  
  Actix (&lt;a href="https://actix.rs/" rel="noopener noreferrer"&gt;actix.rs&lt;/a&gt;)
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/actix" rel="noopener noreferrer"&gt;
        actix
      &lt;/a&gt; / &lt;a href="https://github.com/actix/actix-web" rel="noopener noreferrer"&gt;
        actix-web
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Actix Web is a powerful, pragmatic, and extremely fast web framework for Rust.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div&gt;
  &lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Actix Web&lt;/h1&gt;
&lt;/div&gt;


&lt;p&gt;&lt;br&gt;
    &lt;strong&gt;Actix Web is a powerful, pragmatic, and extremely fast web framework for Rust&lt;/strong&gt;&lt;br&gt;
  &lt;/p&gt;
&lt;br&gt;
  

&lt;p&gt;&lt;a href="https://crates.io/crates/actix-web" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/9d0f6286b07978ec075fbf14cec9a77aef18d902038dfad49a1c144f3144a48b/68747470733a2f2f696d672e736869656c64732e696f2f6372617465732f762f61637469782d7765623f6c6162656c3d6c6174657374" alt="crates.io"&gt;&lt;/a&gt;
&lt;a href="https://docs.rs/actix-web/4.10.2" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/092e02d184fb4dcc590316d01c8d42e08673d216c6c912f010ba0aa40d026523/68747470733a2f2f646f63732e72732f61637469782d7765622f62616467652e7376673f76657273696f6e3d342e31302e32" alt="Documentation"&gt;&lt;/a&gt;
&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/7016575f331e1c1ea989b89d469d6d0ad702336ff951ad732350ce3ea51ba8c2/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f72757374632d312e37322b2d6162363030302e737667"&gt;&lt;img src="https://camo.githubusercontent.com/7016575f331e1c1ea989b89d469d6d0ad702336ff951ad732350ce3ea51ba8c2/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f72757374632d312e37322b2d6162363030302e737667" alt="MSRV"&gt;&lt;/a&gt;
&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/05b9b2ec090e401a08316dd7dccf4899f66da4053fcdfab7dae79cd6498cbdca/68747470733a2f2f696d672e736869656c64732e696f2f6372617465732f6c2f61637469782d7765622e737667"&gt;&lt;img src="https://camo.githubusercontent.com/05b9b2ec090e401a08316dd7dccf4899f66da4053fcdfab7dae79cd6498cbdca/68747470733a2f2f696d672e736869656c64732e696f2f6372617465732f6c2f61637469782d7765622e737667" alt="MIT or Apache 2.0 licensed"&gt;&lt;/a&gt;
&lt;a href="https://deps.rs/crate/actix-web/4.10.2" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/b23691383d2d696d127d8b907ed42ccdc56112aa9d1a540688628585db6cd008/68747470733a2f2f646570732e72732f63726174652f61637469782d7765622f342e31302e322f7374617475732e737667" alt="Dependency Status"&gt;&lt;/a&gt;
&lt;br&gt;
&lt;a href="https://github.com/actix/actix-web/actions/workflows/ci.yml" rel="noopener noreferrer"&gt;&lt;img src="https://github.com/actix/actix-web/actions/workflows/ci.yml/badge.svg" alt="CI"&gt;&lt;/a&gt;
&lt;a href="https://codecov.io/gh/actix/actix-web" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/2569e52cdb414899ac7f7603a71440db28ace74ddb5921963163711939e6a874/68747470733a2f2f636f6465636f762e696f2f67682f61637469782f61637469782d7765622f67726170682f62616467652e7376673f746f6b656e3d6453774f6e7039514376" alt="codecov"&gt;&lt;/a&gt;
&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/1217bf1d5bf2fb8a566022771a18bc71619911a7404fda8cb95202393e138df4/68747470733a2f2f696d672e736869656c64732e696f2f6372617465732f642f61637469782d7765622e737667"&gt;&lt;img src="https://camo.githubusercontent.com/1217bf1d5bf2fb8a566022771a18bc71619911a7404fda8cb95202393e138df4/68747470733a2f2f696d672e736869656c64732e696f2f6372617465732f642f61637469782d7765622e737667" alt="downloads"&gt;&lt;/a&gt;
&lt;a href="https://discord.gg/NWpN5mmg3x" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/fdd6540432dffa2f99734a2d9279ef1ae31e81c93213039de8e1c2db082a5dfb/68747470733a2f2f696d672e736869656c64732e696f2f646973636f72642f3737313434343936313338333135333639353f6c6162656c3d63686174266c6f676f3d646973636f7264" alt="Chat on Discord"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Features&lt;/h2&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Supports &lt;em&gt;HTTP/1.x&lt;/em&gt; and &lt;em&gt;HTTP/2&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Streaming and pipelining&lt;/li&gt;
&lt;li&gt;Powerful &lt;a href="https://actix.rs/docs/url-dispatch/" rel="nofollow noopener noreferrer"&gt;request routing&lt;/a&gt; with optional macros&lt;/li&gt;
&lt;li&gt;Full &lt;a href="https://tokio.rs" rel="nofollow noopener noreferrer"&gt;Tokio&lt;/a&gt; compatibility&lt;/li&gt;
&lt;li&gt;Keep-alive and slow requests handling&lt;/li&gt;
&lt;li&gt;Client/server &lt;a href="https://actix.rs/docs/websockets/" rel="nofollow noopener noreferrer"&gt;WebSockets&lt;/a&gt; support&lt;/li&gt;
&lt;li&gt;Transparent content compression/decompression (br, gzip, deflate, zstd)&lt;/li&gt;
&lt;li&gt;Multipart streams&lt;/li&gt;
&lt;li&gt;Static assets&lt;/li&gt;
&lt;li&gt;SSL support using OpenSSL or Rustls&lt;/li&gt;
&lt;li&gt;Middlewares (&lt;a href="https://actix.rs/docs/middleware/" rel="nofollow noopener noreferrer"&gt;Logger, Session, CORS, etc&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Integrates with the &lt;a href="https://docs.rs/awc/" rel="nofollow noopener noreferrer"&gt;&lt;code&gt;awc&lt;/code&gt; HTTP client&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Runs on stable Rust 1.72+&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Documentation&lt;/h2&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://actix.rs" rel="nofollow noopener noreferrer"&gt;Website &amp;amp; User Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/actix/examples" rel="noopener noreferrer"&gt;Examples Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.rs/actix-web" rel="nofollow noopener noreferrer"&gt;API Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://actix.rs/actix-web/actix_web" rel="nofollow noopener noreferrer"&gt;API Documentation (master branch)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Example&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Dependencies:&lt;/p&gt;
&lt;div class="highlight highlight-source-toml notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;[&lt;span class="pl-en"&gt;dependencies&lt;/span&gt;]
&lt;span class="pl-smi"&gt;actix-web&lt;/span&gt; = &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;4&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Code:&lt;/p&gt;
&lt;div class="highlight highlight-source-rust notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-k"&gt;use&lt;/span&gt; actix_web&lt;span class="pl-kos"&gt;::&lt;/span&gt;&lt;span class="pl-kos"&gt;{&lt;/span&gt;get&lt;span class="pl-kos"&gt;,&lt;/span&gt; web&lt;span class="pl-kos"&gt;,&lt;/span&gt; &lt;span class="pl-v"&gt;App&lt;/span&gt;&lt;span class="pl-kos"&gt;,&lt;/span&gt; &lt;span class="pl-v"&gt;HttpServer&lt;/span&gt;&lt;span class="pl-kos"&gt;,&lt;/span&gt; &lt;span class="pl-v"&gt;Responder&lt;/span&gt;&lt;span class="pl-kos"&gt;}&lt;/span&gt;&lt;span class="pl-kos"&gt;;&lt;/span&gt;
&lt;span class="pl-c1"&gt;#&lt;span class="pl-kos"&gt;[&lt;/span&gt;get&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-s"&gt;"/hello/{name}"&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt;&lt;span class="pl-kos"&gt;]&lt;/span&gt;&lt;/span&gt;
&lt;span class="pl-k"&gt;async&lt;/span&gt; &lt;span class="pl-k"&gt;fn&lt;/span&gt; &lt;span class="pl-en"&gt;greet&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-s1"&gt;name&lt;/span&gt;&lt;span class="pl-kos"&gt;:&lt;/span&gt; web&lt;span class="pl-kos"&gt;::&lt;/span&gt;&lt;span class="pl-smi"&gt;Path&lt;/span&gt;&lt;span class="pl-kos"&gt;&amp;lt;&lt;/span&gt;&lt;span class="pl-smi"&gt;String&lt;/span&gt;&lt;span class="pl-kos"&gt;&amp;gt;&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt; -&amp;gt; &lt;span class="pl-k"&gt;impl&lt;/span&gt; &lt;span class="pl-smi"&gt;Responder&lt;/span&gt; &lt;span class="pl-kos"&gt;{&lt;/span&gt;
    &lt;span class="pl-en"&gt;format&lt;/span&gt;&lt;span class="pl-en"&gt;!&lt;/span&gt;&lt;span class="pl-kos"&gt;(&lt;/span&gt;&lt;span class="pl-s"&gt;"Hello {name}!"&lt;/span&gt;&lt;span class="pl-kos"&gt;)&lt;/span&gt;
&lt;span class="pl-kos"&gt;}&lt;/span&gt;

&lt;span class="pl-c1"&gt;#&lt;span class="pl-kos"&gt;[&lt;/span&gt;actix_web&lt;span class="pl-kos"&gt;::&lt;/span&gt;main&lt;/span&gt;&lt;/pre&gt;…
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/actix/actix-web" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Last but not least, we want to take a look at actix-web. It combines aspects of both previous frameworks, rocket as well as axum, which is reflected in its ergonomics. Annotations are back on the menu and macros are used more heavily, especially in the definition of route handlers. When implementing middleware or database connections, however, you will not find the same level of abstraction as you might with rocket. Actix is extremely fast &lt;a href="https://www.techempower.com/benchmarks/#section=data-r22&amp;amp;test=composite&amp;amp;hw=ph" rel="noopener noreferrer"&gt;3&lt;/a&gt; and aims to cater to both experienced Rust developers and newcomers who are just starting with Rust development. It ships with its own middleware, e.g. for logging, session management, or cross-origin resource sharing, and lets you extend the framework with your own middleware that hooks into actix. Let’s have a look at how route handlers, database connections, and starting the server are handled in actix.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="nd"&gt;#[post(&lt;/span&gt;&lt;span class="s"&gt;"/texts"&lt;/span&gt;&lt;span class="nd"&gt;)]&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;post_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nn"&gt;web&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Data&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Client&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nn"&gt;web&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Json&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;TextResponse&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;impl&lt;/span&gt; &lt;span class="n"&gt;Responder&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nd"&gt;#[get(&lt;/span&gt;&lt;span class="s"&gt;"/texts/{uuid}"&lt;/span&gt;&lt;span class="nd"&gt;)]&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;get_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nn"&gt;web&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Data&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Client&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nn"&gt;web&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Path&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Uuid&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;impl&lt;/span&gt; &lt;span class="n"&gt;Responder&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Again, we have a lot less boilerplate. Launching the server, though, falls between the first two examples in terms of verbosity.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="nd"&gt;#[actix_web::main]&lt;/span&gt; &lt;span class="c1"&gt;// or #[tokio::main]&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;io&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;db_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;with_uri_str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"mongodb://...."&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="nf"&gt;.expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"failed to connect"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nn"&gt;HttpServer&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;move&lt;/span&gt; &lt;span class="p"&gt;||&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nn"&gt;App&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="nf"&gt;.wrap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;Logger&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;default&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
            &lt;span class="nf"&gt;.app_data&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;web&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;Data&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;db_client&lt;/span&gt;&lt;span class="nf"&gt;.clone&lt;/span&gt;&lt;span class="p"&gt;()))&lt;/span&gt;
            &lt;span class="nf"&gt;.service&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;post_text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;.service&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;get_text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="nf"&gt;.bind&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="s"&gt;"0.0.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;
    &lt;span class="nf"&gt;.run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;.await&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The benchmarks speak for themselves: actix is indeed blazingly fast! And that does not come at the cost of developer experience, which actix keeps to a very high standard. It ships with more batteries included than axum, but does not abstract everything away quite as much as rocket. If you already feel comfortable writing Rust and are used to lower-level web development, you will enjoy developing in actix, and your users will enjoy the performance!&lt;/p&gt;
&lt;h3&gt;
  
  
  Acknowledgements
&lt;/h3&gt;

&lt;p&gt;Selim wrote the initial draft of this post and Selim, Zak, and I each implemented the sample API using one of the above frameworks.&lt;/p&gt;

&lt;p&gt;Cover image: &lt;a href="https://www.pexels.com/photo/brown-chains-114108/" rel="noopener noreferrer"&gt;"Brown Chains" by Miguel Á. Padriñán&lt;/a&gt;&lt;/p&gt;


&lt;div class="ltag__user ltag__user__id__2524490"&gt;
    &lt;a href="/sekael" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2524490%2F2a0fb332-56a6-4702-83cd-df476e44ef5a.png" alt="sekael image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/sekael"&gt;Selim Kaelin&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/sekael"&gt;I am a mountain athlete at heart, enjoy learning about and writing code, and passionate about Rust, Cloud, and maps!&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;



&lt;div class="ltag__user ltag__user__id__2477319"&gt;
    &lt;a href="/zkck" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2477319%2Fe4c56b64-a970-4d41-86f6-9f6933fbc0c9.jpeg" alt="zkck image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/zkck"&gt;Zak Cook&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/zkck"&gt;&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;



&lt;div class="ltag__user ltag__user__id__2732274"&gt;
    &lt;a href="/jankleine" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2732274%2Ff6769039-9e89-4990-b520-8f1f95714d22.jpg" alt="jankleine image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/jankleine"&gt;Jan Kleine&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/jankleine"&gt;&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;



</description>
      <category>rust</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Two Years in the Vault: 4 Best Practices 🔒</title>
      <dc:creator>Zak Cook</dc:creator>
      <pubDate>Mon, 09 Dec 2024 08:30:00 +0000</pubDate>
      <link>https://forem.com/ipt/two-years-in-the-vault-4-best-practices-4acb</link>
      <guid>https://forem.com/ipt/two-years-in-the-vault-4-best-practices-4acb</guid>
      <description>&lt;p&gt;I work as an IT consultant. Over the past two years, we've been working with our client on a &lt;a href="https://en.wikipedia.org/wiki/Platform_engineering" rel="noopener noreferrer"&gt;platform engineering&lt;/a&gt; project, where the use of &lt;a href="https://www.vaultproject.io/" rel="noopener noreferrer"&gt;HashiCorp Vault&lt;/a&gt; for &lt;a href="https://www.redhat.com/en/topics/devops/what-is-secrets-management" rel="noopener noreferrer"&gt;secrets management&lt;/a&gt; was prevalent. Our Vault has enabled us to manage hundreds of credentials, increasing the security of our developer platform, and has resulted in a thorough and battle-tested configuration of our Vault which we can be proud of.&lt;/p&gt;

&lt;p&gt;However, over the past two years, there have been quite a few learning moments. Vault is a great product with lots of flexibility, but that flexibility opens up a lot of possibilities for misconfiguration or misuse. For us, it meant a few tedious reconfigurations of our Vault. This post aims to share the learnings I wish we had known from the very beginning, in the form of digestible tips.&lt;/p&gt;

&lt;p&gt;These tips are for everyone who needs to work with HashiCorp Vault, whether as a developer or as an administrator (you may still learn something). If you don't use Vault, simply bookmark this post; it may come in handy in the future.&lt;/p&gt;

&lt;p&gt;TL;DR:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Configure with an infrastructure-as-code (IaC) tool&lt;/li&gt;
&lt;li&gt;Policies describe permissions&lt;/li&gt;
&lt;li&gt;KVv2: save raw data, format elsewhere&lt;/li&gt;
&lt;li&gt;Beware of write permissions on &lt;code&gt;sys/policy&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Consider templated policies (upcoming)&lt;/li&gt;
&lt;li&gt;KVv2: make secrets "connection bundles" (upcoming)&lt;/li&gt;
&lt;li&gt;Utilize &lt;code&gt;-output-policy&lt;/code&gt; from the CLI (upcoming)&lt;/li&gt;
&lt;li&gt;Think about your path structure (upcoming)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is a two-part series: I'll cover the first four tips here, and the remaining four in an upcoming post. Stay tuned.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Glossary:&lt;/strong&gt; I use the words &lt;em&gt;component&lt;/em&gt; and &lt;em&gt;system&lt;/em&gt; a lot. These could refer to microservices, short-running jobs, or monolithic servers, which run together to form some sort of workload. If you're running on Kubernetes, think of a &lt;em&gt;component&lt;/em&gt; as a Deployment, and a &lt;em&gt;system&lt;/em&gt; as the Kubernetes cluster.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Tip 1: Configure with an infrastructure-as-code (IaC) tool
&lt;/h2&gt;

&lt;p&gt;When first starting to integrate a Vault into a large system, one naturally does exploratory work locally with the Vault CLI. This is all good, but concretizing that configuration into infrastructure-as-code (IaC) is essential, and by doing it early you avoid the pain of migrating your configuration down the road.&lt;/p&gt;

&lt;p&gt;Potential solutions for IaC are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;/&lt;a href="https://opentofu.org/" rel="noopener noreferrer"&gt;OpenTofu&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.pulumi.com/" rel="noopener noreferrer"&gt;Pulumi&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For configuring your Vault, I would generally recommend Terraform/OpenTofu, which takes a declarative approach, as opposed to Pulumi. Pulumi is a good fit when you have lots of dependencies in a complex system that need to be handled programmatically.&lt;/p&gt;

&lt;p&gt;We started our Vault journey by simply documenting how we configured our secrets engines, auth methods, and policies. So when we had to set up a new Vault, that meant going through the documentation and running the commands one by one. At some point we (painfully) switched to Terraform/OpenTofu, which immediately opened up many doors for us:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Writing a testing framework, where we would test the access to Vault paths by simulating the components in our system, allowing us to check for any regressions&lt;/li&gt;
&lt;li&gt;Reusing the IaC to configure Vaults in multiple environments&lt;/li&gt;
&lt;li&gt;Integrating with CI pipelines to automate updates to our Vaults&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are many more advantages. The point is: start with IaC directly, so you don't have to go through the same process we did of retrofitting IaC onto an already configured Vault. In case you have to: &lt;a href="https://developer.hashicorp.com/terraform/language/import" rel="noopener noreferrer"&gt;Terraform Import Blocks&lt;/a&gt; saved our lives :)&lt;/p&gt;
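
&lt;p&gt;As a minimal sketch of what this looks like, the following Terraform/OpenTofu configuration manages a KVv2 mount and a policy via the HashiCorp Vault provider. The address, mount path, and policy name are purely illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "vault" {
  address = "https://vault.example.com:8200"
}

# the KVv2 engine holding our static secrets
resource "vault_mount" "secrets" {
  path = "secrets"
  type = "kv-v2"
}

# a policy as code, versioned alongside the rest of the configuration
resource "vault_policy" "read_app_config" {
  name   = "read-app-config"
  policy = &amp;lt;&amp;lt;EOT
path "secrets/data/app-config" {
  capabilities = ["read"]
}
EOT
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this in place, standing up a new Vault or reviewing a policy change becomes a &lt;code&gt;plan&lt;/code&gt;/&lt;code&gt;apply&lt;/code&gt; cycle instead of a manual command run-through.&lt;/p&gt;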

&lt;h2&gt;
  
  
  Tip 2: Policies describe permissions
&lt;/h2&gt;

&lt;p&gt;Policies are core to HashiCorp Vault, being one of the pillars in its access control mechanisms. A Vault policy is a set of permissions, each being a combination of a path and capabilities on that path. Efficient management of policies is important in order to keep your Vault lean and scalable.&lt;/p&gt;

&lt;p&gt;Let's consider a very basic Vault, which has a &lt;a href="https://developer.hashicorp.com/vault/docs/secrets/kv/kv-v2" rel="noopener noreferrer"&gt;KVv2 engine&lt;/a&gt; under &lt;code&gt;secrets/&lt;/code&gt;, used to store secret recipes under the &lt;code&gt;recipes/&lt;/code&gt; subpath. The following policy could be the "head chef" policy, allowing the head chef to browse the entire secrets engine and update any of the secret recipes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vault policy write head-chef - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
# browse secrets
path "secrets/metadata/*" {
  capabilities = ["read", "list"]
}
# manage recipes
path "secrets/data/recipes/*" {
  capabilities = ["create", "update", "read", "delete"]
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We would then attach this policy to a role with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault write auth/approle/role/head-chef token_policies=head-chef
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we decided that the name of the policy would match the role, namely &lt;code&gt;head-chef&lt;/code&gt;. This works, but we have found that as policies become larger and more roles are added to the Vault, &lt;strong&gt;breaking down policies into smaller, more meaningful sets of permissions&lt;/strong&gt; becomes increasingly important. &lt;strong&gt;Policy names should also describe the permissions they encapsulate&lt;/strong&gt;, and should not refer back to the role that uses them.&lt;/p&gt;

&lt;p&gt;To help break down policies, it is useful to decide on a &lt;strong&gt;naming standard&lt;/strong&gt; for your policies. An example would be &lt;code&gt;&amp;lt;verb&amp;gt;-&amp;lt;subject of policy&amp;gt;&lt;/code&gt;, like &lt;code&gt;read-nuclear-codes&lt;/code&gt;. In our example we have to break down the policy a bit more for this convention to apply. Let's split the &lt;code&gt;head-chef&lt;/code&gt; policy into two policies, &lt;code&gt;browse-secrets&lt;/code&gt; and &lt;code&gt;manage-recipes&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vault policy write browse-secrets - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
# browse secrets
path "secrets/metadata/*" {
  capabilities = ["read", "list"]
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;vault policy write manage-recipes - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
# manage recipes
path "secrets/data/recipes/*" {
  capabilities = ["create", "update", "read", "delete"]
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;vault write auth/approle/role/head-chef &lt;span class="nv"&gt;token_policies&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;browse-secrets,manage-recipes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is now much more transparent which permissions are associated with this role. In addition, breaking down the policy made parts of it reusable: we can now do something like the following, reusing the &lt;code&gt;browse-secrets&lt;/code&gt; policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vault policy write manage-greek-recipes - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
# manage greek recipes
path "secrets/data/recipes/greek/*" {
  capabilities = ["create", "update", "read", "delete"]
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;vault write auth/approle/role/chef-george &lt;span class="nv"&gt;token_policies&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;browse-secrets,manage-greek-recipes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deciding on this naming convention for our policies allowed us to scale to many more roles in our Vault. One thing to consider, though, is that paths may end up overlapping among the policies assigned to a role. This requires being aware of how Vault's &lt;a href="https://developer.hashicorp.com/vault/docs/concepts/policies#priority-matching" rel="noopener noreferrer"&gt;priority matching&lt;/a&gt; works.&lt;/p&gt;
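
&lt;p&gt;As a (hypothetical) illustration, suppose a single role ends up holding both of the following rules via its attached policies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# from a broad read-only policy
path "secrets/data/recipes/*" {
  capabilities = ["read"]
}

# from a more specific policy
path "secrets/data/recipes/greek/*" {
  capabilities = ["create", "update", "read", "delete"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As I understand Vault's priority matching, the more specific path takes precedence, so requests under &lt;code&gt;secrets/data/recipes/greek/&lt;/code&gt; get the full set of capabilities while the remaining recipes stay read-only; capabilities are only merged across policies when the paths are identical.&lt;/p&gt;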

&lt;h2&gt;
  
  
  Tip 3: KVv2: save raw data, format elsewhere
&lt;/h2&gt;

&lt;p&gt;Vault's KVv2 engine allows saving static secrets in the form of key-value pairs. Choosing the appropriate key-value pairs to save in a secret will both maximize flexibility in how your infrastructure consumes those secrets and minimize duplicated data.&lt;/p&gt;

&lt;p&gt;Let's say you have a component which requires credentials to a database, and needs them provided inside a config, e.g. in TOML format. Since the config holds secrets, it may be tempting to save the entire config in Vault's KVv2 engine, and fetch that config when starting your component:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vault kv put secrets/components/my-component config.toml&lt;span class="o"&gt;=&lt;/span&gt;- &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
[database]
host = "database.example.com"
username = "postgres"
password = "1234"
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, this approach is limiting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reusing the database credentials means copy-pasting them into another config, so there is no single source of truth&lt;/li&gt;
&lt;li&gt;Any non-secret config change needs to go through the Vault&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This can be avoided: the Vault ecosystem has a large number of helper components that support templating, among them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developer.hashicorp.com/vault/docs/agent-and-proxy/agent/template" rel="noopener noreferrer"&gt;Vault agent templating&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://external-secrets.io/latest/guides/templating/" rel="noopener noreferrer"&gt;External Secrets Operator templating&lt;/a&gt; in case you're running on Kubernetes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As an example, let's use an &lt;code&gt;ExternalSecret&lt;/code&gt; for our templating. First, let's save the &lt;strong&gt;standalone credentials&lt;/strong&gt; instead of our config file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vault kv put secrets/databases/my-database &lt;span class="nv"&gt;username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgres &lt;span class="nv"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1234
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we'll write an &lt;code&gt;ExternalSecret&lt;/code&gt; making use of this raw data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-secrets.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ExternalSecret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# ... (reference to SecretStore)&lt;/span&gt;
  &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;engineVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v2&lt;/span&gt;
      &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;config.toml&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;[database]&lt;/span&gt;
          &lt;span class="s"&gt;host = "database.example.com"&lt;/span&gt;
          &lt;span class="s"&gt;username = {{ .username | quote }}&lt;/span&gt;
          &lt;span class="s"&gt;password = {{ .password | quote }}&lt;/span&gt;
  &lt;span class="na"&gt;dataFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;extract&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/secrets/databases/my-database&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This would then render a &lt;code&gt;Secret&lt;/code&gt; resource with the following content under the &lt;code&gt;config.toml&lt;/code&gt; key, which can be mounted in our component:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[database]&lt;/span&gt;
&lt;span class="py"&gt;host&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"database.example.com"&lt;/span&gt;
&lt;span class="py"&gt;username&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"postgres"&lt;/span&gt;
&lt;span class="py"&gt;password&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"1234"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The structure of our KVv2 engine is now cleaner: it is not aware of the components that access it, and holds reusable secrets. Another option is to tweak how the component is configured. Instead of taking credentials directly, our component could take a Vault path instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[database]&lt;/span&gt;
&lt;span class="py"&gt;host&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"database.example.com"&lt;/span&gt;
&lt;span class="py"&gt;vault_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"secrets/databases/my-database"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This, however, introduces a dependency on HashiCorp Vault in the application code itself, as the code now calls the Vault API to retrieve the secret. Some applications may want to stay agnostic to the type of software storing the credentials, in which case a templating solution is more suitable.&lt;/p&gt;
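&lt;p&gt;As an illustrative sketch (not part of the original setup), fetching such a secret in application code with the &lt;a href="https://hvac.readthedocs.io/" rel="noopener noreferrer"&gt;hvac&lt;/a&gt; Python client could look roughly like this; the Vault address and token-based authentication are assumptions, and error handling is omitted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import hvac

# Assumed Vault address and token auth; real deployments typically use
# AppRole, Kubernetes auth, or similar instead of a raw token.
client = hvac.Client(url="https://vault.example.com:8200", token="...")

# "secrets" is the KVv2 mount point; the path matches the config above.
response = client.secrets.kv.v2.read_secret_version(
    path="databases/my-database",
    mount_point="secrets",
)
credentials = response["data"]["data"]  # {"username": ..., "password": ...}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;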

&lt;h2&gt;
  
  
  Tip 4: Beware of write permissions on &lt;code&gt;sys/policy&lt;/code&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Understanding this tip is a bit more involved, but please bear with me. It is arguably the most important tip in this series, especially in environments where security is critical.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A Vault used in a complex system sometimes needs to support users being added asynchronously: for example, adding a new role/user with access to a subdirectory of a KVv2 secrets engine. To automate this, companies may resort to building a management component that automates the creation of a role/user and the policies attached to it.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://developer.hashicorp.com/vault/api-docs/system/policy#create-update-policy" rel="noopener noreferrer"&gt;&lt;code&gt;sys/policy&lt;/code&gt;&lt;/a&gt; (or &lt;a href="https://developer.hashicorp.com/vault/api-docs/system/policies#create-update-acl-policy" rel="noopener noreferrer"&gt;&lt;code&gt;sys/policies/acl&lt;/code&gt;&lt;/a&gt;) path is used to grant permissions on managing policies in Vault, and is necessary to create policies on the fly. We will see in this chapter how granting write permissions on &lt;code&gt;sys/policy&lt;/code&gt; &lt;strong&gt;can lead to severe security risks&lt;/strong&gt;, and provide some examples on how to mitigate these risks.&lt;/p&gt;

&lt;p&gt;As an example, let's consider our Vault which is meant to manage recipes. Our company wants to support employees requesting their own subdirectory in our &lt;code&gt;secrets/&lt;/code&gt; KVv2 engine. This would involve the employees sending an API request to a component, let's call it the &lt;code&gt;employee-manager&lt;/code&gt;, which would need to execute the following commands on the Vault upon receiving such a request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# create policy for employee&lt;/span&gt;
vault policy write &lt;span class="s2"&gt;"manage-recipes-&lt;/span&gt;&lt;span class="nv"&gt;$EMPLOYEE_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
path "secrets/data/recipes/&lt;/span&gt;&lt;span class="nv"&gt;$EMPLOYEE_NAME&lt;/span&gt;&lt;span class="sh"&gt;/*" {
  capabilities = ["create", "read", "update", "delete"]
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="c"&gt;# create role for employee with attached policy&lt;/span&gt;
vault write auth/approle/role/&lt;span class="nv"&gt;$EMPLOYEE_NAME&lt;/span&gt; &lt;span class="nv"&gt;token_policies&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"manage-recipes-&lt;/span&gt;&lt;span class="nv"&gt;$EMPLOYEE_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This requires the &lt;code&gt;employee-manager&lt;/code&gt; component to have the necessary permissions to create both roles and policies for our employees. Let's set up a role and policies allowing the component to execute the above commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# create "create-policies", "create-roles" policies&lt;/span&gt;
vault policy write &lt;span class="s2"&gt;"create-policies"&lt;/span&gt; - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
path "sys/policy/+" {
  capabilities = ["create", "update"]
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;vault policy write &lt;span class="s2"&gt;"create-roles"&lt;/span&gt; - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
path "auth/approle/role/+" {
  capabilities = ["create", "update"]
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="c"&gt;# attach policies to role&lt;/span&gt;
vault write auth/approle/role/employee-manager &lt;span class="nv"&gt;token_policies&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;create-policies,create-roles
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above permissions are &lt;strong&gt;extremely dangerous&lt;/strong&gt; to provide to any component or user of Vault, especially in conjunction.&lt;/p&gt;

&lt;p&gt;Imagine our &lt;code&gt;employee-manager&lt;/code&gt; component is insecure and gets compromised by an attacker. The attacker then uses the component's credentials to log in to Vault. They can now &lt;strong&gt;create new policies, with any permissions, and assign those policies to the &lt;code&gt;employee-manager&lt;/code&gt; role which they control&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# create malicious policy&lt;/span&gt;
vault policy write &lt;span class="s2"&gt;"read-secrets"&lt;/span&gt; - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
path "secrets/data/*" {
  capabilities = ["read"]
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="c"&gt;# update the compromised role with malicious policy&lt;/span&gt;
vault write auth/approle/role/employee-manager &lt;span class="nv"&gt;token_policies&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;create-policies,create-roles,read-secrets

&lt;span class="c"&gt;# after logging in again, the attacker could execute:&lt;/span&gt;
vault &lt;span class="nb"&gt;read &lt;/span&gt;secrets/data/recipes/gordon/super-secret-meatballs-recipe
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So the component must effectively be considered an &lt;strong&gt;admin&lt;/strong&gt; on the Vault, as it can create policies for absolutely anything and assign them to itself. To mitigate this security risk, there are the following solutions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Avoid granting permissions on &lt;code&gt;sys/policy&lt;/code&gt; altogether and consider &lt;strong&gt;using &lt;a href="https://developer.hashicorp.com/vault/docs/concepts/policies#templated-policies" rel="noopener noreferrer"&gt;templated policies&lt;/a&gt;&lt;/strong&gt;. This is often the cleanest solution, and has the added advantage of reducing the amount of configuration.&lt;/li&gt;
&lt;li&gt;Grant permissions on a &lt;strong&gt;subfolder of &lt;code&gt;sys/policy&lt;/code&gt;&lt;/strong&gt;. This subfolder must be disjoint from the policies granted to &lt;code&gt;employee-manager&lt;/code&gt;, and &lt;code&gt;employee-manager&lt;/code&gt; should not be able to &lt;code&gt;update&lt;/code&gt; its own role.&lt;/li&gt;
&lt;li&gt;Grant only &lt;code&gt;create&lt;/code&gt; permissions on anything under &lt;code&gt;sys/policy&lt;/code&gt;, and &lt;code&gt;employee-manager&lt;/code&gt; should not be able to &lt;code&gt;update&lt;/code&gt; its own role.&lt;/li&gt;
&lt;/ol&gt;
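&lt;p&gt;To illustrate option 1 (a sketch, assuming employees authenticate through an auth method that creates an identity entity named after the employee): a single templated policy, written once by an administrator, can replace all the per-employee policies and remove the need for &lt;code&gt;sys/policy&lt;/code&gt; write access entirely:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# one static policy covering all employees via identity templating
vault policy write "manage-recipes" - &lt;&lt;EOF
path "secrets/data/recipes/{{identity.entity.name}}/*" {
  capabilities = ["create", "read", "update", "delete"]
}
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;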

&lt;p&gt;There may be more mitigation options, but the point of this chapter is to make you consider the event of a breach and be aware of the &lt;strong&gt;effective permissions&lt;/strong&gt; you may be granting to your system components.&lt;/p&gt;

&lt;p&gt;That's all the tips for now. Thanks for reading, I hope you learned something, and stay tuned for the second half and other posts to come.&lt;/p&gt;

</description>
      <category>hashicorp</category>
      <category>vault</category>
      <category>security</category>
      <category>devops</category>
    </item>
    <item>
      <title>A Very Deep Dive Into Docker Builds</title>
      <dc:creator>Jakob Beckmann</dc:creator>
      <pubDate>Tue, 26 Nov 2024 06:25:41 +0000</pubDate>
      <link>https://forem.com/ipt/a-very-deep-dive-into-docker-builds-270n</link>
      <guid>https://forem.com/ipt/a-very-deep-dive-into-docker-builds-270n</guid>
      <description>&lt;p&gt;Containers are everywhere: from Kubernetes for orchestrating deployments and simplifying operations, to Dev Containers for flexible yet reproducible development environments. Yet, while containers are ubiquitous, images are often built sub-optimally. In this post we will look at a full example of a Docker build for a Python application and the best practices to consider.&lt;/p&gt;

&lt;h2&gt;
  
  
  Disclaimer
&lt;/h2&gt;

&lt;p&gt;This is a real-world example from a very small component we built in Python for one of our clients. Very few alterations were made to the original configuration (mostly changing URLs and removing email addresses). We will go in depth as to why we did every single little thing. While some of it is quite Python-centric, the same principles apply to other languages, and the text should be broad enough to make clear how to transfer this example to different languages.&lt;/p&gt;

&lt;p&gt;Also, this is a &lt;em&gt;long&lt;/em&gt; article, so if you actually plan on reading it, grab yourself a snack and a fresh drink first.&lt;/p&gt;

&lt;h2&gt;
  
  
  Goal
&lt;/h2&gt;

&lt;p&gt;The goal of this post is to showcase how one can set up a Docker build that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;is fully reproducible,&lt;/li&gt;
&lt;li&gt;is as fast as possible,&lt;/li&gt;
&lt;li&gt;fails early on code issues,&lt;/li&gt;
&lt;li&gt;isolates test code from deployed code,&lt;/li&gt;
&lt;li&gt;is secure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The example we will use for this implements quite a lot to ensure only quality code reaches production, and that it can do so as fast as possible. Going all the way might not be necessary for all projects using Docker. For instance, if you release code to production only once a day (or less) you might care less about release build cache optimization. This example is however meant to show the "extreme" to which you can push Docker, so that you can (in theory) push code to production fully continuously (CI/CD principles). But yeah ...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdf1jdml0g5g4uk006jus.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdf1jdml0g5g4uk006jus.jpeg" alt="A meme that one does not simply push docker images to production" width="568" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why
&lt;/h2&gt;

&lt;p&gt;Why do we have these goals? Reproducible builds are one of the most important factors for proper compliance and for easier debugging. Debugging is simpler since we ensure that no matter the environment, date, or location of the build, if it succeeds, the same input generates the same output. Moreover, it brings stability, as a pipeline cannot suddenly fail on a nightly build (if you still do such things) because a new upstream library or program used somewhere in your supply chain was released.&lt;/p&gt;
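&lt;p&gt;As one concrete measure towards reproducibility (a sketch; the digest shown is a placeholder, not a real one): base images can be pinned by digest rather than by tag alone, since a tag can be re-pushed upstream while a digest uniquely identifies the image content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;# pinned by tag only: the tag could point to different content tomorrow
FROM python:3.9.2-slim

# pinned by digest: always resolves to the exact same layers (placeholder digest)
FROM python:3.9.2-slim@sha256:&lt;digest-of-the-image&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;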

&lt;p&gt;Regarding compliance, we need to be able to determine, and revert to, the exact state of software that was deployed in the past. Without reproducible builds, using Git to track back to a previous state of deployed code does not help you much: while you can validate what code you deployed, you don't know what versions of everything else you deployed with it.&lt;/p&gt;

&lt;p&gt;Builds should be fast, and fail fast. The reason here is that no one likes to wait. You don't want to wait for 2 hours to figure out whether a tiny code change breaks tests or does not even compile.&lt;/p&gt;

&lt;p&gt;You will want to isolate test code from deployed code, because more code equals more bugs. While testing frameworks are very good at isolating test code from the code being tested, test code itself carries a risk of bugs. Moreover, test code is unneeded bloat in your runtime application. Thus it should be isolated from it.&lt;/p&gt;

&lt;p&gt;Finally security. While some people think that containers improve security by default, this is not the case. Container technology has the potential to indeed improve the robustness of some security measures and controls. However, in order to achieve this, one needs to correctly utilize containers and build the images with security in mind. For instance, if an image contains certain utilities that allow it to connect to the internet (such as &lt;code&gt;curl&lt;/code&gt; or &lt;code&gt;wget&lt;/code&gt;), it suddenly makes the container much more vulnerable to container escape attacks (where an attacker manages to move from the container to the underlying host), and hence the whole isolation benefit of the container (which can be a security control) is broken. The same is true for containers that contain interpreters and allow the runtime user to open, edit and execute arbitrary files. As our container will contain Python code, and hence the Python interpreter, this is definitely something we need to take very seriously.&lt;/p&gt;

&lt;h2&gt;
  
  
  Python Goals
&lt;/h2&gt;

&lt;p&gt;Our example is based on Python, an interpreted language. This is not ideal for our purposes, as it means there is no compile step, and compilation optimization is a very important aspect of Docker builds. To still address this, I will discuss it, but without referring to the configuration examples. One could ask why I did not take a compiled language example then. The reason is very simple: I wanted a real-world example so that this post is not just theoretical goodness, and most Golang image builds I am currently working on are more basic and not as educational.&lt;/p&gt;

&lt;p&gt;Yet another question could be "why deploy Python in Docker in the first place?". This is a very legitimate question. Python requires a lot of OS bloat just to be able to run, which means a VM is typically a good choice to host it. For all those saying that Docker is still better because of performance (due to faster startup, no hardware virtualization overhead, etc.): this is not true in cases where a large part of an OS needs to be in the Docker image. A VM of a full init-based Linux system can be launched in less than 250 ms on modern hardware. A full Ubuntu installation with systemd can be completely booted in around 2.5 seconds. The former is in the same order of magnitude as the time it might take the Python interpreter to just load the code of a large Python application.&lt;/p&gt;

&lt;p&gt;So if performance cannot be said to be better with Docker, why choose Docker? One reason is that you can strip down a Docker image much more easily than an OS. This is critical for us due to security requirements. While Python requires a lot of OS features, the majority of the OS is still bloat, and every piece of bloat is a potential attack vector (each of these unused components might have one or more CVEs that we need to patch, even though we don't even use that software). Another reason is that the build process of Docker is much simpler to manage. There are tools such as &lt;a href="https://www.packer.io/" rel="noopener noreferrer"&gt;Packer&lt;/a&gt; that allow similar processes for VMs, but these are not as standardized as the &lt;a href="https://opencontainers.org/" rel="noopener noreferrer"&gt;Open Container Initiative&lt;/a&gt; (OCI, which Docker adheres to).&lt;/p&gt;

&lt;p&gt;Another very important point is ease of development. Docker and other OCI-compliant products give us the ability to build, test, and run our build artefacts (in this case Docker images) everywhere. This makes it very simple and fast for our developers to test the build and perform a test run of an image locally on their development machine. This would not quite be the case with VMs or raw artefacts (JARs, source code archives, ...). Moreover, the OCI ecosystem does not only include specifications on how to interact with images, but also on how to set up and configure critical elements such as persistence and networking. These aspects are made very simple with Docker, and would be quite a pain to manage securely with most other technologies.&lt;/p&gt;

&lt;p&gt;Finally, the main reason for us is the choice of runtime. We have very decent container runtimes (&lt;a href="https://www.rancher.com/products/secure-kubernetes-distribution" rel="noopener noreferrer"&gt;RKE&lt;/a&gt;, &lt;a href="https://developers.redhat.com/products/openshift/overview" rel="noopener noreferrer"&gt;RHOS&lt;/a&gt;, &lt;a href="https://k3s.io/" rel="noopener noreferrer"&gt;K3s&lt;/a&gt;) available to deploy applications. We are very familiar with them, and they offer us a lot of functionality. All of these primarily support containers.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Tiny Bit of Background
&lt;/h2&gt;

&lt;p&gt;Lastly, before we get into the dirty details, a tiny bit of background on what we are building. The application we are building here is a sort of facade reverse proxy. It offers a standardized API to clients, which can connect and perform requests. Based on the content of the request, the component triggers a routing algorithm that defines where the request needs to be routed. This routing algorithm might require several API calls to different backend systems to figure out where the call should go. Once done, the component relays the call to a backend and forwards the response to the client. The client is never aware that it is talking to more than one component, and only needs to authenticate to that single system. Imagine an API gateway, but where the routing is extremely complex and requires integration with systems such as Kubernetes, a cloud portal, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Details
&lt;/h2&gt;

&lt;p&gt;Here is an overview of our &lt;code&gt;Dockerfile&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;internal.registry/base/ca-bundle:20220405&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;cert-bundle&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;internal.registry/base/python:3.9.2-slim&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;builder&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=cert-bundle /certs/ /usr/local/share/ca-certificates/&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;update-ca-certificates

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--upgrade&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--ignore-installed&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--trusted-host&lt;/span&gt; pypi.python.org &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--trusted-host&lt;/span&gt; pypi.org &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--trusted-host&lt;/span&gt; files.pythonhosted.org &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nv"&gt;pipenv&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;2024.2.0

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PIPENV_VENV_IN_PROJECT=1&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; Pipfile Pipfile&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; Pipfile.lock Pipfile.lock&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;pipenv &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--deploy&lt;/span&gt;

&lt;span class="c"&gt;### Tester image&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;builder&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;test&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;pipenv &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--dev&lt;/span&gt; &lt;span class="nt"&gt;--deploy&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./pyproject.toml pyproject.toml&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./assets/ ./assets&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./features/ ./features&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./tests/ ./tests&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./src/ ./&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target&lt;span class="o"&gt;=&lt;/span&gt;./.mypy_cache/ &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target&lt;span class="o"&gt;=&lt;/span&gt;./.pytest_cache/ &lt;span class="se"&gt;\
&lt;/span&gt;  pipenv run mypy &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; pipenv run black &lt;span class="nt"&gt;--check&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; pipenv run bandit &lt;span class="nt"&gt;-ll&lt;/span&gt; ./&lt;span class="k"&gt;*&lt;/span&gt;.py &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nv"&gt;PYTHONPATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;./ pipenv run pytest


&lt;span class="c"&gt;### Runner image&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; internal.registry/base/distroless-python:3.9.2&lt;/span&gt;
&lt;span class="k"&gt;LABEL&lt;/span&gt;&lt;span class="s"&gt; maintainer="Redacted &amp;lt;redacted-email&amp;gt;"&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app/&lt;/span&gt;
&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; 1000&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder --chown=1000 /app/.venv/lib/python3.9/site-packages ./my-app&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app/my-app&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --chown=1000 ./src/ ./&lt;/span&gt;

&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["python3"]&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["./main.py"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We will go through it line by line and figure out why we did what we did, and why we did not choose a different approach. Let's start!&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;internal.registry/base/ca-bundle:20220405&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;cert-bundle&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In this line we reference a container image for later use and give it the name alias &lt;code&gt;cert-bundle&lt;/code&gt;. This container image contains only data: our production network proxy certificates and all internal certificate authorities. We need these CAs as we will connect over TLS to backend components that have internal certificates. We also need the production network proxy certificates as we will pull dependencies straight from the internet, and all that traffic is routed over a gateway proxy. Why distribute these certificates over a Docker image instead of a compressed TAR? The main reason is that we want a unified way of building artefacts and managing CI/CD pipelines. By creating and managing the certificates via Docker, we can use our entire Docker setup (such as UCD/Jenkins/Tekton pipelines for building, a registry for distribution, quality gates for security, etc.) and do not need a separate system to manage the certificates. Note that we refer to the exact state of the certificate bundle (&lt;code&gt;20220405&lt;/code&gt;), i.e. the state of the certificates as of 5 April 2022. This is very important to make the build reproducible. If we did not pin the version of the certificates, the image might build today but fail tomorrow, once the certificates change (even though we did not change the code at all). You will note that we pin every single version in the entire build process.&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;internal.registry/base/python:3.9.2-slim&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;builder&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In this line, we reference the base image we will start building from. This is the official Python image for Python version 3.9.2. We use the slim version because we don't need much more than the standard Python installation. We pull this from our own registry, as all Docker images are scanned beforehand to reduce the risk of supply chain attacks. Also here, the version is pinned. We provide this build step the &lt;code&gt;builder&lt;/code&gt; alias. In essence this means that starting from this line we define an image stage that will contain the build process of our application. For Python, this mostly includes downloading dependencies (both software and system level), and injecting the source code, as there will be no compile step.&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=cert-bundle /certs/ /usr/local/share/ca-certificates/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This copies our certificates into our build image. We do this by referencing the build step &lt;code&gt;cert-bundle&lt;/code&gt; (see the first line of the &lt;code&gt;Dockerfile&lt;/code&gt; again) in the &lt;code&gt;--from&lt;/code&gt; argument of the &lt;code&gt;COPY&lt;/code&gt; command. Note that we could have referenced the image directly in the &lt;code&gt;--from&lt;/code&gt; argument. We choose to use build stage aliases for visibility, and to reduce duplication if the certificates need to be copied into different stages. Note that this copies only the raw certificates; an OS-specific bundle still needs to be generated.&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;update-ca-certificates
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Here we do exactly that: we generate a certificate bundle for the underlying OS of our builder image (&lt;a href="https://www.debian.org/" rel="noopener noreferrer"&gt;Debian&lt;/a&gt;). This allows subsequent build steps to use the certificate bundle to validate host certificates on TLS connections.&lt;/p&gt;
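&lt;p&gt;As a quick sanity check (not part of the &lt;code&gt;Dockerfile&lt;/code&gt;), Python's &lt;code&gt;ssl&lt;/code&gt; module can show where a given interpreter looks for CA certificates; on Debian-based images the system bundle maintained by &lt;code&gt;update-ca-certificates&lt;/code&gt; lives at &lt;code&gt;/etc/ssl/certs/ca-certificates.crt&lt;/code&gt;. A minimal sketch:&lt;/p&gt;

```python
import ssl

# Ask this Python build where it looks for CA certificates by default.
paths = ssl.get_default_verify_paths()
print(paths.cafile or paths.openssl_cafile)

# Clients can also be pointed at a specific bundle explicitly,
# e.g. the one generated in the builder stage:
ctx = ssl.create_default_context()
# ctx.load_verify_locations("/etc/ssl/certs/ca-certificates.crt")
```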




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We then set a working directory, the idea being to have a base directory to operate from. This can be nearly any directory, and it will be created if it does not exist. We choose &lt;code&gt;/app/&lt;/code&gt; by convention. Moreover, note that we tend to reference directories with a trailing &lt;code&gt;/&lt;/code&gt; to make it explicit that we are referencing directories and not files. We use this convention throughout the configuration/code.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--upgrade&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--ignore-installed&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--trusted-host&lt;/span&gt; pypi.python.org &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--trusted-host&lt;/span&gt; pypi.org &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--trusted-host&lt;/span&gt; files.pythonhosted.org &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nv"&gt;pipenv&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;2024.2.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We use an environment virtualization tool for Python called &lt;a href="https://pipenv.pypa.io/en/latest/index.html" rel="noopener noreferrer"&gt;&lt;code&gt;pipenv&lt;/code&gt;&lt;/a&gt;. It allows us to have many different versions of the same dependency installed locally without them conflicting. This is very important when you are developing many applications locally at the same time. Running this line installs version &lt;code&gt;2024.2.0&lt;/code&gt; of &lt;code&gt;pipenv&lt;/code&gt; (pinned). Other than Python itself, this is the only tool required for our Python development environment. If we were using a different language, &lt;code&gt;pipenv&lt;/code&gt; would be substituted with that language's dependency management tool (such as Maven for Java). Note that we only install &lt;code&gt;pipenv&lt;/code&gt; itself; we do not install the dependencies. The flags provided ensure a fully clean install of &lt;code&gt;pipenv&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is an example where we reach out to the internet and thus needed the network proxy certificates.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A very good question here might be: "why use &lt;code&gt;pipenv&lt;/code&gt; at all, considering it is typically used for environment virtualization, which is already covered by Docker itself?". There are two aspects here. The first is that it locks dependencies by their hashes, generating and maintaining the lock file for us, which plain &lt;code&gt;pip&lt;/code&gt; (the standard Python package manager) does not do (pip can verify hashes, but will not produce a lock file). The second is that we want to keep the build process within Docker as close as possible to the build process outside of it. While we do not build artefacts outside of Docker per se, the IDEs of our developers need to fall back on these technologies to support features such as library-aware code completion, type-checking, test integration, debugging, etc. This could also be achieved by connecting the IDE to an instance running directly in Docker. That, however, is relatively complex and requires the setup to support remote debugging. In theory these are not real problems as long as the dev environments are uniform, but we allow each developer to work with the tools they prefer. It then becomes very difficult to have a stable setup that works for everybody, especially considering that some of our developers do not want to (or do not know how to) configure their environments to that level (client-server debugger setups, network and volume management between the IDE and Docker, ...).&lt;/p&gt;
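&lt;p&gt;To illustrate what the hash locking actually buys us, it is conceptually nothing more than comparing a digest of the downloaded artifact against a recorded value. A toy sketch (not &lt;code&gt;pipenv&lt;/code&gt;'s actual implementation):&lt;/p&gt;

```python
import hashlib

def artifact_sha256(data: bytes) -> str:
    # The digest an installer would compare against the lock file entry.
    return hashlib.sha256(data).hexdigest()

# At lock time: record the digest of the artifact as published.
locked = artifact_sha256(b"wheel contents as published")

# At install time: re-download, re-hash, and refuse on mismatch.
download = b"wheel contents as published"
if artifact_sha256(download) != locked:
    raise RuntimeError("hash mismatch: possible tampering, aborting install")
```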




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PIPENV_VENV_IN_PROJECT=1&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Here we set environment variables for &lt;code&gt;pipenv&lt;/code&gt;. The first makes the dependencies install directly in the project repository, not centrally. This ensures that we do not accidentally copy a system Python dependency that is installed by default with the base image. The second configures the certificate bundle we generated at the beginning to be used by &lt;code&gt;pipenv&lt;/code&gt;; it does not use the system-configured bundle by default, so it needs to be set manually here.&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; Pipfile Pipfile&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; Pipfile.lock Pipfile.lock&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now the interesting stuff. Here we copy the dependency files into the image. The first file contains the list of dependencies we use for our project. The second contains the hashes those dependencies should have, including indirect dependencies (dependencies of dependencies), in order to ensure that we always get exactly the same dependency code for every install. The first looks as follows:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[[source]]&lt;/span&gt;
&lt;span class="py"&gt;url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"https://pypi.org/simple"&lt;/span&gt;
&lt;span class="py"&gt;verify_ssl&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="py"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"pypi"&lt;/span&gt;

&lt;span class="nn"&gt;[packages]&lt;/span&gt;
&lt;span class="py"&gt;requests&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="py"&gt;"=&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;2.28&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="py"&gt;"
pydantic = "&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;1.10&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="py"&gt;"
# more dependencies ...

[dev-packages]
black = "&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;23.1&lt;/span&gt;&lt;span class="py"&gt;"
bandit = "&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;1.7&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="py"&gt;"
pytest = "&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;7.2&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="py"&gt;"
pytest-mock = "&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;3.9&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="py"&gt;"
pytest-bdd = "&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;6.1&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="py"&gt;"
mypy = "&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;1.1&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="py"&gt;"
types-Pygments = "&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;2.14&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;0.6&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="c"&gt;# more dev dependencies ...&lt;/span&gt;

&lt;span class="nn"&gt;[requires]&lt;/span&gt;
&lt;span class="py"&gt;python_version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"3.9"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Note that we split dependencies into normal packages we require for our application, and packages only required for testing and our quality gates (&lt;code&gt;[dev-packages]&lt;/code&gt;). This is important later on, as we do not wish to have packages only required for testing in our production Docker image.&lt;/p&gt;

&lt;p&gt;I will not show you an example of the lock file, as it contains mostly checksum hashes. Simply trust me that it contains the exact checksum every package (such as the dependencies of &lt;code&gt;requests&lt;/code&gt;) has to have in order to be installed. The reason this is required in the first place is that the dependencies of &lt;code&gt;requests&lt;/code&gt; are likely not pinned to an exact version and might thus change between installations unless locked via our &lt;code&gt;Pipfile.lock&lt;/code&gt;. This would be undesirable, as it would make our builds non-reproducible. The lock file itself is generated by our developers in two scenarios. The first is when a library is added for some new feature. In that case the new library is added to the &lt;code&gt;Pipfile&lt;/code&gt;, and an installation is triggered outside of Docker. This installs the new library and potentially updates already installed ones (in case of conflicts), so new hashes are added to the lock file. The second is a lifecycle update of the existing libraries or of our Python version. In that case we update the pinned version in the &lt;code&gt;Pipfile&lt;/code&gt; and trigger an installation outside of Docker. Again, &lt;code&gt;pipenv&lt;/code&gt; then updates the direct dependencies, and potentially transitive ones, and updates their hashes in the lock file.&lt;/p&gt;
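&lt;p&gt;The two scenarios above map to ordinary &lt;code&gt;pipenv&lt;/code&gt; invocations on the developer machine (the package name below is illustrative):&lt;/p&gt;

```shell
# Scenario 1: a new feature needs a library; add it and regenerate hashes.
pipenv install requests==2.28.2     # updates Pipfile and Pipfile.lock

# Scenario 2: lifecycle update; bump the pin in the Pipfile, then:
pipenv update                       # re-resolves and re-locks all hashes
```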




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;pipenv &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--deploy&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Here we install the dependencies for our application. The &lt;code&gt;--deploy&lt;/code&gt; flag means that we want to install the dependencies based on the lock file. Moreover, we do not install the dev packages yet, only the ones needed for the production code.&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;### Tester image&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;builder&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Here we generate a new Docker build stage. The &lt;code&gt;builder&lt;/code&gt; stage we created contains the required certificates and the production dependencies, and nothing more. We now want to test our code and validate quality gates. We do not want to perform this in the &lt;code&gt;builder&lt;/code&gt; stage, because it would pollute our production dependencies. Moreover, using a different stage allows us to trigger builds more granularly with &lt;a href="https://docs.docker.com/build/buildkit/" rel="noopener noreferrer"&gt;BuildKit&lt;/a&gt;. For instance, with &lt;code&gt;--target=test&lt;/code&gt; we can build the image only up to the &lt;code&gt;test&lt;/code&gt; stage and skip any later stages (such as the runtime image in our case). This is very useful in pipelines, where we may want to run the tests on every commit but have no interest in building a real artefact unless the commit is tagged.&lt;/p&gt;

&lt;p&gt;With this line we essentially say "start a new stage called &lt;code&gt;test&lt;/code&gt; from the latest state of &lt;code&gt;builder&lt;/code&gt;". We also add a comment above to make it more visible that we are starting a new stage in the &lt;code&gt;Dockerfile&lt;/code&gt;. Stage comments are typically the only comments we have in the &lt;code&gt;Dockerfile&lt;/code&gt;s.&lt;/p&gt;
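&lt;p&gt;For illustration, a pipeline could then invoke the build as follows (the tag is made up):&lt;/p&gt;

```shell
# On every commit: build only up to the test stage (quality gates + tests).
DOCKER_BUILDKIT=1 docker build --target=test .

# On tagged commits: run the full build, producing the runtime image.
DOCKER_BUILDKIT=1 docker build -t my-app:1.0.0 .
```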




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;pipenv &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--dev&lt;/span&gt; &lt;span class="nt"&gt;--deploy&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In this line we install the development dependencies, including tools for quality checks (&lt;code&gt;mypy&lt;/code&gt;, &lt;code&gt;bandit&lt;/code&gt;, &lt;code&gt;black&lt;/code&gt;, see below for details) and for testing. Again, we use the &lt;code&gt;--deploy&lt;/code&gt; flag to ensure we always install the same versions, keeping the build fully reproducible.&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./pyproject.toml pyproject.toml&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./assets/ ./assets&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./features/ ./features&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./tests/ ./tests&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./src/ ./&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Here is the first time we copy actual content, other than the list of dependencies, into our image. This means that up until now, all layers in the build process can be fully served from cache when we change code. Thinking about this is &lt;em&gt;essential&lt;/em&gt; if you want an efficient build process. Even here, we copy files in order of increasing likelihood of change. The first file we copy configures our tooling and quality gates. This is unlikely to change unless we introduce a new tool or reconfigure an existing one. An example of the file can be seen below.&lt;/p&gt;

&lt;p&gt;The second line copies assets. These are used for testing, such as test configurations for configuration validation etc. These are also quite unlikely to change unless we write new tests of our configuration classes.&lt;/p&gt;

&lt;p&gt;The third line copies in our &lt;a href="https://cucumber.io/docs/installation/python/" rel="noopener noreferrer"&gt;Cucumber&lt;/a&gt; files for BDD testing. These change only when we either define new behavioral tests or add features.&lt;/p&gt;

&lt;p&gt;The fourth line copies our test code, this is quite likely to change, as it contains all our unit tests, and the testing framework for behavioral tests.&lt;/p&gt;

&lt;p&gt;Finally the last line copies in our actual code. This, along with the unit tests, is the code that is most likely to change, and thus comes last. This way on a code change, all lines up to this one (assuming we did not add/change tests) can be used from cache.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[tool.black]&lt;/span&gt;
&lt;span class="py"&gt;line-length&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;

&lt;span class="nn"&gt;[tool.pytest.ini_options]&lt;/span&gt;
&lt;span class="py"&gt;pythonpath&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="s"&gt;"src"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="s"&gt;"tests"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="py"&gt;bdd_features_base_dir&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"features/"&lt;/span&gt;

&lt;span class="nn"&gt;[tool.mypy]&lt;/span&gt;
&lt;span class="py"&gt;exclude&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="s"&gt;'^tests/.*\.py$'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="py"&gt;ignore_missing_imports&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="py"&gt;warn_unused_configs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="py"&gt;warn_redundant_casts&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="c"&gt;# more settings ...&lt;/span&gt;

&lt;span class="nn"&gt;[[tool.mypy.overrides]]&lt;/span&gt;
&lt;span class="py"&gt;module&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="s"&gt;"kubernetes"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="s"&gt;"parse_types"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="c"&gt;# skip libraries without stubs&lt;/span&gt;
&lt;span class="py"&gt;ignore_missing_imports&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target&lt;span class="o"&gt;=&lt;/span&gt;./.mypy_cache/ &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target&lt;span class="o"&gt;=&lt;/span&gt;./.pytest_cache/ &lt;span class="se"&gt;\
&lt;/span&gt;  pipenv run mypy &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; pipenv run black &lt;span class="nt"&gt;--check&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; pipenv run bandit &lt;span class="nt"&gt;-ll&lt;/span&gt; ./&lt;span class="k"&gt;*&lt;/span&gt;.py &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nv"&gt;PYTHONPATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;./ pipenv run pytest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This line aggregates our quality gates and testing. For quality gates we have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.mypy-lang.org/" rel="noopener noreferrer"&gt;mypy&lt;/a&gt;: checks typing information where provided. We do not perform strict typing so that type information is required everywhere, but we validate that the provided typing is correct.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://black.readthedocs.io/en/stable/" rel="noopener noreferrer"&gt;black&lt;/a&gt;: checks formatting of the code to ensure it is according to your guidelines.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://bandit.readthedocs.io/en/latest/" rel="noopener noreferrer"&gt;bandit&lt;/a&gt;: performs basic security checks. This is a non-blocking check, meaning that the build will only fail if issues of severity &lt;code&gt;MEDIUM&lt;/code&gt; or higher a found. &lt;code&gt;LOW&lt;/code&gt; severity check fails are ignored.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally we run our tests (with &lt;a href="https://docs.pytest.org/en/7.2.x/" rel="noopener noreferrer"&gt;pytest&lt;/a&gt;). We run the tests last, as they are the most time-consuming of the tasks and do not need to be executed if the code fails to adhere to our standards. Note that you could add any other gates here, such as a code coverage baseline that must be met, various code analysis checks, or security scans. We perform only one further security check, against dubious code and supply chain attacks. That check, however, runs on the final Docker image and is thus executed by the pipeline itself, outside of the Docker build process.&lt;/p&gt;

&lt;p&gt;Note that all commands are executed as one &lt;code&gt;RUN&lt;/code&gt; statement. This is best practice, as none of these commands can be cached individually: either all have to be executed again (if the layer they build upon changed), or none do. Putting them into the same &lt;code&gt;RUN&lt;/code&gt; statement generates a single new layer for all four commands, which reduces the layer count and build overhead for Docker.&lt;/p&gt;
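&lt;p&gt;Schematically, the difference looks like this (with placeholder commands):&lt;/p&gt;

```dockerfile
# Preferred: one layer, one cache unit for the whole quality gate.
RUN check-a && check-b && check-c

# Avoid: three layers and three cache entries, more build overhead,
# yet still all invalidated together whenever the code layer changes.
RUN check-a
RUN check-b
RUN check-c
```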

&lt;p&gt;Finally, note the &lt;code&gt;--mount&lt;/code&gt; options passed to &lt;code&gt;RUN&lt;/code&gt; (stabilized in the Dockerfile 1.2 syntax, requiring BuildKit). These allow content to be cached between Docker builds. Here we mount two caches, one for &lt;code&gt;mypy&lt;/code&gt; and one for &lt;code&gt;pytest&lt;/code&gt;. They ensure that if a subsequent Docker build is triggered for code that does not affect some files, the type checks for those files are served from the cache rather than recomputed; with the right &lt;code&gt;pytest&lt;/code&gt; plugins this can even work on a per-test basis, so tests are not re-run unless the code they cover changed. Such caches can &lt;em&gt;massively&lt;/em&gt; increase the speed of your pipelines, especially when your project grows and the test suites take more time to run through.&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;### Runner image&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; internal.registry/base/distroless-python:3.9.2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This defines the runner image. We are done with testing and, as all checks have passed, want to build the productive artefact. In a compiled setup, this would mean we would now have a release compilation stage (before building the runtime image). This is done after testing because the release binary/JAR is compiled with optimizations, which can take quite a long time and is unnecessary if the tests fail anyway. Thus, in a compiled language like Java or Golang, we would now continue from the builder again, copy the code back into the layer, and compile. One should be careful here: most languages support incremental compilation to reduce compilation times. When this is supported, one needs to mount a build cache, or the incremental state from previous builds is lost every time the code changes, as the entire compilation layer is discarded from the cache. This is done the same way as in the previous block, with &lt;code&gt;--mount&lt;/code&gt; parameters.&lt;/p&gt;
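&lt;p&gt;As a hypothetical sketch for a compiled language, a Go release stage could preserve its incremental build cache across rebuilds like this (stage name and paths follow Go's defaults, not anything in our Python setup):&lt;/p&gt;

```dockerfile
### Release image (hypothetical Go variant)
FROM builder AS release
COPY ./src/ ./
# Keep Go's incremental compilation cache between Docker builds, so a
# code change does not force a full recompilation of every package.
RUN --mount=type=cache,target=/root/.cache/go-build \
    go build -o /out/app ./...
```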

&lt;p&gt;Once the compilation is completed and we have our final artefact (binary or JAR), we want to copy it into the runtime image. The idea is again to restrict bloat to reduce our attack surface. For instance, in a Java setup we only need a working JRE to run our application; we no longer need Maven, the Java compiler, etc. Thus, after the build process, we use a new stage for the runtime image. This is what we did for Python here, since we have no compilation step. We use a different image than our initial &lt;code&gt;internal.registry/base/python:3.9.2-slim&lt;/code&gt; image, as we no longer need &lt;code&gt;pip&lt;/code&gt; (the Python package manager) and other bloat. Instead we use a distroless image, which is essentially a stripped-down Debian image containing the bare minimum to run Python code, but nothing to manage it. Again, we use our own copy of the distroless image from our scanned registry.&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;LABEL&lt;/span&gt;&lt;span class="s"&gt; maintainer="Redacted &amp;lt;redacted-email&amp;gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This line adds metadata to the image. This is not strictly necessary for a good image, but it is useful when images are shared across large organisations. This is the official maintainer label we use, where we reference our team, such that anyone who downloads and inspects the image can see who built it and how to get in contact with us in case of issues.&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Same as before, we copy certificates and configure Python to use our bundle. Note that this time we copy the bundle generated in the builder directly, and not from the certificate image, as we need a bundle and cannot create one in this image (&lt;code&gt;update-ca-certificates&lt;/code&gt; is not contained in the distroless image). We need to copy this explicitly since we started from a fresh image; the &lt;code&gt;test&lt;/code&gt; stage had the bundle implicitly configured from the &lt;code&gt;builder&lt;/code&gt; stage upon which it was set up.&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app/&lt;/span&gt;
&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; 1000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We set a working directory again; this is also necessary since we started from a fresh image. We also set a non-root user, as we do not want to run our code as root for security reasons (to reduce the impact of a remote code execution (RCE) vulnerability). Note that any statement after the &lt;code&gt;USER&lt;/code&gt; statement is executed in the context of that user. For instance, from now on we would not be allowed to run &lt;code&gt;update-ca-certificates&lt;/code&gt; (if it were present in the image) in a &lt;code&gt;RUN&lt;/code&gt; statement, as that requires root privileges.&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder --chown=1000 /app/.venv/lib/python3.9/site-packages ./my-app&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app/my-app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Here we copy the non-dev packages from the &lt;code&gt;builder&lt;/code&gt; stage into our productive image. Note that we use a path within the project root (&lt;code&gt;/app/&lt;/code&gt;), since we set &lt;code&gt;pipenv&lt;/code&gt; to install the virtual environment directly in the project (the &lt;code&gt;PIPENV_VENV_IN_PROJECT&lt;/code&gt; variable). We copy the site-packages (the dependencies) directly into a subfolder in which our application will live. This ensures they are treated as if we wrote them ourselves, as individual Python modules in our code; they essentially become indistinguishable from our own code, which keeps the way our module names are resolved consistent. Note that we need the &lt;code&gt;--chown&lt;/code&gt; flag, as the dependencies were installed by the root user in the &lt;code&gt;builder&lt;/code&gt; image, and they need to be readable by user 1000, which will run the application. The &lt;code&gt;--chown&lt;/code&gt; flag changes the files' owner (and group) to the provided argument.&lt;/p&gt;

&lt;p&gt;The second line simply sets the new working directory to be the new project directory into which we copied the code from the dependencies.&lt;/p&gt;
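&lt;p&gt;The "indistinguishable from our own code" point boils down to Python's import resolution: anything reachable on &lt;code&gt;sys.path&lt;/code&gt; imports the same way, whether it is a vendored dependency or one of our own modules. A minimal sketch (directory and module names are made up):&lt;/p&gt;

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Simulate the image layout: dependencies and our code side by side.
app_dir = Path(tempfile.mkdtemp())
(app_dir / "some_dependency.py").write_text("VALUE = 'from a vendored package'\n")
(app_dir / "our_module.py").write_text("VALUE = 'from our own code'\n")

# The runtime image achieves this implicitly via WORKDIR.
sys.path.insert(0, str(app_dir))
dep = importlib.import_module("some_dependency")
own = importlib.import_module("our_module")
print(dep.VALUE, "|", own.VALUE)  # both resolve through the same mechanism
```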




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --chown=1000 ./src/ ./&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Here we copy the source code into the production image. We do this after copying the dependencies, so that the dependency layer can be cached again. Moreover, we copy only the source code: no tests, no assets, no Cucumber features. None of these are needed to run our application. Finally, note that we copy it not from the &lt;code&gt;test&lt;/code&gt; stage, but again from the outside build context. This is because we mock a lot during testing, changing some code behavior dynamically. Copying it in from the outside context ensures we copy exactly the code that is in our Git repository, and not something that was accidentally modified during testing.&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["python3"]&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["./main.py"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Finally we set an entrypoint and a command. The entrypoint defines what is always executed on a Docker run (unless explicitly overwritten), and the command provides the default arguments unless overwritten via the Docker run arguments. We always use the list (exec) form instead of plain strings to ensure the process is executed directly via a system call rather than through a shell. This is important for proper signal handling (when you want to terminate containers), and because there simply is no shell in the distroless image we are building.&lt;/p&gt;
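&lt;p&gt;The two forms side by side:&lt;/p&gt;

```dockerfile
# Exec form: python3 runs as PID 1 and receives SIGTERM directly.
ENTRYPOINT ["python3"]
CMD ["./main.py"]

# Shell form (not usable here): wraps the command in "/bin/sh -c", so
# signals land on the shell, and distroless ships no shell at all.
# ENTRYPOINT python3 ./main.py
```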
&lt;h2&gt;
  
  
  That's it
&lt;/h2&gt;

&lt;p&gt;Holy moly... There is a lot that goes into building a simple Docker image. And that is considering we did not even compile anything, which would require a decent amount of extra work, and that all our tooling can be managed directly via &lt;code&gt;pipenv&lt;/code&gt; and does not need to be installed separately via &lt;code&gt;curl&lt;/code&gt; or some OS package manager.&lt;/p&gt;

&lt;p&gt;So is it worth it, to put so much thought into how a simple Docker image gets built? I would argue yes. I will not start an ideological discussion on the benefits of smaller images, security best practices, or having tests run directly in the Docker build. If you want such a discussion, go to Reddit or YouTube; you will find plenty of people fighting about these topics like their lives depend on it. All I will say is this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I can run &lt;code&gt;docker build&lt;/code&gt; ... after each save of a file, since the caching is optimized to the point where a full build after a code change takes about 1-2 seconds. Being able to run this so often gives me confidence that what I push will actually pass in the pipeline.&lt;/li&gt;
&lt;li&gt;Proper caching means I avoid waiting 2-5 minutes every time I want to build something. Since 2-5 minutes is typically too short for a context switch to something else, that would mostly be time spent sitting around thinking about how much it sucks to wait on stuff. So it has considerably improved not only my productivity, but also my mood.&lt;/li&gt;
&lt;li&gt;Docker avoids some "it works on my machine" issues. With proper version pinning and fully reproducible builds, it very nearly eradicates the problem. The only time something like this can still happen is when running on different Docker versions.&lt;/li&gt;
&lt;li&gt;We are all sometimes tempted to "fix" tests by skipping them to "save time" when something needs to go to production quickly. Since testing is fully baked into the build process, changing flags on Jenkins/Tekton/whatever will not allow you to skip any testing or quality checks on the code. The only way would be to comment out the test code or update the &lt;code&gt;Dockerfile&lt;/code&gt;, which would not pass a PR review. This gives me immense peace of mind.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since the build process and testing are (nearly) fully defined in the &lt;code&gt;Dockerfile&lt;/code&gt;, which lives in the Git repository, we almost never need to change pipelines to add/change/remove anything, as all of this can be done directly in the repository of the corresponding image. This also has downsides, as it creates duplication. I would argue that it is beneficial though: legacy applications might not be able to switch to newer tooling as fast as greenfield projects that want to leverage it, and having this "configured" in each repository allows each to move at its own pace. Strict guidelines (such as "we don't want to use tool X anymore") can still be enforced at the pipeline level via container scanning tools (which you will need either way).&lt;/p&gt;

&lt;p&gt;What is the major downside of this approach? I would argue there is one large one: many people might not understand Docker well enough to figure out how the build process works, or might not have the time to invest in learning how to do it correctly. This means some people might not be able to change the build process by themselves and might need help. I think this would also be the case without a proper Docker setup, but the problem is perhaps amplified by a slightly more complex Docker build setup.&lt;/p&gt;

&lt;p&gt;I hope this has given you some food for thought. Feel free to comment any questions or remarks below, or to reach out! Do you also take your Docker builds this far?&lt;/p&gt;


&lt;div class="ltag__user ltag__user__id__2319323"&gt;
    &lt;a href="/f4z3r" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2319323%2Ffcd09f88-ecfa-41b5-a18d-5ea0f067d475.jpeg" alt="f4z3r image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/f4z3r"&gt;Jakob Beckmann&lt;/a&gt;
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/f4z3r"&gt;I am passionate about cloud native software and infrastructure, security tooling, and various programming languages. I am currently a Principal Architect at ipt.ch, working on a variety of mandates.&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;




&lt;div class="ltag__user ltag__user__id__9906"&gt;
  &lt;a href="/ipt" class="ltag__user__link profile-image-link"&gt;
    &lt;div class="ltag__user__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F9906%2F5d6310d9-6d88-4bf1-a012-dd8f8b7f8ac7.png" alt="ipt image"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
      &lt;a href="/ipt" class="ltag__user__link"&gt;Innovation Process Technology AG (ipt)&lt;/a&gt;
    &lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a href="/ipt" class="ltag__user__link"&gt;
        We are a boutique IT consultancy based in Switzerland focused on building individual solutions using leading edge technology. For more information visit our website: https://ipt.ch
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;


</description>
      <category>docker</category>
      <category>sre</category>
      <category>python</category>
      <category>security</category>
    </item>
  </channel>
</rss>
