<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Christian Dennig</title>
    <description>The latest articles on Forem by Christian Dennig (@cdennig).</description>
    <link>https://forem.com/cdennig</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F302196%2F3437c55d-26ee-4241-81ca-fc50a6605571.jpg</url>
      <title>Forem: Christian Dennig</title>
      <link>https://forem.com/cdennig</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/cdennig"/>
    <language>en</language>
    <item>
      <title>ASP.NET Custom Metrics with OpenTelemetry Collector &amp; Prometheus/Grafana</title>
      <dc:creator>Christian Dennig</dc:creator>
      <pubDate>Mon, 22 Aug 2022 15:01:09 +0000</pubDate>
      <link>https://forem.com/cdennig/aspnet-custom-metrics-with-opentelemetry-collector-prometheusgrafana-1hp</link>
      <guid>https://forem.com/cdennig/aspnet-custom-metrics-with-opentelemetry-collector-prometheusgrafana-1hp</guid>
      <description>&lt;p&gt;Every now and then, I am asked which tracing, logging or monitoring solution you should use in a modern application, as the possibilities are getting more and more from month to month – at least, you may get that feeling. To be as flexible as possible and to rely on open standards, a closer look at OpenTelemetry is recommended. It becomes more and more popular, because it offers a vendor-agnostic solution to work with telemetry data in your services and send them to the backend(s) of your choice (Prometheus, Jaeger, etc.). Let’s have a look at how you can use OpenTelemetry custom metrics in an ASP.NET service in combination with the probably most popular monitoring stack in the cloud native space: Prometheus / Grafana.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;You can find the demo project on &lt;a href="https://github.com/cdennig/otel-demo" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. It uses a local Kubernetes cluster (&lt;a href="https://kind.sigs.k8s.io/" rel="noopener noreferrer"&gt;kind&lt;/a&gt;) to set up the environment and deploys a demo application that generates some sample metrics. Those metrics are sent to an OTEL collector which serves as a Prometheus metrics endpoint. In the end, the metrics are scraped by Prometheus and displayed in a Grafana dashboard/chart.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2022%2F08%2Fenv.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2022%2F08%2Fenv.png%3Fw%3D1024" alt="Demo Setup"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Demo Setup&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  OpenTelemetry – What is it and why should you care?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2022%2F08%2Fotel_logo.png%3Fw%3D280" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2022%2F08%2Fotel_logo.png%3Fw%3D280" alt="Open Telemetry Logo"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;OpenTelemetry&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;OpenTelemetry (OTEL) is an open-source CNCF project that aims to provide a vendor-agnostic solution for generating, collecting and handling telemetry data of your infrastructure and services. It is able to receive, process, and export traces, logs, and metrics to different backends like &lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt;, &lt;a href="https://www.jaegertracing.io/" rel="noopener noreferrer"&gt;Jaeger&lt;/a&gt; or commercial SaaS offerings, without your application having a dependency on those solutions. While OTEL itself doesn’t provide a backend or analytics capabilities, it serves as the “central monitoring component” and knows how to send the received data to different backends by using so-called “exporters”.&lt;/p&gt;

&lt;p&gt;So why should you even care? In today’s world of distributed systems and microservices architectures, where developers release software and services faster and more independently, observability becomes one of the most important properties of your environment. Visibility into your systems is crucial for the success of your application, as it helps you scale components, find bugs and misconfigurations, and so on.&lt;/p&gt;

&lt;p&gt;If you haven’t decided what monitoring or tracing solution you are going to use for your next application, have a look at OpenTelemetry. It gives you the freedom to try out different monitoring solutions or even replace your preferred one later in production.&lt;/p&gt;
&lt;h2&gt;
  
  
  OpenTelemetry Components
&lt;/h2&gt;

&lt;p&gt;OpenTelemetry currently consists of several components, like the &lt;a href="https://opentelemetry.io/docs/reference/specification/" rel="noopener noreferrer"&gt;cross-language specification&lt;/a&gt; (APIs/SDKs and the OpenTelemetry Protocol, OTLP) for instrumentation, and tools to receive, process/transform and export telemetry data. The SDKs are available in several popular languages like Java, C++, C#, Go etc. You can find the complete list of supported languages &lt;a href="https://opentelemetry.io/docs/instrumentation/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Additionally, there is a component called the “OpenTelemetry Collector” which is a vendor-agnostic proxy that receives telemetry data from different sources and can transform that data before sending it to the desired backend solution.&lt;/p&gt;

&lt;p&gt;Let’s have a closer look at the components of the collector…&lt;em&gt;receivers, processors&lt;/em&gt; and &lt;em&gt;exporters&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Receivers&lt;/em&gt; – A receiver is the component that is responsible for getting data into a collector. It can work in a push- or pull-based fashion and can, for example, receive data via the OTLP protocol or even scrape a Prometheus /metrics endpoint.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Processor&lt;/em&gt; – Processors let you batch, sample, transform and/or enrich the telemetry data received by the collector before handing it over to an &lt;em&gt;exporter&lt;/em&gt;. You can add or remove attributes (for example, “personally identifiable information”, PII) or filter data based on regular expressions. A processor is an optional component in a collector pipeline.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Exporter&lt;/em&gt; – An exporter is responsible for sending data to a backend solution like Prometheus, Azure Monitor, DataDog, Splunk etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the end, it comes down to configuring the collector service with receivers, (optionally) processors and exporters to form a fully functional collector pipeline – official documentation can be found &lt;a href="https://opentelemetry.io/docs/collector/configuration/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. The configuration for the demo here is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;protocols&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;grpc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;logging&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;loglevel&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;debug&lt;/span&gt;
  &lt;span class="na"&gt;prometheus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0:8889"&lt;/span&gt;
&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;logging&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;prometheus&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The configuration consists of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one OpenTelemetry Protocol (OTLP) receiver, enabled for &lt;em&gt;http&lt;/em&gt; and &lt;em&gt;gRPC&lt;/em&gt; communication&lt;/li&gt;
&lt;li&gt;one processor that batches the telemetry data with default values (e.g. a &lt;code&gt;timeout&lt;/code&gt; of 200ms)&lt;/li&gt;
&lt;li&gt;two exporters piping the data to the console (&lt;code&gt;logging&lt;/code&gt;) and exposing a Prometheus &lt;code&gt;/metrics&lt;/code&gt; endpoint on &lt;code&gt;0.0.0.0:8889&lt;/code&gt; (remote-write is also possible)&lt;/li&gt;
&lt;/ul&gt;
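
&lt;p&gt;For reference, the Prometheus side then only needs a scrape job pointing at that endpoint. A minimal sketch (the job name and the target address are placeholders; the in-cluster setup later in this article relies on Kubernetes service discovery instead):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;scrape_configs:
  - job_name: "otel-collector"
    scrape_interval: 15s
    static_configs:
      - targets: ["otel-collector.default.svc.cluster.local:8889"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;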

&lt;h2&gt;
  
  
  ASP.NET OpenTelemetry
&lt;/h2&gt;

&lt;p&gt;To demonstrate how to send custom metrics from an ASP.NET application to Prometheus via OpenTelemetry, we first need a service that is exposing those metrics. In this demo, we simply create two custom metrics called &lt;code&gt;otel.demo.metric.gauge1&lt;/code&gt; and &lt;code&gt;otel.demo.metric.gauge2&lt;/code&gt; that will be sent to the console (&lt;code&gt;AddConsoleExporter()&lt;/code&gt;) and via the OTLP protocol to a collector service (&lt;code&gt;AddOtlpExporter()&lt;/code&gt;) that we’ll introduce later on. The application uses the ASP.NET Minimal API and the code is more or less self-explanatory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.Diagnostics.Metrics&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;OpenTelemetry.Resources&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;OpenTelemetry.Metrics&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;builder&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;WebApplication&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateBuilder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Services&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddOpenTelemetryMetrics&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;metricsProvider&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;metricsProvider&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddConsoleExporter&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddOtlpExporter&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddMeter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"otel.demo.metric"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;SetResourceBuilder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ResourceBuilder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateDefault&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"otel.demo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;serviceVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"0.0.1"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;otel_metric&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Meter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"otel.demo.metric"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"0.0.1"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;randomNum&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Random&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="c1"&gt;// Create two metrics&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;obs_gauge1&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;otel_metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CreateObservableGauge&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;&lt;span class="s"&gt;"otel.demo.metric.gauge1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;randomNum&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;obs_gauge2&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;otel_metric&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CreateObservableGauge&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;double&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;&lt;span class="s"&gt;"otel.demo.metric.gauge2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;randomNum&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;NextDouble&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;MapGet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/otelmetric"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s"&gt;"Hello, Otel-Metric!"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Run&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So far, we have only dealt with custom metrics. Of course, ASP.NET also provides out-of-the-box metrics that you can utilize. Just enable the ASP.NET instrumentation feature by adding &lt;code&gt;AddAspNetCoreInstrumentation()&lt;/code&gt; when configuring the metrics provider – more on that &lt;a href="https://github.com/open-telemetry/opentelemetry-dotnet-instrumentation" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
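
&lt;p&gt;Combined with the custom meter from above, enabling those built-in metrics is a one-line addition. A minimal sketch (assuming the &lt;code&gt;OpenTelemetry.Instrumentation.AspNetCore&lt;/code&gt; NuGet package is referenced):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;builder.Services.AddOpenTelemetryMetrics(metricsProvider =&amp;gt;
{
    metricsProvider
        .AddAspNetCoreInstrumentation() // built-in ASP.NET Core metrics (e.g. request duration)
        .AddMeter("otel.demo.metric")   // keep the custom meter from above
        .AddConsoleExporter()
        .AddOtlpExporter();
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;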

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Time to connect the dots. First, let’s create a Kubernetes cluster using &lt;code&gt;kind&lt;/code&gt; where we can publish the demo service, spin up the OTEL collector instance and run a Prometheus/Grafana environment. If you want to follow along with the tutorial, clone the repo from &lt;a href="https://github.com/cdennig/otel-demo" rel="noopener noreferrer"&gt;https://github.com/cdennig/otel-demo&lt;/a&gt; and switch to the &lt;code&gt;otel-demo&lt;/code&gt; directory.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a local Kubernetes Cluster
&lt;/h3&gt;

&lt;p&gt;To create a &lt;code&gt;kind&lt;/code&gt; cluster that is able to host a Prometheus environment, execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kind create cluster &lt;span class="nt"&gt;--name&lt;/span&gt; demo-cluster &lt;span class="se"&gt;\ &lt;/span&gt;
        &lt;span class="nt"&gt;--config&lt;/span&gt; ./kind/kind-cluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The YAML configuration file (&lt;code&gt;./kind/kind-cluster.yaml&lt;/code&gt;) adjusts some settings of the Kubernetes control plane so that Prometheus is able to scrape the endpoints of the controller services. Next, create the OpenTelemetry Collector instance.&lt;/p&gt;
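
&lt;p&gt;The exact content of the file may differ in the repo, but conceptually such a configuration patches the &lt;code&gt;kubeadm&lt;/code&gt; settings so that the controller-manager and scheduler bind their metrics endpoints to &lt;code&gt;0.0.0.0&lt;/code&gt; instead of localhost only. An illustrative sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    controllerManager:
      extraArgs:
        bind-address: "0.0.0.0"
    scheduler:
      extraArgs:
        bind-address: "0.0.0.0"
- role: worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;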

&lt;h3&gt;
  
  
  OTEL Collector
&lt;/h3&gt;

&lt;p&gt;In the &lt;code&gt;manifests&lt;/code&gt; directory, you’ll find two Kubernetes manifests. One of them contains the configuration for the collector (&lt;code&gt;otel-collector.yaml&lt;/code&gt;). It includes the &lt;code&gt;ConfigMap&lt;/code&gt; holding the collector configuration (which will be mounted as a volume into the collector container), the deployment of the collector itself, and a service exposing the ports for data ingestion (&lt;code&gt;4318&lt;/code&gt; for &lt;code&gt;http&lt;/code&gt; and &lt;code&gt;4317&lt;/code&gt; for &lt;code&gt;gRPC&lt;/code&gt;) as well as the metrics endpoint (&lt;code&gt;8889&lt;/code&gt;) that will later be scraped by Prometheus. It looks as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;

&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-collector-config&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otel-collector-config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
    &lt;span class="s"&gt;receivers:&lt;/span&gt;
      &lt;span class="s"&gt;otlp:&lt;/span&gt;
        &lt;span class="s"&gt;protocols:&lt;/span&gt;
          &lt;span class="s"&gt;http:&lt;/span&gt;
          &lt;span class="s"&gt;grpc:&lt;/span&gt;
    &lt;span class="s"&gt;exporters:&lt;/span&gt;
      &lt;span class="s"&gt;logging:&lt;/span&gt;
        &lt;span class="s"&gt;loglevel: debug&lt;/span&gt;
      &lt;span class="s"&gt;prometheus:&lt;/span&gt;
        &lt;span class="s"&gt;endpoint: "0.0.0.0:8889"&lt;/span&gt;
    &lt;span class="s"&gt;processors:&lt;/span&gt;
      &lt;span class="s"&gt;batch:&lt;/span&gt;
    &lt;span class="s"&gt;service:&lt;/span&gt;
      &lt;span class="s"&gt;pipelines:&lt;/span&gt;
        &lt;span class="s"&gt;metrics:&lt;/span&gt;
          &lt;span class="s"&gt;receivers: [otlp]&lt;/span&gt;
          &lt;span class="s"&gt;processors: [batch]&lt;/span&gt;
          &lt;span class="s"&gt;exporters: [logging, prometheus]&lt;/span&gt;
&lt;span class="s"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-collector&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-collector&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-collector&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-collector&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;collector&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel/opentelemetry-collector:latest&lt;/span&gt;
          &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--config=/etc/otelconf/otel-collector-config.yaml&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-http&lt;/span&gt;
              &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4318&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-grpc&lt;/span&gt;
              &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4317&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prom-metrics&lt;/span&gt;
              &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8889&lt;/span&gt;
          &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-config&lt;/span&gt;
              &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/otelconf&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-config&lt;/span&gt;
          &lt;span class="na"&gt;configMap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-collector-config&lt;/span&gt;
            &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-collector-config&lt;/span&gt;
                &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-collector-config.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-collector&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-collector&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-http&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4318&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4318&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-grpc&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4317&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4317&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prom-metrics&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8889&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prom-metrics&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-collector&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s apply the manifest.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ./manifests/otel-collector.yaml

configmap/otel-collector-config created
deployment.apps/otel-collector created
service/otel-collector created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check that everything runs as expected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods,deployments,services,endpoints

NAME                                READY   STATUS    RESTARTS   AGE
pod/otel-collector-5cd54c49b4-gdk9f   1/1     Running   0          5m13s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/otel-collector   1/1     1            1           5m13s

NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                      AGE
service/kubernetes       ClusterIP   10.96.0.1      &amp;lt;none&amp;gt;        443/TCP                      22m
service/otel-collector   ClusterIP   10.96.194.28   &amp;lt;none&amp;gt;        4318/TCP,4317/TCP,8889/TCP   5m13s

NAME                       ENDPOINTS                                         AGE
endpoints/kubernetes       172.19.0.9:6443                                   22m
endpoints/otel-collector   10.244.1.2:8889,10.244.1.2:4318,10.244.1.2:4317   5m13s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that the OpenTelemetry infrastructure is in place, let’s add the workload exposing the custom metrics.&lt;/p&gt;
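
&lt;p&gt;One detail worth knowing before querying the data: Prometheus metric names must not contain dots, so the Prometheus exporter of the collector sanitizes them to underscores. The two gauges will therefore show up as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;otel_demo_metric_gauge1
otel_demo_metric_gauge2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;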

&lt;h3&gt;
  
  
  ASP.NET Workload
&lt;/h3&gt;

&lt;p&gt;The demo application has been containerized and published to the GitHub container registry for your convenience. So to add the workload to your cluster, simply apply &lt;code&gt;./manifests/otel-demo-workload.yaml&lt;/code&gt;, which contains the &lt;code&gt;Deployment&lt;/code&gt; manifest and sets two environment variables to configure the OTEL collector endpoint and the OTLP protocol to use – in this case &lt;code&gt;gRPC&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here’s the relevant part:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/cdennig/otel-demo:1.0&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;otel-demo&lt;/span&gt;
    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;OTEL_EXPORTER_OTLP_ENDPOINT&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://otel-collector.default.svc.cluster.local:4317"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;OTEL_EXPORTER_OTLP_PROTOCOL&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;grpc"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the manifest now.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ./manifests/otel-demo-workload.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remember that the application also logs to the console. Let’s query the logs of the ASP.NET service (note that the pod name will differ in your environment).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl logs po/otel-workload-69cc89d456-9zfs7

info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app/
Resource associated with Metric:
    service.name: otel.demo
    service.version: 0.0.1
    service.instance.id: b84c78be-49df-42fa-bd09-0ad13481d826

Export otel.demo.metric.gauge1, Meter: otel.demo.metric/0.0.1
&lt;span class="o"&gt;(&lt;/span&gt;2022-08-20T11:40:41.4260064Z, 2022-08-20T11:40:51.3451557Z] LongGauge
Value: 10

Export otel.demo.metric.gauge2, Meter: otel.demo.metric/0.0.1
&lt;span class="o"&gt;(&lt;/span&gt;2022-08-20T11:40:41.4274763Z, 2022-08-20T11:40:51.3451863Z] DoubleGauge
Value: 0.8778815716262417

Export otel.demo.metric.gauge1, Meter: otel.demo.metric/0.0.1
&lt;span class="o"&gt;(&lt;/span&gt;2022-08-20T11:40:41.4260064Z, 2022-08-20T11:41:01.3387999Z] LongGauge
Value: 19

Export otel.demo.metric.gauge2, Meter: otel.demo.metric/0.0.1
&lt;span class="o"&gt;(&lt;/span&gt;2022-08-20T11:40:41.4274763Z, 2022-08-20T11:41:01.3388003Z] DoubleGauge
Value: 0.35409627617124295
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
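&lt;p&gt;For reference, gauges like the ones in the log output above can be produced with the &lt;code&gt;System.Diagnostics.Metrics&lt;/code&gt; API. Here’s a minimal sketch – the meter and instrument names are taken from the output above, while the value callbacks are assumptions:&lt;/p&gt;

```csharp
using System.Diagnostics.Metrics;

// Meter name/version as seen in the exported output ("otel.demo.metric/0.0.1");
// the observed values below are placeholders, not the demo app's actual logic.
var meter = new Meter("otel.demo.metric", "0.0.1");

meter.CreateObservableGauge("otel.demo.metric.gauge1",
    () => (long)Random.Shared.Next(0, 100));   // exported as LongGauge
meter.CreateObservableGauge("otel.demo.metric.gauge2",
    () => Random.Shared.NextDouble());         // exported as DoubleGauge
```

The OpenTelemetry .NET metrics SDK picks these instruments up once the meter name is registered with the meter provider.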



&lt;p&gt;Also, let’s check whether the data arrives at the collector. Remember, it exposes its &lt;code&gt;/metrics&lt;/code&gt; endpoint on port &lt;code&gt;8889&lt;/code&gt;. Let’s query it by port-forwarding the service to our local machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl port-forward svc/otel-collector 8889:8889

Forwarding from 127.0.0.1:8889 -&amp;gt; 8889
Forwarding from &lt;span class="o"&gt;[&lt;/span&gt;::1]:8889 -&amp;gt; 8889

&lt;span class="c"&gt;# in a different session, curl the endpoint&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;curl http://localhost:8889/metrics

&lt;span class="c"&gt;# HELP otel_demo_metric_gauge1&lt;/span&gt;
&lt;span class="c"&gt;# TYPE otel_demo_metric_gauge1 gauge&lt;/span&gt;
otel_demo_metric_gauge1&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;instance&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"b84c78be-49df-42fa-bd09-0ad13481d826"&lt;/span&gt;,job&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"otel.demo"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; 37
&lt;span class="c"&gt;# HELP otel_demo_metric_gauge2&lt;/span&gt;
&lt;span class="c"&gt;# TYPE otel_demo_metric_gauge2 gauge&lt;/span&gt;
otel_demo_metric_gauge2&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;instance&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"b84c78be-49df-42fa-bd09-0ad13481d826"&lt;/span&gt;,job&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"otel.demo"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; 0.45433988869946285
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Great, both components – the metric producer and the collector – are working as expected. Now, let’s spin up the Prometheus/Grafana environment, add the service monitor to scrape the &lt;code&gt;/metrics&lt;/code&gt; endpoint and create the Grafana dashboard for it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Add Kube-Prometheus-Stack
&lt;/h3&gt;

&lt;p&gt;The easiest way to add the Prometheus/Grafana stack to your Kubernetes cluster is to use the &lt;a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="noopener noreferrer"&gt;kube-prometheus-stack&lt;/a&gt; Helm chart. We will use a custom &lt;code&gt;values.yaml&lt;/code&gt; file to automatically add a static Prometheus scrape target for the OTEL collector called &lt;code&gt;demo/otel-collector&lt;/code&gt; (the &lt;code&gt;kubeEtcd&lt;/code&gt; config is only needed in the &lt;code&gt;kind&lt;/code&gt; environment):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kubeEtcd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2381&lt;/span&gt;
&lt;span class="na"&gt;prometheus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;prometheusSpec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;additionalScrapeConfigs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;demo/otel-collector"&lt;/span&gt;
      &lt;span class="na"&gt;static_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;otel-collector.default.svc.cluster.local:8889"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, install the chart into your cluster by executing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; &lt;span class="nt"&gt;--wait&lt;/span&gt; &lt;span class="nt"&gt;--timeout&lt;/span&gt; 15m &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; monitoring &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--repo&lt;/span&gt; https://prometheus-community.github.io/helm-charts &lt;span class="se"&gt;\&lt;/span&gt;
  kube-prometheus-stack kube-prometheus-stack &lt;span class="nt"&gt;--values&lt;/span&gt; ./prom-grafana/values.yaml

Release &lt;span class="s2"&gt;"kube-prometheus-stack"&lt;/span&gt; does not exist. Installing it now.
NAME: kube-prometheus-stack
LAST DEPLOYED: Mon Aug 22 13:53:58 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl &lt;span class="nt"&gt;--namespace&lt;/span&gt; monitoring get pods &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s2"&gt;"release=kube-prometheus-stack"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s have a look at the Prometheus targets to verify that Prometheus can scrape the OTEL collector endpoint – again, port-forward the service to your local machine and open a browser at &lt;a href="http://localhost:9090/targets" rel="noopener noreferrer"&gt;http://localhost:9090/targets&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl port-forward &lt;span class="nt"&gt;-n&lt;/span&gt; monitoring svc/kube-prometheus-stack-prometheus 9090:9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2022%2F08%2Fimage.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2022%2F08%2Fimage.png%3Fw%3D1024" alt="Prometheus targets"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Prometheus targets&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That looks as expected. Now, let’s head over to Grafana and create a dashboard to display the custom metrics. As done before, port-forward the Grafana service to your local machine and open a browser at &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;. Because you need a username/password combination to log in to Grafana, we first need to grab that information from a Kubernetes secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Grafana admin username&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get secret &lt;span class="nt"&gt;-n&lt;/span&gt; monitoring kube-prometheus-stack-grafana &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.data.admin-user}'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;--decode&lt;/span&gt;

&lt;span class="c"&gt;# Grafana password&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get secret &lt;span class="nt"&gt;-n&lt;/span&gt; monitoring kube-prometheus-stack-grafana &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.data.admin-password}'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;--decode&lt;/span&gt;

&lt;span class="c"&gt;# port-forward Grafana service&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl port-forward &lt;span class="nt"&gt;-n&lt;/span&gt; monitoring svc/kube-prometheus-stack-grafana 3000:80

Forwarding from 127.0.0.1:3000 -&amp;gt; 3000
Forwarding from &lt;span class="o"&gt;[&lt;/span&gt;::1]:3000 -&amp;gt; 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After opening a browser at &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt; and a successful login, you should be greeted by the Grafana welcome page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2022%2F08%2Fimage-1.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2022%2F08%2Fimage-1.png%3Fw%3D1024" alt="Grafana Welcome Page"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Grafana Welcome Page&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Add a Dashboard for the Custom Metrics
&lt;/h3&gt;

&lt;p&gt;Head to &lt;a href="http://localhost:3000/dashboard/import"&gt;http://localhost:3000/dashboard/import&lt;/a&gt; and upload the pre-created dashboard from &lt;code&gt;./prom-grafana/dashboard.json&lt;/code&gt; (or simply paste its content into the textbox). After importing the definition, you should be redirected to the dashboard and see our custom metrics being displayed.&lt;/p&gt;
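&lt;p&gt;If you’d rather build the panels yourself, the queries behind such a dashboard boil down to simple selectors on the metric names we saw on the collector’s &lt;code&gt;/metrics&lt;/code&gt; endpoint (label values taken from the scrape output above; the panel setup itself is an assumption):&lt;/p&gt;

```promql
# panel 1 – the long gauge
otel_demo_metric_gauge1{job="otel.demo"}

# panel 2 – the double gauge
otel_demo_metric_gauge2{job="otel.demo"}
```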

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2022%2F08%2Fimage-2.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2022%2F08%2Fimage-2.png%3Fw%3D1024" alt="Add preconfigured dashboard"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Add preconfigured dashboard&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2022%2F08%2Fimage-3.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2022%2F08%2Fimage-3.png%3Fw%3D1024" alt="OTEL metrics gauge1 and gauge2"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;OTEL metrics gauge1 and gauge2&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap-Up
&lt;/h2&gt;

&lt;p&gt;This demo showed how to use OpenTelemetry custom metrics in an ASP.NET service, sending telemetry data to an OTEL collector instance that is scraped by a Prometheus instance. To close the loop, those custom metrics are eventually displayed in a Grafana dashboard. The advantage of this approach is that you rely on an open standard – OpenTelemetry – to generate and collect metrics. Which backend the data is finally sent to, and which solution is used to analyze it, can easily be exchanged via the OTEL exporter configuration: if you don’t want to use Prometheus, you simply adapt the OTEL pipeline and export the telemetry data to e.g. Azure Monitor, DataDog or Splunk.&lt;/p&gt;
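&lt;p&gt;To make that last point concrete, here is a hedged sketch of what such a backend swap looks like in the collector configuration – the alternative exporter name and endpoint are assumptions, check the documentation of the exporter you actually want to use:&lt;/p&gt;

```yaml
exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
  # hypothetical alternative backend reachable via OTLP/HTTP
  otlphttp:
    endpoint: "https://backend.example.com/otlp"

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      # switch the backend here – the instrumented app stays untouched
      exporters: [prometheus]
```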

&lt;p&gt;I hope the demo has given you a good introduction to the world of OpenTelemetry. Happy coding! 🙂&lt;/p&gt;

</description>
      <category>otel</category>
      <category>dotnet</category>
      <category>opentelemetry</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Secure Azure Cosmos DB access by using Azure Managed Identities</title>
      <dc:creator>Christian Dennig</dc:creator>
      <pubDate>Sun, 03 Oct 2021 12:05:11 +0000</pubDate>
      <link>https://forem.com/cdennig/secure-azure-cosmos-db-access-by-using-azure-managed-identities-kh0</link>
      <guid>https://forem.com/cdennig/secure-azure-cosmos-db-access-by-using-azure-managed-identities-kh0</guid>
      <description>&lt;p&gt;&lt;em&gt;Learn how to use Azure RBAC to connect to Cosmos DB and increase the security of your application by using Azure Managed Identities.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A few months ago, the Azure Cosmos DB team released a new feature that enables developers to use Azure principals (AAD users, service principals etc.) and Azure RBAC to connect to the database service. The feature &lt;strong&gt;significantly increases security&lt;/strong&gt; and can completely replace the use of connection strings and keys in your application – credentials you should normally rotate every now and then to minimize the risk of exposure. Furthermore, you can apply fine-grained authorization rules to those principals at account-, database- and container-level and control what each “user” is able to do.&lt;/p&gt;

&lt;p&gt;To completely get rid of passwords and keys in your service, Microsoft encourages developers to use “Managed Identities”. Managed identities provide an identity to applications and services to be used when connecting to Azure resources that support AAD authentication. There are two kinds of managed identities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;System-assigned identity&lt;/strong&gt; : created automatically by Azure at the service level (e.g. Azure AppService) and tied to the lifecycle of it. Only that specific Azure resource can then use the identity to request a token and it will be automatically removed when the service is deleted&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User-assigned identity&lt;/strong&gt; : created independently of an Azure service as a standalone resource in your subscription. The identity can be assigned to more than one resource and is not tied to the lifecycle of a particular Azure resource&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a managed identity is added, Azure also creates a dedicated certificate that is used to request a token from the Azure Active Directory token provider. When you finally want to access an AAD-enabled service like Cosmos DB or Azure KeyVault, you simply call the local metadata endpoint (of the underlying service/Azure VM – &lt;a href="http://169.254.169.254/metadata/identity/oauth2/token" rel="noopener noreferrer"&gt;http://169.254.169.254/metadata/identity/oauth2/token&lt;/a&gt;) to request a token, which is then used to authenticate against the service – no need for a password, as the certificate of the assigned managed identity is used for authentication. By the way, the certificate is automatically rotated before it expires, so you don’t need to worry about that at all. This is one big advantage of using managed identities over service principals, where you would have to deal with expiring passwords yourself.&lt;/p&gt;
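&lt;p&gt;To make the token flow tangible, here’s a sketch of the request the SDKs issue under the hood. The endpoint address and the &lt;code&gt;Metadata&lt;/code&gt; header are fixed by the Instance Metadata Service; the resource URI for Cosmos DB below is an assumption – verify it against the Cosmos DB RBAC documentation:&lt;/p&gt;

```shell
# The IMDS token endpoint is fixed and link-local; it only answers
# requests that carry the "Metadata: true" header.
IMDS="http://169.254.169.254/metadata/identity/oauth2/token"
echo "$IMDS"
# On an Azure VM / AKS node you would add two query parameters
# (joined with an ampersand) and call the endpoint, e.g.:
#   api-version=2018-02-01
#   resource=https://cosmos.azure.com   # assumed audience for Cosmos DB
#   curl -s -H "Metadata: true" "$IMDS?..."   # returns a JSON access token
```

Note that this endpoint is only reachable from inside an Azure VM; calling it from your local machine will simply time out.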

&lt;h2&gt;
  
  
  Sample
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;If you want to follow along with the tutorial, clone the sample repository from my personal GitHub account: &lt;a href="https://github.com/cdennig/cosmos-managed-identity" rel="noopener noreferrer"&gt;https://github.com/cdennig/cosmos-managed-identity&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To see managed identities and the Cosmos DB RBAC feature in action, we’ll first create a user-assigned identity and a database, and then add a custom Cosmos DB role and assign it to that identity.&lt;/p&gt;

&lt;p&gt;We will use a combination of Azure Bicep and the Azure CLI. So first, let’s create a resource group and the managed identity:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;az group create &lt;span class="nt"&gt;-n&lt;/span&gt; rg-cosmosrbac &lt;span class="nt"&gt;-l&lt;/span&gt; westeurope
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"id"&lt;/span&gt;: &lt;span class="s2"&gt;"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-cosmosrbac"&lt;/span&gt;,
  &lt;span class="s2"&gt;"location"&lt;/span&gt;: &lt;span class="s2"&gt;"westeurope"&lt;/span&gt;,
  &lt;span class="s2"&gt;"managedBy"&lt;/span&gt;: null,
  &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"rg-cosmosrbac"&lt;/span&gt;,
  &lt;span class="s2"&gt;"properties"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"provisioningState"&lt;/span&gt;: &lt;span class="s2"&gt;"Succeeded"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;,
  &lt;span class="s2"&gt;"tags"&lt;/span&gt;: null,
  &lt;span class="s2"&gt;"type"&lt;/span&gt;: &lt;span class="s2"&gt;"Microsoft.Resources/resourceGroups"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;az identity create &lt;span class="nt"&gt;--name&lt;/span&gt; umid-cosmosid &lt;span class="nt"&gt;--resource-group&lt;/span&gt; rg-cosmosrbac &lt;span class="nt"&gt;--location&lt;/span&gt; westeurope
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"clientId"&lt;/span&gt;: &lt;span class="s2"&gt;"c9e48f4e-24c7-46af-834e-96cd48c2ce27"&lt;/span&gt;,
  &lt;span class="o"&gt;[&lt;/span&gt;...]
  &lt;span class="o"&gt;[&lt;/span&gt;...]
  &lt;span class="o"&gt;[&lt;/span&gt;...]
  &lt;span class="s2"&gt;"id"&lt;/span&gt;: &lt;span class="s2"&gt;"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/rg-cosmosrbac/providers/Microsoft.ManagedIdentity/userAssignedIdentities/umid-cosmosid"&lt;/span&gt;,
  &lt;span class="s2"&gt;"location"&lt;/span&gt;: &lt;span class="s2"&gt;"westeurope"&lt;/span&gt;,
  &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"umid-cosmosid"&lt;/span&gt;,
  &lt;span class="s2"&gt;"principalId"&lt;/span&gt;: &lt;span class="s2"&gt;"63cf3af1-7ee2-4d4c-9fe6-deb936065faa"&lt;/span&gt;,
  &lt;span class="s2"&gt;"resourceGroup"&lt;/span&gt;: &lt;span class="s2"&gt;"rg-cosmosrbac"&lt;/span&gt;,
  &lt;span class="s2"&gt;"tags"&lt;/span&gt;: &lt;span class="o"&gt;{}&lt;/span&gt;,
  &lt;span class="s2"&gt;"tenantId"&lt;/span&gt;: &lt;span class="s2"&gt;"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"&lt;/span&gt;,
  &lt;span class="s2"&gt;"type"&lt;/span&gt;: &lt;span class="s2"&gt;"Microsoft.ManagedIdentity/userAssignedIdentities"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, let’s see how we create the Cosmos DB account, database and the container for the data. To create these resources, we’ll use an Azure Bicep template. Let’s have a look at the different parts of it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var location = resourceGroup().location
var dbName = 'rbacsample'
var containerName = 'data'

// Cosmos DB Account
resource cosmosDbAccount 'Microsoft.DocumentDB/databaseAccounts@2021-06-15' = {
  name: 'cosmos-${uniqueString(resourceGroup().id)}'
  location: location
  kind: 'GlobalDocumentDB'
  properties: {
    consistencyPolicy: {
      defaultConsistencyLevel: 'Session'
    }
    locations: [
      {
        locationName: location
        failoverPriority: 0
      }
    ]
    capabilities: [
      {
        name: 'EnableServerless'
      }
    ]
    disableLocalAuth: false
    databaseAccountOfferType: 'Standard'
    enableAutomaticFailover: true
    publicNetworkAccess: 'Enabled'
  }
}

// Cosmos DB database
resource cosmosDbDatabase 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases@2021-06-15' = {
  name: '${cosmosDbAccount.name}/${dbName}'
  location: location
  properties: {
    resource: {
      id: dbName
    }
  }
}

// Container
resource containerData 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers@2021-06-15' = {
  name: '${cosmosDbDatabase.name}/${containerName}'
  location: location
  properties: {
    resource: {
      id: containerName
      partitionKey: {
        paths: [
          '/partitionKey'
        ]
        kind: 'Hash'
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The template for these resources is straightforward: we create a (serverless) Cosmos DB account with a database called &lt;code&gt;rbacsample&lt;/code&gt; and a &lt;code&gt;data&lt;/code&gt; container. Nothing special here.&lt;/p&gt;

&lt;p&gt;Next, the template adds a Cosmos DB role definition. For that, we need to inject the principal ID of the previously created managed identity into the template.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@description('Principal ID of the managed identity')
param principalId string

var roleDefId = guid('sql-role-definition-', principalId, cosmosDbAccount.id)
var roleDefName = 'Custom Read/Write role'

resource roleDefinition 'Microsoft.DocumentDB/databaseAccounts/sqlRoleDefinitions@2021-06-15' = {
  name: '${cosmosDbAccount.name}/${roleDefId}'
  properties: {
    roleName: roleDefName
    type: 'CustomRole'
    assignableScopes: [
      cosmosDbAccount.id
    ]
    permissions: [
      {
        dataActions: [
          'Microsoft.DocumentDB/databaseAccounts/readMetadata'
          'Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/*'
        ]
      }
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see in the template that you can scope the role definition (property &lt;code&gt;assignableScopes&lt;/code&gt;), i.e. define at which level the role may be assigned. In this sample, it is scoped to the account level, but you can also choose “database” or “container” level (more information on that in the &lt;a href="https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-setup-rbac" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;In terms of permissions/actions, Cosmos DB offers fine-grained options that you can use when defining your custom role. You can even make use of wildcards, which we leverage in the current sample. Let’s have a look at a few of the predefined actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Microsoft.DocumentDB/databaseAccounts/readMetadata&lt;/code&gt;: read account metadata&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/create&lt;/code&gt;: create items&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/read&lt;/code&gt;: read items&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/delete&lt;/code&gt;: delete items&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/readChangeFeed&lt;/code&gt;: read the change feed&lt;/li&gt;
&lt;/ul&gt;
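&lt;p&gt;As an aside, the same kind of custom role can also be created imperatively via &lt;code&gt;az cosmosdb sql role definition create --body @role.json&lt;/code&gt; instead of Bicep. Here’s a sketch of such a body, with the assignable scope shortened – fill in your own Cosmos DB account resource ID:&lt;/p&gt;

```json
{
  "RoleName": "Custom Read/Write role",
  "Type": "CustomRole",
  "AssignableScopes": [
    "/subscriptions/.../providers/Microsoft.DocumentDB/databaseAccounts/..."
  ],
  "Permissions": [
    {
      "DataActions": [
        "Microsoft.DocumentDB/databaseAccounts/readMetadata",
        "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/*"
      ]
    }
  ]
}
```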

&lt;p&gt;If you want to get more information on the permission model and available actions, the &lt;a href="https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-setup-rbac#permission-model" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; goes into more detail. For now, let’s move on in our sample by assigning the custom role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var roleAssignId = guid(roleDefId, principalId, cosmosDbAccount.id)

resource roleAssignment 'Microsoft.DocumentDB/databaseAccounts/sqlRoleAssignments@2021-06-15' = {
  name: '${cosmosDbAccount.name}/${roleAssignId}'
  properties: {
    roleDefinitionId: roleDefinition.id
    principalId: principalId
    scope: cosmosDbAccount.id
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we’ll deploy the whole template (you can find the template in the root folder of the git repository):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ MI_PRINID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;az identity show &lt;span class="nt"&gt;-n&lt;/span&gt; umid-cosmosid &lt;span class="nt"&gt;-g&lt;/span&gt; rg-cosmosrbac &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"principalId"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; tsv&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;az deployment group create &lt;span class="nt"&gt;-f&lt;/span&gt; deploy.bicep &lt;span class="nt"&gt;-g&lt;/span&gt; rg-cosmosrbac &lt;span class="nt"&gt;--parameters&lt;/span&gt; &lt;span class="nv"&gt;principalId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$MI_PRINID&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; none
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the template has been successfully deployed, let’s create a sample application that uses the managed identity to access the Cosmos DB database.&lt;/p&gt;

&lt;h3&gt;
  
  
  Application
&lt;/h3&gt;

&lt;p&gt;Let’s create a basic application that uses the managed identity to access Cosmos DB and, for demo purposes, create one document in the &lt;code&gt;data&lt;/code&gt; container. Fortunately, when using the Cosmos DB SDK, things couldn’t be easier for you as a developer. You simply need to use an instance of the &lt;code&gt;DefaultAzureCredential&lt;/code&gt; class from the &lt;code&gt;Azure.Identity&lt;/code&gt; package (&lt;a href="https://docs.microsoft.com/en-us/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet" rel="noopener noreferrer"&gt;API reference&lt;/a&gt;), and all the “heavy lifting” – preparing and issuing the request to the local metadata endpoint and using the resulting token – is done automatically for you. Let’s have a look at the relevant parts of the application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;credential&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;DefaultAzureCredential&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;cosmosClient&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;CosmosClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_configuration&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"Cosmos:Uri"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;credential&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;container&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cosmosClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetContainer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_configuration&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"Cosmos:Db"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;_configuration&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"Cosmos:Container"&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;

&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;newId&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Guid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;NewGuid&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;ToString&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateItemAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;newId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;partitionKey&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;newId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Ted Lasso"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;PartitionKey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;newId&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;cancellationToken&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;stoppingToken&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
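&lt;p&gt;One hedged note on &lt;code&gt;DefaultAzureCredential&lt;/code&gt;: when more than one identity is available, the credential may need to know which &lt;em&gt;user-assigned&lt;/em&gt; identity to pick – either via the &lt;code&gt;AZURE_CLIENT_ID&lt;/code&gt; environment variable or explicitly in code (the client ID below is the one from the &lt;code&gt;az identity create&lt;/code&gt; output earlier):&lt;/p&gt;

```csharp
using Azure.Identity;

// Pin DefaultAzureCredential to a specific user-assigned managed identity
// by handing it the identity's client ID.
var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions
{
    ManagedIdentityClientId = "c9e48f4e-24c7-46af-834e-96cd48c2ce27"
});
```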



&lt;h2&gt;
  
  
  Use the application in Azure
&lt;/h2&gt;

&lt;p&gt;So far, we have everything prepared to use a managed identity to connect to Azure Cosmos DB: we created a user-assigned identity, a Cosmos DB database and a custom role tied to that identity. We also have a basic application that connects to the database via Azure RBAC and creates one document in the container. Let’s now create a service that hosts and runs the application.&lt;/p&gt;

&lt;p&gt;Sure, we could use an Azure AppService or an Azure Function, but let’s exaggerate a little bit here and create an Azure Kubernetes Service cluster with the Pod Identity plugin enabled. This plugin lets you assign predefined managed identities to pods in your cluster and transparently use the local token endpoint of the underlying Azure VM (be aware that at the time of writing, this plugin is still in preview).&lt;/p&gt;

&lt;p&gt;If you haven’t used the feature yet, you must first register it in your subscription and install the Azure CLI extension for AKS preview features:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;az feature register &lt;span class="nt"&gt;--name&lt;/span&gt; EnablePodIdentityPreview &lt;span class="nt"&gt;--namespace&lt;/span&gt; Microsoft.ContainerService

&lt;span class="nv"&gt;$ &lt;/span&gt;az extension add &lt;span class="nt"&gt;--name&lt;/span&gt; aks-preview
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The feature registration takes some time. You can check the status via the following command. The registration state should be “Registered” before continuing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;az feature show &lt;span class="nt"&gt;--name&lt;/span&gt; EnablePodIdentityPreview &lt;span class="nt"&gt;--namespace&lt;/span&gt; Microsoft.ContainerService &lt;span class="nt"&gt;-o&lt;/span&gt; table
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
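&lt;p&gt;Since registration can take several minutes, a small helper loop can do the waiting for you. This is an addition of mine following the usual AKS preview-feature workflow (the &lt;code&gt;az provider register&lt;/code&gt; refresh afterwards propagates the flag) and is not part of the original walkthrough:&lt;/p&gt;

```shell
# Poll the preview feature until it reports "Registered", then refresh
# the resource provider so the new flag propagates to future clusters.
# The guard keeps the script harmless on machines without the az CLI.
if command -v az >/dev/null; then
  until [ "$(az feature show --name EnablePodIdentityPreview \
      --namespace Microsoft.ContainerService \
      --query properties.state -o tsv)" = "Registered" ]; do
    echo "still registering, retrying in 20s..."
    sleep 20
  done
  az provider register --namespace Microsoft.ContainerService
  STATUS="registered"
else
  STATUS="skipped, az CLI not found"
fi
echo "feature registration: $STATUS"
```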



&lt;p&gt;Now, create the Kubernetes cluster. To keep things simple and cheap, let’s just add one node to the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;az aks create &lt;span class="nt"&gt;-g&lt;/span&gt; rg-cosmosrbac &lt;span class="nt"&gt;-n&lt;/span&gt; cosmosrbac &lt;span class="nt"&gt;--enable-pod-identity&lt;/span&gt; &lt;span class="nt"&gt;--network-plugin&lt;/span&gt; azure &lt;span class="nt"&gt;-c&lt;/span&gt; 1 &lt;span class="nt"&gt;--generate-ssh-keys&lt;/span&gt;

&lt;span class="c"&gt;# after the cluster has been created, download the credentials&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;az aks get-credentials &lt;span class="nt"&gt;-g&lt;/span&gt; rg-cosmosrbac &lt;span class="nt"&gt;-n&lt;/span&gt; cosmosrbac
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, to bind the managed identity to the cluster VM(s), we need to assign the &lt;code&gt;Virtual Machine Contributor&lt;/code&gt; role to it, scoped to the resource group that contains the cluster nodes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ MI_APPID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;az identity show &lt;span class="nt"&gt;-n&lt;/span&gt; umid-cosmosid &lt;span class="nt"&gt;-g&lt;/span&gt; rg-cosmosrbac &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"clientId"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; tsv&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;$ NODE_GROUP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;az aks show &lt;span class="nt"&gt;-g&lt;/span&gt; rg-cosmosrbac &lt;span class="nt"&gt;-n&lt;/span&gt; cosmosrbac &lt;span class="nt"&gt;--query&lt;/span&gt; nodeResourceGroup &lt;span class="nt"&gt;-o&lt;/span&gt; tsv&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;$ NODES_RESOURCE_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;az group show &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nv"&gt;$NODE_GROUP&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; tsv &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"id"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# assign the role&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;az role assignment create &lt;span class="nt"&gt;--role&lt;/span&gt; &lt;span class="s2"&gt;"Virtual Machine Contributor"&lt;/span&gt; &lt;span class="nt"&gt;--assignee&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$MI_APPID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--scope&lt;/span&gt; &lt;span class="nv"&gt;$NODES_RESOURCE_ID&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we are able to create a Pod Identity and assign our managed identity to it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ MI_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;az identity show &lt;span class="nt"&gt;-n&lt;/span&gt; umid-cosmosid &lt;span class="nt"&gt;-g&lt;/span&gt; rg-cosmosrbac &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"id"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; tsv&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;az aks pod-identity add &lt;span class="nt"&gt;--resource-group&lt;/span&gt; rg-cosmosrbac &lt;span class="nt"&gt;--cluster-name&lt;/span&gt; cosmosrbac &lt;span class="nt"&gt;--namespace&lt;/span&gt; default &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="s2"&gt;"cosmos-pod-identity"&lt;/span&gt; &lt;span class="nt"&gt;--identity-resource-id&lt;/span&gt; &lt;span class="nv"&gt;$MI_ID&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s look at our cluster and see what has been created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get azureidentities.aadpodidentity.k8s.io
NAME                AGE
cosmos-pod-identity 2m34s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F10%2Fpodidentity.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F10%2Fpodidentity.png%3Fw%3D1024" alt="Details of the pod identity"&gt;&lt;/a&gt;Details of the pod identity&lt;/p&gt;

&lt;p&gt;We now have the pod identity available in the cluster and are ready to deploy the sample application. If you want to create the Docker image on your own, you can use the &lt;code&gt;Dockerfile&lt;/code&gt; located in the &lt;code&gt;CosmosDemoRbac\CosmosDemoRbac&lt;/code&gt; folder of the repository. For your convenience, there’s already a pre-built / pre-published image on Docker Hub. We can now create the pod manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# contents of pod.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;aadpodidbinding&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cosmos-pod-identity"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;chrisdennig/cosmos-mi:1.1&lt;/span&gt;
    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cosmos__Uri&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://cosmos-2varraanuegcs.documents.azure.com:443/"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cosmos__Db&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rbacsample"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cosmos__Container&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data"&lt;/span&gt;
  &lt;span class="na"&gt;nodeSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes.io/os&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;linux&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
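&lt;p&gt;A note on the environment variable names in the manifest: ASP.NET Core’s environment-variable configuration provider translates a double underscore into the &lt;code&gt;:&lt;/code&gt; section separator, which is why &lt;code&gt;Cosmos__Uri&lt;/code&gt; surfaces as the configuration key &lt;code&gt;Cosmos:Uri&lt;/code&gt; that the application reads. A quick illustration of the mapping:&lt;/p&gt;

```shell
# The "__" to ":" translation applied by ASP.NET Core's environment
# variable configuration provider, mimicked with sed for illustration.
for v in Cosmos__Uri Cosmos__Db Cosmos__Container; do
  echo "$v maps to $(echo "$v" | sed 's/__/:/g')"
done
```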



&lt;p&gt;The most important part here is line 7 (&lt;code&gt;aadpodidbinding: "cosmos-pod-identity"&lt;/code&gt;) where we bind the previously created pod identity to our pod. This will enable the application container within the pod to call the local metadata endpoint on the cluster VM to request an access token that is then used to authenticate against Cosmos DB. Let’s deploy it (you’ll find the &lt;code&gt;pod.yaml&lt;/code&gt; file in the root folder).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the pod starts, the container will log some information to &lt;code&gt;stdout&lt;/code&gt;, indicating whether the connection to Cosmos DB was successful. So, let’s query the logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl logs demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F10%2Fpod_output.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F10%2Fpod_output.png%3Fw%3D1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Seems like everything works as expected. Open your database in the Azure Portal and have a look at the &lt;code&gt;data&lt;/code&gt; container. You should see one entry:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F10%2Fsample_data.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F10%2Fsample_data.png%3Fw%3D1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus: No more Primary Keys!
&lt;/h2&gt;

&lt;p&gt;So far, we can connect to Cosmos DB via a user-assigned Azure Managed Identity. To increase the security of such a scenario even further, you can now &lt;strong&gt;disable the use of connection strings and Cosmos DB account keys&lt;/strong&gt; altogether, preventing anyone from using this kind of information to access your account.&lt;/p&gt;

&lt;p&gt;To do this, open the &lt;code&gt;deploy.bicep&lt;/code&gt; file and set the parameter &lt;code&gt;disableLocalAuth&lt;/code&gt; on line 31 to &lt;code&gt;true&lt;/code&gt; (we do this only now, because it also disables data access via the Portal). Now, reapply the template and try to open the Data Explorer in the Azure Portal again. You’ll see that you can’t access your data anymore, because the portal uses the Cosmos DB connection strings / account keys under the hood.&lt;/p&gt;
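&lt;p&gt;For reference, the property in question sits on the Cosmos DB account resource itself. The following Bicep fragment is a sketch with illustrative names and API version – only &lt;code&gt;disableLocalAuth&lt;/code&gt; is the point here; the repo’s &lt;code&gt;deploy.bicep&lt;/code&gt; may structure this differently:&lt;/p&gt;

```bicep
// Sketch: disabling key/connection-string auth on a Cosmos DB account.
// Resource name and API version are illustrative assumptions.
param disableLocalAuth bool = true

resource cosmosAccount 'Microsoft.DocumentDB/databaseAccounts@2021-06-15' = {
  name: 'cosmos-rbac-demo'
  location: resourceGroup().location
  properties: {
    databaseAccountOfferType: 'Standard'
    locations: [
      {
        locationName: resourceGroup().location
      }
    ]
    // when true, only Azure AD / RBAC-based data access is allowed
    disableLocalAuth: disableLocalAuth
  }
}
```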

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F10%2Flocalauthdisabled.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F10%2Flocalauthdisabled.png%3Fw%3D1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Getting rid of passwords (or connection strings / keys) when accessing Azure services and using Managed Identities instead is a fantastic way to increase the security of your workloads running in Azure. This sample demonstrated how to use this approach in combination with Cosmos DB and Azure Kubernetes Service (Pod Identity feature). It is – of course – not limited to these resources. You can use this technique with other Azure services like Azure KeyVault, Azure Storage Accounts, Azure SQL DBs etc. Microsoft even encourages you to apply this approach to eliminate probably the most exploited attack vector in cloud computing: exposed passwords (e.g. via misconfiguration, leaked credentials in code repositories etc.).&lt;/p&gt;

&lt;p&gt;I hope this tutorial showed you how easy it is to apply this approach and inspires you to use it in your next project. Happy hacking, friends! 🖖&lt;/p&gt;


&lt;p&gt;You can find the source code on my private GitHub account: &lt;a href="https://github.com/cdennig/cosmos-managed-identity" rel="noopener noreferrer"&gt;https://github.com/cdennig/cosmos-managed-identity&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Header image source: &lt;a href="https://unsplash.com/@markuswinkler" rel="noopener noreferrer"&gt;https://unsplash.com/@markuswinkler&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>kubernetes</category>
      <category>aad</category>
      <category>cosmosdb</category>
    </item>
    <item>
      <title>Using the Transactional Outbox Pattern with Azure Cosmos DB for Guaranteed Delivery of Domain Events</title>
      <dc:creator>Christian Dennig</dc:creator>
      <pubDate>Wed, 22 Sep 2021 10:08:55 +0000</pubDate>
      <link>https://forem.com/cdennig/using-the-transactional-outbox-pattern-with-azure-cosmos-db-for-guaranteed-delivery-of-domain-events-5df2</link>
      <guid>https://forem.com/cdennig/using-the-transactional-outbox-pattern-with-azure-cosmos-db-for-guaranteed-delivery-of-domain-events-5df2</guid>
      <description>&lt;p&gt;&lt;em&gt;This article is about how to build a resilient architecture for distributed applications leveraging basic Domain-Driven Design and the Transactional Outbox Pattern with Azure Cosmos DB and Azure Service Bus.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;(&lt;strong&gt;STOP!&lt;/strong&gt; This article has been migrated to (and will be kept up-to-date in) the Azure Architecture Center: &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/best-practices/transactional-outbox-cosmos" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/architecture/best-practices/transactional-outbox-cosmos&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Microservice architectures are becoming increasingly popular and promise to solve problems like scalability, maintainability and agility by splitting your once monolithic application into smaller (micro-) services that can operate and scale on their own. While you want these services to be independent of the other services of your application (no tight coupling!), this also means that each service manages the data it needs to operate independently, in its own dedicated datastore. In such a distributed, microservice-oriented application, you will end up running multiple datastores, with data often replicated between different services of your application.&lt;/p&gt;

&lt;p&gt;How does such a service get hold of all the data it needs to run properly? Typically, you use a messaging solution like RabbitMQ, Kafka or Azure Service Bus that distributes events from your microservices via a message bus after a business object has been created, modified, deleted etc. By following such an approach, you avoid calling – and therefore tightly coupling – your services.&lt;/p&gt;




&lt;p&gt;Let’s have a look at the “Hello, World!” example in that space: in an &lt;em&gt;Ordering&lt;/em&gt; service, when a user wants to create a new order, the service would receive the payload from a client application via a REST endpoint, map the data to the internal representation of an &lt;em&gt;Order&lt;/em&gt; object (validating it along the way) and, after a successful commit to the database, publish an &lt;em&gt;OrderCreated&lt;/em&gt; event to a message bus. Any other service interested in newly created orders (e.g. an &lt;em&gt;Inventory&lt;/em&gt; or &lt;em&gt;Invoicing&lt;/em&gt; service) would subscribe to these &lt;em&gt;OrderCreated&lt;/em&gt; messages and process them accordingly. The following pseudo code shows how this would typically look in the &lt;em&gt;Ordering&lt;/em&gt; service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;
&lt;span class="nf"&gt;CreateNewOrder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CreateOrderDto&lt;/span&gt; &lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
  &lt;span class="c1"&gt;// validate the incoming data&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;
  &lt;span class="c1"&gt;// do some other stuff&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;
  &lt;span class="c1"&gt;// Save the object to the database&lt;/span&gt;
  &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_orderRespository&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="c1"&gt;// finally, publish the respective event&lt;/span&gt;
  &lt;span class="n"&gt;_messagingService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Publish&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;OrderCreatedEvent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That all sounds wonderful until something unexpected happens between the last two steps of the implementation – saving the order and publishing the event. Imagine you have successfully saved the &lt;em&gt;Order&lt;/em&gt; object to the database and are in the middle of publishing the event to the message bus when your service crashes, the message bus is unavailable (for whatever reason) or the network between your service and the messaging system has a problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F08%2Fimage-1.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F08%2Fimage-1.png%3Fw%3D1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a nutshell: you cannot publish the &lt;em&gt;OrderCreated&lt;/em&gt; event to “the outside world” although the actual object has already been saved. Now what? You end up having a severe problem and start saving the events to a secondary service like an Azure Storage Account to process them later or – even worse – write code that keeps track of all the not-yet-published events in memory. Worst case, you end up with data inconsistencies in your other services, because events are lost. Debugging these kinds of problems is a pure nightmare, and you want to avoid such situations at all costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;So, what is the solution for the problem described above? There is a well-known pattern called &lt;strong&gt;Transactional Outbox Pattern&lt;/strong&gt; that will help us here. The pattern ensures that events will be saved in a datastore (typically in an &lt;em&gt;Outbox&lt;/em&gt; table of your database) before ultimately pushing them to the message broker. If the business object and the corresponding events are saved &lt;strong&gt;within the same database transaction&lt;/strong&gt; , it is also guaranteed that no data will be lost – either everything will be committed or rolled back. To eventually publish the event, a different service or worker process can query the &lt;em&gt;Outbox&lt;/em&gt; table for new entries, publish the events and mark them as “processed”. Using this pattern, you ensure that events will never be lost after creating or modifying a business object.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F09%2Fimage.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F09%2Fimage.png%3Fw%3D1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a traditional database, the implementation in your service is fairly easy. If you e.g. use Entity Framework Core, you will use your EF context to create a database transaction, save the business object and the event and commit the transaction – or do a rollback in case of an error. Even the worker service that is processing the events data is straightforward: periodically query the &lt;em&gt;Outbox&lt;/em&gt; table for new entries, publish the event payload to the message bus and mark the entry as “processed”.&lt;/p&gt;

&lt;p&gt;Of course, in real life things are not as easy as they might look at first glance. Most importantly, you need to make sure that the order of the events, as they happened in the application, is preserved, so that an &lt;em&gt;OrderUpdated&lt;/em&gt; event doesn’t get published before an &lt;em&gt;OrderCreated&lt;/em&gt; event.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to implement the pattern with Azure Cosmos DB?
&lt;/h2&gt;

&lt;p&gt;So far, we discussed the &lt;strong&gt;Transactional Outbox Pattern&lt;/strong&gt; in theory and how it can help to implement reliable messaging in distributed applications. But how can we use the pattern in combination with Azure Cosmos DB and leverage some of the unique features like Cosmos DB Change Feed to make our developer life easier? Let’s have a detailed look at it by implementing a sample service that is responsible for managing &lt;em&gt;Contact&lt;/em&gt; objects (Firstname, Lastname, Email, Company Information etc.). The sample service uses the CQRS pattern and follows basic Domain-Driven Design concepts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Azure Cosmos DB
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F09%2Fcosmosdb-logo-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F09%2Fcosmosdb-logo-1.png" alt="Cosmos DB Logo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, Azure Cosmos DB – in case you haven’t heard of the service so far – is a globally distributed NoSQL database service that lets you use different APIs like Gremlin, MongoDB and the Core SQL API (which we will focus on in this article) to manage your application data – with 99.999% SLAs on availability. Cosmos DB can automatically horizontally scale your database, promising unlimited storage and throughput. The service keeps its promise by partitioning your data based on a user-defined key provided at container (the place where your documents will be stored) creation, the so-called “partition key”. That means your data is divided into logical subsets within a container based on that partition key – you may have also heard of the term “sharding” in case you are familiar with MongoDB. Under the hood, Cosmos DB will then distribute these logical subsets of your data across (multiple) physical partitions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cosmos DB Transactional Batches
&lt;/h3&gt;

&lt;p&gt;Due to the mechanism described above, transactions in Cosmos DB work slightly differently from what you might be familiar with in relational database systems. Cosmos DB transactions – called “transactional batches” – operate on a single logical partition and thereby guarantee ACID properties. That means you can’t save two documents within one transactional batch operation across different containers or even different logical partitions (read: with different “partition keys”). Because of that, the implementation of the &lt;strong&gt;Transactional Outbox Pattern&lt;/strong&gt; with Cosmos DB differs slightly from the approach in a relational database, where you would simply have two tables: one for the business object and one &lt;em&gt;Outbox&lt;/em&gt; table for the events. In Cosmos DB, we need to save the business object and the events data in the same logical partition (in the same container). The following sections describe how this can be achieved.&lt;/p&gt;

&lt;h3&gt;
  
  
  Custom Container Context, Repositories and UnitOfWork
&lt;/h3&gt;

&lt;p&gt;The most important part of the implementation is a “custom container context” that is responsible for keeping track of objects that need to be saved in the same transactional batch. Think of a very lightweight “Entity Framework context” maintaining a list of created/modified objects that also knows in which Cosmos DB container (and logical partition) these objects need to be saved. Here’s the interface for it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;IContainerContext&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;Container&lt;/span&gt; &lt;span class="n"&gt;Container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IDataObject&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Entity&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;DataObjects&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;IDataObject&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Entity&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;entity&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IDataObject&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Entity&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; 
      &lt;span class="nf"&gt;SaveChangesAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CancellationToken&lt;/span&gt; &lt;span class="n"&gt;cancellationToken&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Reset&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The list within the “container context” component will hold &lt;em&gt;Contact&lt;/em&gt; as well as &lt;em&gt;DomainEvent&lt;/em&gt; objects, and both will be put in the same container – yes, we are mixing multiple types of objects in the same Cosmos DB container and use a &lt;em&gt;Type&lt;/em&gt; property to distinguish between an “entity” and an “event”.&lt;/p&gt;
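&lt;p&gt;To make that more tangible, two documents sharing the same logical partition might look like this (property names and values are purely illustrative, not taken from the sample repository):&lt;/p&gt;

```json
[
  {
    "id": "aa11-contact-id",
    "partitionKey": "aa11-contact-id",
    "type": "entity",
    "firstName": "Ted",
    "lastName": "Lasso"
  },
  {
    "id": "bb22-event-id",
    "partitionKey": "aa11-contact-id",
    "type": "event",
    "name": "ContactCreated",
    "processed": false
  }
]
```

&lt;p&gt;Because both documents share the same partition key, they can be written in a single transactional batch.&lt;/p&gt;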

&lt;p&gt;For each type there exists a dedicated repository that defines/implements the data access. The &lt;em&gt;Contact&lt;/em&gt; repository interface offers the following methods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;IContactsRepository&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Contact&lt;/span&gt; &lt;span class="n"&gt;contact&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;(&lt;/span&gt;&lt;span class="n"&gt;Contact&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;ReadAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Guid&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;etag&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt; &lt;span class="nf"&gt;DeleteAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Guid&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;etag&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;(&lt;/span&gt;&lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;(&lt;/span&gt;&lt;span class="n"&gt;Contact&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&amp;gt;,&lt;/span&gt; &lt;span class="kt"&gt;bool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&amp;gt;&lt;/span&gt; 
      &lt;span class="nf"&gt;ReadAllAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;pageSize&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;continuationToken&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Contact&lt;/span&gt; &lt;span class="n"&gt;contact&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;etag&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;Event&lt;/em&gt; repository looks similar, except that there is only one method to create new events in the store:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;IEventRepository&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ContactDomainEvent&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The implementations of both repository interfaces get a reference via dependency injection to an &lt;em&gt;IContainerContext&lt;/em&gt; instance to make sure that both &lt;strong&gt;operate on the same context&lt;/strong&gt;.&lt;/p&gt;
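&lt;p&gt;With ASP.NET Core’s built-in dependency injection, this can be achieved by registering the context with a “scoped” lifetime, so that the repositories and the unit of work resolve the same instance per request. A minimal sketch – the implementation class names are assumptions:&lt;/p&gt;

```csharp
// One IContainerContext instance per request: both repositories and the
// UnitOfWork share the same change-tracking list.
builder.Services.AddScoped<IContainerContext, CosmosContainerContext>();
builder.Services.AddScoped<IContactsRepository, ContactsRepository>();
builder.Services.AddScoped<IEventRepository, EventRepository>();
builder.Services.AddScoped<IUnitOfWork, UnitOfWork>();
```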

&lt;p&gt;The final component is a &lt;em&gt;UnitOfWork&lt;/em&gt; that is responsible for committing the changes held in the &lt;em&gt;IContainerContext&lt;/em&gt; instance to the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;UnitOfWork&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;IUnitOfWork&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="n"&gt;IContainerContext&lt;/span&gt; &lt;span class="n"&gt;_context&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;IContactRepository&lt;/span&gt; &lt;span class="n"&gt;ContactsRepo&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;UnitOfWork&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;IContainerContext&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;IContactRepository&lt;/span&gt; &lt;span class="n"&gt;cRepo&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;_context&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="n"&gt;ContactsRepo&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cRepo&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IDataObject&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Entity&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; 
        &lt;span class="nf"&gt;CommitAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CancellationToken&lt;/span&gt; &lt;span class="n"&gt;cancellationToken&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;_context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;SaveChangesAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cancellationToken&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We now have the components in place for data access. Let’s see how events are created and published.&lt;/p&gt;
&lt;h3&gt;
  
  
  Event Handling – Creation and Publication
&lt;/h3&gt;

&lt;p&gt;Every time a &lt;em&gt;Contact&lt;/em&gt; object is created, modified or (soft-)deleted, we want the system to raise a corresponding event, so that other services interested in these changes can be notified immediately. The core of the solution provided in this article is a combination of Domain-Driven Design and the domain events pattern as proposed by Jimmy Bogard [&lt;a href="https://lostechies.com/jimmybogard/2014/05/13/a-better-domain-events-pattern/" rel="noopener noreferrer"&gt;1&lt;/a&gt;]. He suggests maintaining a list of domain events that occurred due to modifications of the domain object and publishing these events &lt;strong&gt;before saving the actual object to the database&lt;/strong&gt;. The list of changes is kept in the domain object itself, so that no other component can tamper with the chain of events. The behavior of maintaining such events (&lt;em&gt;IEvent&lt;/em&gt; instances) in a domain object is defined via an interface &lt;em&gt;IEventEmitter&lt;/em&gt; and implemented in an abstract &lt;em&gt;DomainEntity&lt;/em&gt; class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;abstract&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DomainEntity&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Entity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;IEventEmitter&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IEvent&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IEvent&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;_events&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;JsonIgnore&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;IReadOnlyList&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IEvent&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;DomainEvents&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; 
        &lt;span class="n"&gt;_events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AsReadOnly&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;virtual&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;AddEvent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;IEvent&lt;/span&gt; &lt;span class="n"&gt;domainEvent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;FindIndex&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Action&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="n"&gt;domainEvent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Action&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;_events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;domainEvent&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;else&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;_events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;RemoveAt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="n"&gt;_events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Insert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;domainEvent&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Raising/adding domain events is the responsibility of the &lt;em&gt;Contact&lt;/em&gt; object itself. As mentioned before, the &lt;em&gt;Contact&lt;/em&gt; entity follows basic DDD concepts. In this case, it means that you can’t modify any domain properties “from outside”, as the class has no public setters. Instead, it offers dedicated methods to manipulate the state and can therefore raise the appropriate event for each modification (e.g. “NameUpdated”, “EmailUpdated” etc.).&lt;/p&gt;

&lt;p&gt;Here’s an example of updating the name of a contact – the event is raised via the &lt;em&gt;AddEvent&lt;/em&gt; call near the end of the method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;SetName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;firstName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;lastName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;IsNullOrWhiteSpace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;firstName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;||&lt;/span&gt; 
        &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;IsNullOrWhiteSpace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lastName&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;throw&lt;/span&gt; 
            &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ArgumentException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"FirstName or LastName may not be empty"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;firstName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;lastName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;IsNew&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="nf"&gt;AddEvent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ContactNameUpdatedEvent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="n"&gt;ModifiedAt&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;DateTimeOffset&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Now&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The corresponding &lt;em&gt;ContactNameUpdatedEvent&lt;/em&gt; that simply keeps track of the individual changes to the domain object looks as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ContactNameUpdatedEvent&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ContactDomainEvent&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;Name&lt;/span&gt; &lt;span class="n"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;ContactNameUpdatedEvent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Guid&lt;/span&gt; &lt;span class="n"&gt;contactId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Name&lt;/span&gt; &lt;span class="n"&gt;contactName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; 
        &lt;span class="k"&gt;base&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Guid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;NewGuid&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;contactId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;nameof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ContactNameUpdatedEvent&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;contactName&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So far, we only “log” the events in the domain object – nothing gets published or even saved to the database yet. How can we persist the events together with the business object? It’s done right before saving the domain object to Cosmos DB.&lt;/p&gt;

&lt;p&gt;Within the &lt;em&gt;SaveChangesAsync&lt;/em&gt; method of the &lt;em&gt;IContainerContext&lt;/em&gt; implementation, we simply loop over all objects tracked in the container context and publish the events maintained in these objects. It’s all done in a private &lt;em&gt;RaiseDomainEvents&lt;/em&gt; method (&lt;em&gt;dObjs&lt;/em&gt; is the list of tracked entities of the container context):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;
&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;RaiseDomainEvents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IDataObject&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Entity&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;dObjs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;eventEmitters&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IEventEmitter&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IEvent&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;gt;();&lt;/span&gt;

    &lt;span class="c1"&gt;// Get all EventEmitters&lt;/span&gt;
    &lt;span class="k"&gt;foreach&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;dObjs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Data&lt;/span&gt; &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="n"&gt;IEventEmitter&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IEvent&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;ee&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;eventEmitters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ee&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Raise Events&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;eventEmitters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Count&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;foreach&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;evt&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;eventEmitters&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;SelectMany&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;eventEmitter&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;eventEmitter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DomainEvents&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="n"&gt;_mediator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Publish&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;evt&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the last statement of the method – the call to &lt;em&gt;_mediator.Publish&lt;/em&gt; – we use the &lt;em&gt;MediatR&lt;/em&gt; library to publish each event within our application. This is possible because all events – like &lt;em&gt;ContactNameUpdatedEvent&lt;/em&gt; – also implement the &lt;em&gt;INotification&lt;/em&gt; interface of the &lt;em&gt;MediatR&lt;/em&gt; package.&lt;/p&gt;

&lt;p&gt;Of course, we also need to handle these events – and this is where the &lt;em&gt;IEventRepository&lt;/em&gt; implementation comes into play. Let’s have a look at such an event handler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ContactNameUpdatedHandler&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;INotificationHandler&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ContactNameUpdatedEvent&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="n"&gt;IEventRepository&lt;/span&gt; &lt;span class="n"&gt;EventRepository&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;ContactNameUpdatedHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;IEventRepository&lt;/span&gt; &lt;span class="n"&gt;eventRepo&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;EventRepository&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;eventRepo&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt; &lt;span class="nf"&gt;Handle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ContactNameUpdatedEvent&lt;/span&gt; &lt;span class="n"&gt;notification&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
        &lt;span class="n"&gt;CancellationToken&lt;/span&gt; &lt;span class="n"&gt;cancellationToken&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;EventRepository&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;notification&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CompletedTask&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see that an &lt;em&gt;IEventRepository&lt;/em&gt; instance gets injected into the handler class via the constructor. As soon as a &lt;em&gt;ContactNameUpdatedEvent&lt;/em&gt; gets published in the application, the &lt;em&gt;Handle&lt;/em&gt; method is invoked and uses the event repository instance to create a notification object – which ultimately lands in the list of tracked objects in the &lt;em&gt;IContainerContext&lt;/em&gt; object and therefore &lt;strong&gt;will become part of the objects that will be saved in the same transactional batch&lt;/strong&gt; to Cosmos DB.&lt;/p&gt;

&lt;p&gt;Here are the important parts of the implementation of &lt;em&gt;IContainerContext&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;
&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IDataObject&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Entity&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; 
    &lt;span class="nf"&gt;SaveInTransactionalBatchAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IDataObject&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Entity&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;dObjs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;CancellationToken&lt;/span&gt; &lt;span class="n"&gt;cancellationToken&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dObjs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Count&lt;/span&gt; &lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;pk&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;PartitionKey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dObjs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;PartitionKey&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;tb&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateTransactionalBatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pk&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="n"&gt;dObjs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ForEach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;TransactionalBatchItemRequestOptions&lt;/span&gt; &lt;span class="n"&gt;tro&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(!&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;IsNullOrWhiteSpace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Etag&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
                &lt;span class="n"&gt;tro&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;TransactionalBatchItemRequestOptions&lt;/span&gt; 
                &lt;span class="p"&gt;{&lt;/span&gt; 
                    &lt;span class="n"&gt;IfMatchEtag&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Etag&lt;/span&gt; 
                &lt;span class="p"&gt;};&lt;/span&gt;

            &lt;span class="k"&gt;switch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;State&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="n"&gt;EntityState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Created&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                    &lt;span class="n"&gt;tb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateItem&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                    &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="n"&gt;EntityState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Updated&lt;/span&gt; &lt;span class="k"&gt;or&lt;/span&gt; &lt;span class="n"&gt;EntityState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Deleted&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                    &lt;span class="n"&gt;tb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ReplaceItem&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tro&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                    &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;tbResult&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;tb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ExecuteAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cancellationToken&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;check&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;codes&lt;/span&gt; &lt;span class="n"&gt;etc&lt;/span&gt;&lt;span class="p"&gt;.]&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IDataObject&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Entity&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;gt;(&lt;/span&gt;&lt;span class="n"&gt;dObjs&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="c1"&gt;// reset internal list&lt;/span&gt;
    &lt;span class="n"&gt;DataObjects&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Clear&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  Let Cosmos DB Shine
&lt;/h3&gt;

&lt;p&gt;We now have everything in place: any time a domain object is created or modified, it records a corresponding domain event. That event is added to the same container context (via a separate repository) for object tracking, and in the end both the domain object and its events are saved to Cosmos DB in a single transactional batch.&lt;/p&gt;

&lt;p&gt;This is how the process works so far (e.g. for updating the name on a contact object):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;em&gt;SetName&lt;/em&gt; is invoked on the contact object&lt;/li&gt;
&lt;li&gt;Event &lt;em&gt;ContactNameUpdated&lt;/em&gt; is added to the list of events in the &lt;strong&gt;domain object&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The ContactRepository &lt;em&gt;Update&lt;/em&gt; method is invoked, which adds the domain object to the container context. &lt;strong&gt;The object is now “tracked”&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;CommitAsync&lt;/em&gt; is invoked on the UnitOfWork object which in turn calls &lt;em&gt;SaveChangesAsync&lt;/em&gt; on the container context&lt;/li&gt;
&lt;li&gt;Within &lt;em&gt;SaveChangesAsync&lt;/em&gt;, all events collected in the domain object are published via &lt;em&gt;MediatR&lt;/em&gt; and added via the EventsRepository to the &lt;strong&gt;same container context&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;After that, in &lt;em&gt;SaveChangesAsync&lt;/em&gt;, a &lt;em&gt;TransactionalBatch&lt;/em&gt; is created which will hold both the contact object and the event&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;TransactionalBatch&lt;/em&gt; is executed and the data is committed to Cosmos DB&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;SaveChangesAsync&lt;/em&gt; and &lt;em&gt;CommitAsync&lt;/em&gt; successfully return&lt;/li&gt;
&lt;li&gt;End of the update process&lt;/li&gt;
&lt;/ol&gt;
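&lt;p&gt;The steps above can be condensed into a small, self-contained sketch. The types below only mirror the article’s names (the real sample’s &lt;em&gt;Contact&lt;/em&gt;, repository and container context are richer); they are illustrative stand-ins, not the actual implementation:&lt;/p&gt;

```csharp
using System;
using System.Collections.Generic;

// Illustrative stand-ins for the article's types (not the real sample code).
public record ContactNameUpdated(Guid ContactId, string First, string Last);

public class Contact
{
    public Guid Id { get; } = Guid.NewGuid();
    public string First { get; private set; } = "";
    public string Last { get; private set; } = "";
    public List<ContactNameUpdated> DomainEvents { get; } = new();

    // Steps 1+2: the mutation records a domain event on the object itself.
    public void SetName(string first, string last)
    {
        First = first;
        Last = last;
        DomainEvents.Add(new ContactNameUpdated(Id, first, last));
    }
}

public class ContainerContext
{
    public List<object> Tracked { get; } = new();

    // Step 3: the repository adds the object here; it is now "tracked".
    public void Track(object o) => Tracked.Add(o);

    // Steps 5-7: the events join the same context, and everything would then
    // be committed together in one TransactionalBatch.
    public int SaveChanges(Contact c)
    {
        foreach (var e in c.DomainEvents) Tracked.Add(e);
        return Tracked.Count;
    }
}
```

&lt;p&gt;Tracking one contact and committing after a single &lt;em&gt;SetName&lt;/em&gt; call leaves two items in the context: the contact document and its &lt;em&gt;ContactNameUpdated&lt;/em&gt; event.&lt;/p&gt;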




&lt;p&gt;Let’s now take a closer look at how each type gets persisted to a container. In the code snippets above, you saw that objects saved to Cosmos DB are wrapped in a &lt;em&gt;DataObject&lt;/em&gt; instance. Such an object provides common properties like &lt;em&gt;Id&lt;/em&gt;, &lt;em&gt;PartitionKey&lt;/em&gt;, &lt;em&gt;Type&lt;/em&gt;, &lt;em&gt;State&lt;/em&gt; (e.g. &lt;em&gt;Created&lt;/em&gt;, &lt;em&gt;Updated&lt;/em&gt; – not persisted to Cosmos DB), &lt;em&gt;Etag&lt;/em&gt; (for optimistic concurrency control [&lt;a href="https://docs.microsoft.com/en-us/azure/cosmos-db/sql/database-transactions-optimistic-concurrency#optimistic-concurrency-control" rel="noopener noreferrer"&gt;2&lt;/a&gt;]), &lt;em&gt;TTL&lt;/em&gt; (Time-To-Live, for automatic cleanup of old documents [&lt;a href="https://docs.microsoft.com/en-us/azure/cosmos-db/sql/time-to-live" rel="noopener noreferrer"&gt;3&lt;/a&gt;]) and – of course – the &lt;em&gt;Data&lt;/em&gt; itself. All of this is defined in a generic interface called &lt;em&gt;IDataObject&lt;/em&gt; that is used by the repositories and the container context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;IDataObject&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;out&lt;/span&gt; &lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;where&lt;/span&gt; &lt;span class="n"&gt;T&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Entity&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;Id&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;PartitionKey&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;Type&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;T&lt;/span&gt; &lt;span class="n"&gt;Data&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;Etag&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;Ttl&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;EntityState&lt;/span&gt; &lt;span class="n"&gt;State&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
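&lt;p&gt;A minimal class implementing this interface could look like the following sketch (it relies on the &lt;em&gt;IDataObject&lt;/em&gt; and &lt;em&gt;Entity&lt;/em&gt; types above; the &lt;em&gt;Newtonsoft.Json&lt;/em&gt; property names are chosen here to match the persisted documents shown below and may differ from the actual sample):&lt;/p&gt;

```csharp
using Newtonsoft.Json;

// Sketch of a concrete DataObject<T>. The JSON property names mirror the
// persisted documents ("id", "partitionKey", "type", "data", "_etag", "ttl").
// State is excluded from serialization, as described in the text.
public class DataObject<T> : IDataObject<T> where T : Entity
{
    [JsonProperty("id")]
    public string Id { get; init; }

    [JsonProperty("partitionKey")]
    public string PartitionKey { get; init; }

    [JsonProperty("type")]
    public string Type { get; init; }

    [JsonProperty("data")]
    public T Data { get; init; }

    [JsonProperty("_etag")]
    public string Etag { get; set; }

    [JsonProperty("ttl")]
    public int Ttl { get; init; }

    [JsonIgnore] // not persisted to Cosmos DB
    public EntityState State { get; set; }
}
```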



&lt;p&gt;Objects wrapped in a &lt;em&gt;DataObject&lt;/em&gt; instance and saved to the database will then look like this (&lt;em&gt;Contact&lt;/em&gt; and &lt;em&gt;ContactNameUpdatedEvent&lt;/em&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Contact&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;document/object&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;after&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;creation&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"b5e2e7aa-4982-4735-9422-c39a7c4af5c2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"partitionKey"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"b5e2e7aa-4982-4735-9422-c39a7c4af5c2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"contact"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"firstName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Bertram"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"lastName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Gilfoyle"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"This is a contact"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bg@piedpiper.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"company"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"companyName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Pied Piper"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"street"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Street"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"houseNumber"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1a"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"postalCode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"092821"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"city"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Palo Alto"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"country"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"US"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"createdAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2021-09-22T11:07:37.3022907+02:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"deleted"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"b5e2e7aa-4982-4735-9422-c39a7c4af5c2"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ttl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;-1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"_etag"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;180014cc-0000-1500-0000-614455330000&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"_ts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1631868211&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;after&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;setting&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;a&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;new&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;this&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;is&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;how&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;an&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;event&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;document&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;looks&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;like&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"d6a5f4b2-84c3-4ac7-ae22-6f4025ba9ca0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"partitionKey"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"b5e2e7aa-4982-4735-9422-c39a7c4af5c2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"domainEvent"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"firstName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Dinesh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"lastName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Chugtai"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"contactId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"b5e2e7aa-4982-4735-9422-c39a7c4af5c2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ContactNameUpdatedEvent"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"d6a5f4b2-84c3-4ac7-ae22-6f4025ba9ca0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"createdAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2021-09-17T10:50:01.1692448+02:00"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ttl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;120&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"_etag"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;18005bce-0000-1500-0000-614456b80000&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"_ts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1631868600&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see that the &lt;em&gt;Contact&lt;/em&gt; and &lt;em&gt;ContactNameUpdatedEvent&lt;/em&gt; (type: &lt;em&gt;domainEvent&lt;/em&gt;) documents share the same partition key – hence both documents are persisted in the same logical partition. If you wanted to keep a change log of a contact object, you could even keep all the events in the container for good – but that is not the goal of this sample. Here, the purpose of persisting both object types is to make sure that &lt;strong&gt;a)&lt;/strong&gt; events never get lost and &lt;strong&gt;b)&lt;/strong&gt; they eventually get published to “the outside world” (e.g. to an Azure Service Bus topic).&lt;/p&gt;




&lt;p&gt;So, to finally read the stream of events and send them to a message broker, let’s use one of the “unsung hero features” [&lt;a href="https://devblogs.microsoft.com/cosmosdb/change-feed-unsung-hero-of-azure-cosmos-db/" rel="noopener noreferrer"&gt;4&lt;/a&gt;] of Cosmos DB: &lt;strong&gt;Change Feed&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The Change Feed is a persistent log of changes in your container. It runs in the background and keeps track of modifications in the order they occurred – per logical partition. So, when you read the Change Feed, you are guaranteed to read the changes of documents with a certain partition key in the correct order. This is mandatory for our scenario and is precisely why we put both the contact and the corresponding event documents in the same partition: when we read the Change Feed, we can be sure that we never get an “updated” event before a “created” event.&lt;/p&gt;

&lt;p&gt;So, how do we read the Change Feed? You have multiple options. The most convenient way is to use an Azure Function with a Cosmos DB trigger: everything is in place, and if you want to host that part of the application on the Azure Functions service, you are good to go. Another option is the Change Feed Processor library [&lt;a href="https://docs.microsoft.com/en-us/azure/cosmos-db/sql/change-feed-processor" rel="noopener noreferrer"&gt;5&lt;/a&gt;], which lets you integrate Change Feed processing into your Web API, e.g. as a background service (via the &lt;em&gt;IHostedService&lt;/em&gt; interface). In this sample, we simply create a console application that uses the abstract class &lt;em&gt;BackgroundService&lt;/em&gt; to implement a long-running background task in .NET Core.&lt;/p&gt;

&lt;p&gt;Now, to receive the changes from the Cosmos DB Change Feed, you need to instantiate a &lt;em&gt;ChangeFeedProcessor&lt;/em&gt; object, register a handler method for message processing and start listening for changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;
&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ChangeFeedProcessor&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; 
    &lt;span class="nf"&gt;StartChangeFeedProcessorAsync&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;changeFeedProcessor&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_container&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetChangeFeedProcessorBuilder&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ExpandoObject&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;
            &lt;span class="n"&gt;_configuration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetSection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Cosmos"&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="s"&gt;"ProcessorName"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="n"&gt;HandleChangesAsync&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WithInstanceName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Environment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MachineName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WithLeaseContainer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_leaseContainer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WithMaxItems&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;25&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WithStartTime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;DateTime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;2000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;DateTimeKind&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Utc&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WithPollInterval&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TimeSpan&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;FromSeconds&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;LogInformation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Starting Change Feed Processor..."&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;changeFeedProcessor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;StartAsync&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;LogInformation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Change Feed Processor started.  
&lt;/span&gt;        &lt;span class="n"&gt;Waiting&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;messages&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;arrive&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="s"&gt;");
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;changeFeedProcessor&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
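&lt;p&gt;Wiring this method into the console application’s background service could look roughly like this (a sketch; the class name is hypothetical and it assumes the &lt;em&gt;StartChangeFeedProcessorAsync&lt;/em&gt; method above lives in the same partial class):&lt;/p&gt;

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Extensions.Hosting;

// Sketch: hosting the Change Feed Processor in a BackgroundService.
// "EventsProcessorService" is a hypothetical name; StartChangeFeedProcessorAsync
// is assumed to be defined in the other part of this partial class.
public partial class EventsProcessorService : BackgroundService
{
    private ChangeFeedProcessor? _processor;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        _processor = await StartChangeFeedProcessorAsync();
        // Keep the worker alive until the host signals shutdown.
        await Task.Delay(Timeout.Infinite, stoppingToken);
    }

    public override async Task StopAsync(CancellationToken cancellationToken)
    {
        if (_processor is not null)
            await _processor.StopAsync(); // release the leases gracefully
        await base.StopAsync(cancellationToken);
    }
}
```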



&lt;p&gt;The handler method is then responsible for processing the messages. In this sample, we keep things simple and publish the events to an Azure Service Bus topic that is partitioned for scalability and has the “duplicate detection” feature enabled [&lt;a href="https://docs.microsoft.com/en-us/azure/service-bus-messaging/duplicate-detection" rel="noopener noreferrer"&gt;6&lt;/a&gt;] – more on that in a second. From there, any service interested in changes to &lt;em&gt;Contact&lt;/em&gt; objects can subscribe to that topic and receive and process those changes in its own context.&lt;/p&gt;
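&lt;p&gt;On the consuming side, a subscriber could read the topic session by session to preserve per-contact ordering. A rough sketch using the &lt;em&gt;Azure.Messaging.ServiceBus&lt;/em&gt; package (connection string, topic and subscription names are placeholders; error handling omitted):&lt;/p&gt;

```csharp
using System;
using Azure.Messaging.ServiceBus;

// Sketch: consume one session, i.e. one contact's ordered event stream.
await using var client = new ServiceBusClient("<connection-string>");

// Locks the next session that has pending messages; all messages within it
// share the same SessionId (here: the contact's partition key).
ServiceBusSessionReceiver receiver =
    await client.AcceptNextSessionAsync("<topic-name>", "<subscription-name>");

ServiceBusReceivedMessage message;
while ((message = await receiver.ReceiveMessageAsync(TimeSpan.FromSeconds(5))) is not null)
{
    Console.WriteLine($"{message.Subject}: {message.Body}");
    await receiver.CompleteMessageAsync(message);
}
```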

&lt;p&gt;You’ll also see that the Service Bus messages have a &lt;em&gt;SessionId&lt;/em&gt; property. By using sessions in Azure Service Bus, we guarantee that the order of the messages is preserved (FIFO) [&lt;a href="https://docs.microsoft.com/en-us/azure/service-bus-messaging/message-sessions" rel="noopener noreferrer"&gt;7&lt;/a&gt;] – which is necessary for our use case. Here is the snippet that handles messages from the Change Feed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;
&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt; &lt;span class="nf"&gt;HandleChangesAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;IReadOnlyCollection&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ExpandoObject&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;changes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;CancellationToken&lt;/span&gt; &lt;span class="n"&gt;cancellationToken&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;LogInformation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;$"Received &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;changes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Count&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s"&gt; document(s)."&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;eventsCount&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="n"&gt;Dictionary&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ServiceBusMessage&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;partitionedMessages&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;foreach&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;document&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;changes&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kt"&gt;dynamic&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(!((&lt;/span&gt;&lt;span class="n"&gt;IDictionary&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;object&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;)&lt;/span&gt;&lt;span class="n"&gt;document&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;ContainsKey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;||&lt;/span&gt;
            &lt;span class="p"&gt;!((&lt;/span&gt;&lt;span class="n"&gt;IDictionary&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;object&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;)&lt;/span&gt;&lt;span class="n"&gt;document&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;ContainsKey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;continue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// unknown doc type&lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;type&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="n"&gt;EVENT_TYPE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// domainEvent&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;JsonConvert&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;SerializeObject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;sbMessage&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ServiceBusMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;ContentType&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"application/json"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Subject&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;MessageId&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;PartitionKey&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;partitionKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;SessionId&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;partitionKey&lt;/span&gt;
            &lt;span class="p"&gt;};&lt;/span&gt;

            &lt;span class="c1"&gt;// Create message batch per partitionKey&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;partitionedMessages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ContainsKey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;partitionKey&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;partitionedMessages&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;sbMessage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;PartitionKey&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sbMessage&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="k"&gt;else&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;partitionedMessages&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;sbMessage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;PartitionKey&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ServiceBusMessage&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;sbMessage&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;

            &lt;span class="n"&gt;eventsCount&lt;/span&gt;&lt;span class="p"&gt;++;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;partitionedMessages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Count&lt;/span&gt; &lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;LogInformation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;$"Processing &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;eventsCount&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s"&gt; event(s) in &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;partitionedMessages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Count&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s"&gt; partition(s)."&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="c1"&gt;// Loop over each partition&lt;/span&gt;
        &lt;span class="k"&gt;foreach&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;partition&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;partitionedMessages&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;// Create batch for partition&lt;/span&gt;
            &lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;var&lt;/span&gt; &lt;span class="n"&gt;messageBatch&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt;
                &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;_topicSender&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateMessageBatchAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cancellationToken&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="k"&gt;foreach&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;partition&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(!&lt;/span&gt;&lt;span class="n"&gt;messageBatch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;TryAddMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
                    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

            &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;LogInformation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="s"&gt;$"Sending &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;messageBatch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Count&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s"&gt; event(s) to Service Bus. PartitionId: &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;partition&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

            &lt;span class="k"&gt;try&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;_topicSender&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;SendMessagesAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;messageBatch&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cancellationToken&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Exception&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;LogError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                &lt;span class="k"&gt;throw&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;LogInformation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"No event documents in change feed batch. Waiting for new messages to arrive."&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What happens when errors occur?
&lt;/h3&gt;

&lt;p&gt;If an error occurs while processing the changes, the Change Feed library will restart reading messages at the position where it successfully processed the last batch. That means if we have already processed 10.000 messages, are now in the middle of working on messages 10.001 to 10.025 (we read documents in batches of 25 – see the initialization of the &lt;em&gt;ChangeFeedProcessor&lt;/em&gt; object) and an error happens, we can simply restart our process and it will pick up its work at position 10.001. The library keeps track of what has already been processed via a &lt;em&gt;Leases&lt;/em&gt; container.&lt;/p&gt;

&lt;p&gt;Of course, it can happen that – out of the 25 messages that are then reprocessed – our application has already sent 10 to Azure Service Bus. This is where the deduplication feature saves us. Azure Service Bus checks whether a message has already been added to a topic within a specified time window, based on the application-controlled &lt;em&gt;MessageId&lt;/em&gt; property of a message. That property is set to the &lt;em&gt;Id&lt;/em&gt; of the event document, meaning Azure Service Bus will ignore and drop a message if the corresponding event has already been successfully added to our topic.&lt;/p&gt;
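&lt;p&gt;To make the mechanics concrete, here is a minimal, language-agnostic sketch in Python – a toy model of this behavior, not the actual Azure Service Bus SDK (class and variable names are made up) – of duplicate detection keyed on the &lt;em&gt;MessageId&lt;/em&gt; within a time window:&lt;/p&gt;

```python
import time


class DedupTopic:
    """Toy model of Service Bus duplicate detection: a send is dropped
    if the same message_id was already accepted within the window."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.seen = {}       # message_id -> time of acceptance
        self.accepted = []   # bodies that made it into the topic

    def send(self, message_id, body, now=None):
        now = time.time() if now is None else now
        last = self.seen.get(message_id)
        if last is not None and now - last < self.window:
            return False     # duplicate within the window -> dropped
        self.seen[message_id] = now
        self.accepted.append(body)
        return True


topic = DedupTopic(window_seconds=600)
assert topic.send("event-1", "created", now=0) is True
assert topic.send("event-1", "created", now=10) is False  # reprocessed event is dropped
assert topic.send("event-2", "updated", now=10) is True
```

&lt;p&gt;The reprocessed batch can therefore safely resend all 25 messages – only the ones not yet seen within the detection window actually land in the topic.&lt;/p&gt;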

&lt;p&gt;In a typical “Transactional Outbox Pattern” implementation, we would now update the processed events and set a &lt;em&gt;Processed&lt;/em&gt; property to &lt;code&gt;true&lt;/code&gt;, indicating that we successfully published them – and every now and then, the events would be deleted to keep only the most recent records/documents. Of course, this could be handled manually in the handler method.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But we are using Cosmos DB!&lt;/strong&gt; We are already keeping track of events that were processed by using the Change Feed (in combination with the &lt;em&gt;Leases&lt;/em&gt; container). And to periodically clean up the events, we can leverage another useful feature of Cosmos DB: Time-To-Live (&lt;em&gt;TTL&lt;/em&gt;) on documents [3]. Cosmos DB can automatically delete documents based on a &lt;em&gt;TTL&lt;/em&gt; property added to a document – a timespan in seconds. The service constantly checks your container for documents with such a &lt;em&gt;TTL&lt;/em&gt; property, and as soon as it has expired, Cosmos DB removes the document from the container (by the way, this happens in a background task using the remaining Request Units – it will never “eat up” RUs that you need to handle user requests).&lt;/p&gt;




&lt;p&gt;So, when all the services work as expected, events will be processed and published fast – meaning within seconds. If we have an error in Cosmos DB, we do not even publish any events, because nothing (neither the business object nor the corresponding events) can be saved at all – users will simply receive an error.&lt;/p&gt;

&lt;p&gt;The only case we need to consider – and the reason to set an appropriate &lt;em&gt;TTL&lt;/em&gt; value on our &lt;em&gt;DomainEvent&lt;/em&gt; documents – is when the background worker application (Change Feed Processor) or Azure Service Bus is unavailable.&lt;/p&gt;

&lt;p&gt;How much time do we grant both components to eventually publish the events? Let’s have a look at the SLA for Azure Service Bus: Microsoft guarantees 99.9% uptime. That translates to a downtime of ~9h per year or ~44min per month. Of course, we should give our service considerably more time than the duration of such a possible outage to process changes. Furthermore, we should also consider that our Change Feed Processor worker service can be down as well. To be on the safe side, I’d pick 10 days (&lt;em&gt;TTL&lt;/em&gt; expects seconds, so a value of &lt;em&gt;864.000&lt;/em&gt;) for a production environment. All components involved will then have more than a week to process/publish changes within our application. That should be enough even in the case of a “disaster” like an Azure region outage. And after 10 days, the container holding the events will be cleaned up automatically by Cosmos DB.&lt;/p&gt;
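&lt;p&gt;The &lt;em&gt;TTL&lt;/em&gt; arithmetic itself is trivial; just to double-check the value (a plain Python snippet, variable names are illustrative):&lt;/p&gt;

```python
# 10 days expressed in seconds - the value to put into the "ttl"
# property of the DomainEvent documents
grace_days = 10
ttl_seconds = grace_days * 24 * 60 * 60
print(ttl_seconds)  # 864000
```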

&lt;p&gt;&lt;strong&gt;Sidenote:&lt;/strong&gt; As mentioned before, this cleanup is optional. If you want to keep a log of all the changes for a &lt;em&gt;Contact&lt;/em&gt; object, you can also set the &lt;em&gt;TTL&lt;/em&gt; to &lt;em&gt;-1&lt;/em&gt; on event documents and Cosmos DB won’t purge them at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages
&lt;/h2&gt;

&lt;p&gt;As a result, we now have a service that manages &lt;em&gt;Contact&lt;/em&gt; business objects. Every time such a contact is added or modified, it raises a corresponding event, which is published within the application right before the actual object is saved. These events are picked up by an event handler in our service and added to the same “database context”, which saves both the business object and the events to Cosmos DB in the same transaction. Hence, we can guarantee that no event will ever be lost. We then leverage the Cosmos DB Change Feed to publish the tracked events in the background to an Azure Service Bus topic via a background worker service that makes use of the &lt;em&gt;ChangeFeedProcessor&lt;/em&gt; library.&lt;/p&gt;

&lt;p&gt;So, in the end, this is not a “traditional” implementation of the “Transactional Outbox Pattern”, because we leverage some features of Cosmos DB that make things easier for a developer.&lt;/p&gt;

&lt;p&gt;What are the advantages of this solution then? We have…&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;guaranteed delivery of events&lt;/li&gt;
&lt;li&gt;guaranteed ordering of events and deduplication via Azure Service Bus&lt;/li&gt;
&lt;li&gt;no need to maintain an extra &lt;em&gt;Processed&lt;/em&gt; property that indicates a successful processing of an event document – the solution makes use of Time-To-Live and Change Feed features in Cosmos DB&lt;/li&gt;
&lt;li&gt;error-proof processing of messages via the ChangeFeedProcessor (or an Azure Function)&lt;/li&gt;
&lt;li&gt;optional: you can add multiple Change Feed processors – each one maintaining its own “pointer” enabling additional scenarios&lt;/li&gt;
&lt;li&gt;if you use a multi-master/multi-region Cosmos DB account, you’ll even get “five nines” (99.999%) availability – which is outstanding!&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This sample demonstrated how to implement the &lt;strong&gt;Transactional Outbox Pattern&lt;/strong&gt; with Azure Cosmos DB. We made use of some of the &lt;em&gt;hero features&lt;/em&gt; of Cosmos DB to simplify the implementation of the pattern.&lt;/p&gt;

&lt;p&gt;You can find the source code for the sample application including the change feed processor on the &lt;a href="https://github.com/mspnp/transactional-outbox-pattern" rel="noopener noreferrer"&gt;Microsoft Patterns &amp;amp; Practices repo&lt;/a&gt;. There, you’ll also find a Bicep deployment script that creates all the necessary Azure resources for you automatically.&lt;/p&gt;

&lt;p&gt;Keep in mind that this is not a “production ready” implementation, but it hopefully can act as an inspiration for your next adventure.&lt;/p&gt;

&lt;p&gt;Happy hacking! 😊&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Implement the Transactional Outbox Pattern with @AzureCosmosDB and @Azure #ServiceBus #distributed #pattern #microservices&lt;/p&gt;

&lt;p&gt;&lt;a href="http://twitter.com/share?&amp;amp;text=Implement%20the%20Transactional%20Outbox%20Pattern%20with%20%40AzureCosmosDB%20and%20%40Azure%20%23ServiceBus%20%23distributed%20%23pattern%20%23microservices&amp;amp;url=https://partlycloudy.blog/2021/08/30/using-the-transactional-outbox-pattern-with-azure-cosmos-db-for-guaranteed-delivery-of-domain-events/&amp;amp;via=chrisdennig" rel="noopener noreferrer"&gt;Tweet&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[1] – Jimmy Bogard – &lt;a href="https://lostechies.com/jimmybogard/2014/05/13/a-better-domain-events-pattern/" rel="noopener noreferrer"&gt;A better domain events pattern&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;[2] – Cosmos DB optimistic concurrency control – &lt;a href="https://docs.microsoft.com/en-us/azure/cosmos-db/sql/database-transactions-optimistic-concurrency#optimistic-concurrency-control" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/cosmos-db/sql/database-transactions-optimistic-concurrency#optimistic-concurrency-control&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;[3] – Cosmos DB Time-To-Live feature – &lt;a href="https://docs.microsoft.com/en-us/azure/cosmos-db/sql/time-to-live" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/cosmos-db/sql/time-to-live&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;[4] – Change Feed – Unsung Hero of Azure Cosmos DB – &lt;a href="https://devblogs.microsoft.com/cosmosdb/change-feed-unsung-hero-of-azure-cosmos-db/" rel="noopener noreferrer"&gt;https://devblogs.microsoft.com/cosmosdb/change-feed-unsung-hero-of-azure-cosmos-db/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;[5] – Change Feed Processor library – &lt;a href="https://docs.microsoft.com/en-us/azure/cosmos-db/sql/change-feed-processor" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/cosmos-db/sql/change-feed-processor&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;[6] – Azure Service Bus – Deduplication – &lt;a href="https://docs.microsoft.com/en-us/azure/service-bus-messaging/duplicate-detection" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/service-bus-messaging/duplicate-detection&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;[7] – Azure Service Bus – FIFO – &lt;a href="https://docs.microsoft.com/en-us/azure/service-bus-messaging/message-sessions" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/service-bus-messaging/message-sessions&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Header Image credits: &lt;a href="https://unsplash.com/@pawel_czerwinski" rel="noopener noreferrer"&gt;https://unsplash.com/@pawel_czerwinski&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Additional Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Cosmos DB Transactional Batch – &lt;a href="https://docs.microsoft.com/en-us/azure/cosmos-db/transactional-batch" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/cosmos-db/transactional-batch&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Mediator implementation by Jimmy Bogard – &lt;a href="https://github.com/jbogard/MediatR" rel="noopener noreferrer"&gt;MediatR&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Domain-Driven-Design – &lt;a href="https://docs.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Source Code
&lt;/h2&gt;

&lt;p&gt;The sample source code for this article can be found on GitHub: &lt;a href="https://github.com/mspnp/transactional-outbox-pattern" rel="noopener noreferrer"&gt;https://github.com/mspnp/transactional-outbox-pattern&lt;/a&gt;.&lt;/p&gt;




</description>
      <category>azure</category>
      <category>cosmosdb</category>
      <category>dotnet</category>
      <category>ddd</category>
    </item>
    <item>
      <title>Getting started with KrakenD on Kubernetes / AKS</title>
      <dc:creator>Christian Dennig</dc:creator>
      <pubDate>Wed, 17 Feb 2021 13:43:02 +0000</pubDate>
      <link>https://forem.com/cdennig/getting-started-with-krakend-on-kubernetes-aks-7hh</link>
      <guid>https://forem.com/cdennig/getting-started-with-krakend-on-kubernetes-aks-7hh</guid>
      <description>&lt;p&gt;If you develop applications in a cloud-native environment and, for example, rely on the “microservices” architecture pattern, you will sooner or later have to deal with the issue of “API gateways”. There is a wide range of offerings available “in the wild”, both as managed versions from various cloud providers, as well as from the open source domain. Many often think of the well-known OSS projects such as “Kong”, “tyk” or “gloo” when it comes to API gateways. The same is true for me. However, when I took a closer look at the projects, I wasn’t always satisfied with the feature set. I was always looking for product that can be hosted in your Kubernetes cluster, is flexible and easy to configure (“desired state”) and offers good performance. During my work as a cloud solution architect at Microsoft, I became aware of the OSS API Gateway “KrakenD” during a project about 1.5 years ago.&lt;/p&gt;

&lt;h2&gt;
  
  
  KrakenD API Gateway
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://krakend.io" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F02%2Fkrakend-api-gateway.png%3Fw%3D300" alt="krakend logo"&gt;&lt;/a&gt;KrakenD logo&lt;/p&gt;

&lt;p&gt;&lt;a href="https://krakend.io" rel="noopener noreferrer"&gt;KrakenD&lt;/a&gt; is an API gateway implemented in Go that relies on the ultra-fast &lt;a href="https://github.com/gin-gonic/gin" rel="noopener noreferrer"&gt;GIN framework&lt;/a&gt; under the hood. It offers an incredible number of features out-of-the-box that can be used to implement about any gateway requirement:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;request proxying and aggregation (merge multiple responses)&lt;/li&gt;
&lt;li&gt;decoding (from JSON, XML…)&lt;/li&gt;
&lt;li&gt;filtering (allow- and block-lists)&lt;/li&gt;
&lt;li&gt;request &amp;amp; response transformation&lt;/li&gt;
&lt;li&gt;caching&lt;/li&gt;
&lt;li&gt;circuit breaker pattern via configuration, timeouts…&lt;/li&gt;
&lt;li&gt;protocol translation&lt;/li&gt;
&lt;li&gt;JWT validation / signing&lt;/li&gt;
&lt;li&gt;SSL&lt;/li&gt;
&lt;li&gt;OAuth2&lt;/li&gt;
&lt;li&gt;Prometheus/OpenCensus integration&lt;/li&gt;
&lt;li&gt;…&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you can see, this is quite an extensive list of features – and it is still far from complete. On their homepage and in the documentation, you can find much more information about what the product offers in its entirety.&lt;/p&gt;
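&lt;p&gt;To give a first impression of the “desired state” configuration style, here is a minimal &lt;code&gt;krakend.json&lt;/code&gt; sketch that proxies a single endpoint to one backend service (the host and paths are hypothetical placeholders, not taken from this article’s setup):&lt;/p&gt;

```json
{
  "version": 2,
  "name": "sample-gateway",
  "port": 8080,
  "endpoints": [
    {
      "endpoint": "/contacts",
      "method": "GET",
      "backend": [
        {
          "url_pattern": "/api/contacts",
          "host": ["http://contactsapi:8080"]
        }
      ]
    }
  ]
}
```

&lt;p&gt;Everything – routing, aggregation, transformation – is declared in this one file, which is what makes the gateway easy to version and roll out as a ConfigMap in Kubernetes.&lt;/p&gt;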

&lt;p&gt;The creators also recently published an Azure Marketplace offer, a container image that you can directly push/integrate into your Azure Container Registry – so I thought it’s an appropriate time to publish a blog post about how to get started with KrakenD on Azure Kubernetes Service (AKS).&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with KrakenD on AKS
&lt;/h2&gt;

&lt;p&gt;Ok, let’s get started then. First, we need a Kubernetes cluster on which we can roll out a sample application that we want to expose via KrakenD. So, as with all Azure deployments, let’s start with a resource group and then add a corresponding AKS service. We will be using the &lt;a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli" rel="noopener noreferrer"&gt;Azure Command Line Interface&lt;/a&gt; for this, but you can also create the cluster via the Azure Portal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# create an Azure resource group&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;az group create &lt;span class="nt"&gt;--name&lt;/span&gt; krakend-aks-rg &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="nt"&gt;--location&lt;/span&gt; westeurope

&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"id"&lt;/span&gt;: &lt;span class="s2"&gt;"/subscriptions/xxx/resourceGroups/krakend-aks-rg"&lt;/span&gt;,
  &lt;span class="s2"&gt;"location"&lt;/span&gt;: &lt;span class="s2"&gt;"westeurope"&lt;/span&gt;,
  &lt;span class="s2"&gt;"managedBy"&lt;/span&gt;: null,
  &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"krakend-aks-rg"&lt;/span&gt;,
  &lt;span class="s2"&gt;"properties"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"provisioningState"&lt;/span&gt;: &lt;span class="s2"&gt;"Succeeded"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;,
  &lt;span class="s2"&gt;"tags"&lt;/span&gt;: null,
  &lt;span class="s2"&gt;"type"&lt;/span&gt;: &lt;span class="s2"&gt;"Microsoft.Resources/resourceGroups"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# create a Kubernetes cluster&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;az aks create &lt;span class="nt"&gt;-g&lt;/span&gt; krakend-aks-rg &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="nt"&gt;-n&lt;/span&gt; krakend-aks &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="nt"&gt;--enable-managed-identity&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="nt"&gt;--generate-ssh-keys&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a few minutes, the cluster has been created and we can download the access credentials to our workstation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;az aks get-credentials &lt;span class="nt"&gt;-g&lt;/span&gt; krakend-aks-rg &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="nt"&gt;-n&lt;/span&gt; krakend-aks 

&lt;span class="c"&gt;# in case you don't have kubectl on your &lt;/span&gt;
&lt;span class="c"&gt;# machine, there's a handy installer coming with &lt;/span&gt;
&lt;span class="c"&gt;# the Azure CLI:&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;az aks install-cli

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s check if we have access to the cluster…&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes

NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-34625029-vmss000000   Ready    agent   24h   v1.18.14
aks-nodepool1-34625029-vmss000001   Ready    agent   24h   v1.18.14
aks-nodepool1-34625029-vmss000002   Ready    agent   24h   v1.18.14

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looks great and we are all set from an infrastructure perspective. Let’s add a service that we can expose via KrakenD.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add a sample service
&lt;/h2&gt;

&lt;p&gt;We are now going to deploy a very simple service implemented in .NET Core that can create and store “contact” objects in a Microsoft SQL Server 2019 (Linux) instance running – for convenience – on the same Kubernetes cluster as a single container/pod. After the services have been deployed, the in-cluster situation looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F02%2Fach-no-gw.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F02%2Fach-no-gw.png%3Fw%3D1024"&gt;&lt;/a&gt;In-cluster architecture /wo KrakenD&lt;/p&gt;

&lt;p&gt;Let’s deploy everything. First, the MS SQL server with its service definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="c1"&gt;# content of sql-server.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mssql-deployment&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mssql&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mssql&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;terminationGracePeriodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;
      &lt;span class="na"&gt;securityContext&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;fsGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10001&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mssql&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mcr.microsoft.com/mssql/server:2019-latest&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1433&lt;/span&gt;
          &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MSSQL_PID&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Developer'&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ACCEPT_EULA&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Y'&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SA_PASSWORD&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Ch@ngeMe!23'&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mssqlsvr&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mssql&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1433&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1433&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a file called &lt;code&gt;sql-server.yaml&lt;/code&gt; and apply it to the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; sql-server.yaml

deployment.apps/mssql-deployment created
service/mssqlsvr created

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Second, the contacts API plus a service definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="c1"&gt;# content of contacts-app.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ca-deploy&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;scmcontacts&lt;/span&gt;
    &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;contactsapi&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RollingUpdate&lt;/span&gt;
    &lt;span class="na"&gt;rollingUpdate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;maxSurge&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;maxUnavailable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;minReadySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;revisionHistoryLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;scmcontacts&lt;/span&gt;
      &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;contactsapi&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;scmcontacts&lt;/span&gt;
        &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;contactsapi&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;automountServiceAccountToken&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application&lt;/span&gt;
          &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;64Mi'&lt;/span&gt;
              &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;100m'&lt;/span&gt;
            &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;256Mi'&lt;/span&gt;
              &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;500m'&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/azuredevcollege/adc-contacts-api:3.0&lt;/span&gt;
          &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConnectionStrings__DefaultConnectionString&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Server=tcp:mssqlsvr,1433;Initial&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Catalog=scmcontactsdb;Persist&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Security&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Info=False;User&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ID=sa;Password=Ch@ngeMe!23;MultipleActiveResultSets=False;Encrypt=False;TrustServerCertificate=True;Connection&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Timeout=30;"&lt;/span&gt;
          &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5000&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;contacts&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;scmcontacts&lt;/span&gt;
    &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;contactsapi&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;scmcontacts&lt;/span&gt;
    &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;contactsapi&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5000&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a file called &lt;code&gt;contacts-app.yaml&lt;/code&gt; and apply it to the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; contacts-app.yaml

deployment.apps/ca-deploy created
service/contacts created

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check whether the contacts pods can communicate with the MSSQL server, let’s quickly spin up an interactive pod and issue a few requests from within the cluster. As you can see in the YAML manifests, the services have been created with type &lt;code&gt;ClusterIP&lt;/code&gt;, which means they don’t get an external IP address. Exposing the contacts service to the public will be the responsibility of KrakenD.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--image&lt;/span&gt; csaocpger/httpie:1.0 http &lt;span class="nt"&gt;--restart&lt;/span&gt; Never &lt;span class="nt"&gt;--&lt;/span&gt; /bin/sh


&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'{"firstname": "Satya", "lastname": "Nadella", "email": "satya@microsoft.com", "company": "Microsoft", "avatarLocation": "", "phone": "+1 32 6546 6545", "mobile": "+1 32 6546 6542", "description": "CEO of Microsoft", "street": "Street", "houseNumber": "1", "city": "Redmond", "postalCode": "123456", "country": "USA"}'&lt;/span&gt; | http POST http://contacts:8080/api/contacts

HTTP/1.1 201 Created
Content-Type: application/json&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;utf-8
Date: Wed, 17 Feb 2021 10:58:57 GMT
Location: http://contacts:8080/api/contacts/ee176782-a767-45ad-a7df-dbcefef22688
Server: Kestrel
Transfer-Encoding: chunked

&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"avatarLocation"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
    &lt;span class="s2"&gt;"city"&lt;/span&gt;: &lt;span class="s2"&gt;"Redmond"&lt;/span&gt;,
    &lt;span class="s2"&gt;"company"&lt;/span&gt;: &lt;span class="s2"&gt;"Microsoft"&lt;/span&gt;,
    &lt;span class="s2"&gt;"country"&lt;/span&gt;: &lt;span class="s2"&gt;"USA"&lt;/span&gt;,
    &lt;span class="s2"&gt;"description"&lt;/span&gt;: &lt;span class="s2"&gt;"CEO of Microsoft"&lt;/span&gt;,
    &lt;span class="s2"&gt;"email"&lt;/span&gt;: &lt;span class="s2"&gt;"satya@microsoft.com"&lt;/span&gt;,
    &lt;span class="s2"&gt;"firstname"&lt;/span&gt;: &lt;span class="s2"&gt;"Satya"&lt;/span&gt;,
    &lt;span class="s2"&gt;"houseNumber"&lt;/span&gt;: &lt;span class="s2"&gt;"1"&lt;/span&gt;,
    &lt;span class="s2"&gt;"id"&lt;/span&gt;: &lt;span class="s2"&gt;"ee176782-a767-45ad-a7df-dbcefef22688"&lt;/span&gt;,
    &lt;span class="s2"&gt;"lastname"&lt;/span&gt;: &lt;span class="s2"&gt;"Nadella"&lt;/span&gt;,
    &lt;span class="s2"&gt;"mobile"&lt;/span&gt;: &lt;span class="s2"&gt;"+1 32 6546 6542"&lt;/span&gt;,
    &lt;span class="s2"&gt;"phone"&lt;/span&gt;: &lt;span class="s2"&gt;"+1 32 6546 6545"&lt;/span&gt;,
    &lt;span class="s2"&gt;"postalCode"&lt;/span&gt;: &lt;span class="s2"&gt;"123456"&lt;/span&gt;,
    &lt;span class="s2"&gt;"street"&lt;/span&gt;: &lt;span class="s2"&gt;"Street"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;http GET http://contacts:8080/api/contacts
HTTP/1.1 200 OK
Content-Type: application/json&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;utf-8
Date: Wed, 17 Feb 2021 11:00:07 GMT
Server: Kestrel
Transfer-Encoding: chunked

&lt;span class="o"&gt;[&lt;/span&gt;
    &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"avatarLocation"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
        &lt;span class="s2"&gt;"city"&lt;/span&gt;: &lt;span class="s2"&gt;"Redmond"&lt;/span&gt;,
        &lt;span class="s2"&gt;"company"&lt;/span&gt;: &lt;span class="s2"&gt;"Microsoft"&lt;/span&gt;,
        &lt;span class="s2"&gt;"country"&lt;/span&gt;: &lt;span class="s2"&gt;"USA"&lt;/span&gt;,
        &lt;span class="s2"&gt;"description"&lt;/span&gt;: &lt;span class="s2"&gt;"CEO of Microsoft"&lt;/span&gt;,
        &lt;span class="s2"&gt;"email"&lt;/span&gt;: &lt;span class="s2"&gt;"satya@microsoft.com"&lt;/span&gt;,
        &lt;span class="s2"&gt;"firstname"&lt;/span&gt;: &lt;span class="s2"&gt;"Satya"&lt;/span&gt;,
        &lt;span class="s2"&gt;"houseNumber"&lt;/span&gt;: &lt;span class="s2"&gt;"1"&lt;/span&gt;,
        &lt;span class="s2"&gt;"id"&lt;/span&gt;: &lt;span class="s2"&gt;"ee176782-a767-45ad-a7df-dbcefef22688"&lt;/span&gt;,
        &lt;span class="s2"&gt;"lastname"&lt;/span&gt;: &lt;span class="s2"&gt;"Nadella"&lt;/span&gt;,
        &lt;span class="s2"&gt;"mobile"&lt;/span&gt;: &lt;span class="s2"&gt;"+1 32 6546 6542"&lt;/span&gt;,
        &lt;span class="s2"&gt;"phone"&lt;/span&gt;: &lt;span class="s2"&gt;"+1 32 6546 6545"&lt;/span&gt;,
        &lt;span class="s2"&gt;"postalCode"&lt;/span&gt;: &lt;span class="s2"&gt;"123456"&lt;/span&gt;,
        &lt;span class="s2"&gt;"street"&lt;/span&gt;: &lt;span class="s2"&gt;"Street"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;]&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, we can create new contacts by &lt;code&gt;POST&lt;/code&gt;ing a JSON payload to the endpoint &lt;code&gt;http://contacts:8080/api/contacts&lt;/code&gt; (first request) and retrieve what has been added to the database by sending a &lt;code&gt;GET&lt;/code&gt; request to the same endpoint (second request).&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a KrakenD Configuration
&lt;/h2&gt;

&lt;p&gt;So far, everything works as expected and we have a working API in the cluster that stores its data in an MSSQL server. As discussed in the previous section, we deliberately did not expose the contacts service to the internet. We will do this later by adding KrakenD in front of that service, giving the API gateway a public IP so that it is externally reachable.&lt;/p&gt;

&lt;p&gt;But first, we need to create a KrakenD configuration (a plain JSON file) that defines the endpoints, the backend services, how requests should be routed, and so on. Fortunately, KrakenD has a very easy-to-use designer that gives you a head start when creating that configuration file – it’s simply called the &lt;a href="https://designer.krakend.io/" rel="noopener noreferrer"&gt;KrakenDesigner&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F02%2Fkrakendesigner.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F02%2Fkrakendesigner.png%3Fw%3D1024" alt="kraken designer"&gt;&lt;/a&gt;KrakenDesigner – sample service&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F02%2Fkrakendesigner_logging.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F02%2Fkrakendesigner_logging.png%3Fw%3D1024" alt="kraken designer logging config"&gt;&lt;/a&gt;KrakenDesigner – logging configuration&lt;/p&gt;

&lt;p&gt;When creating such a configuration, it comes down to these simple steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Adjust “common” configuration for KrakenD like service name, port, CORS, exposed/allowed headers etc.&lt;/li&gt;
&lt;li&gt;Add backend services, in our case just the Kubernetes service for our contacts API (&lt;code&gt;http://contacts:8080&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Define the endpoints exposed &lt;strong&gt;at the gateway&lt;/strong&gt; (&lt;code&gt;/contacts&lt;/code&gt;) and which backend to route them to (&lt;code&gt;http://contacts:8080/api/contacts&lt;/code&gt;). Here you can also define whether a JWT token should be validated, which headers to pass to the backend, etc. A lot of options – which we obviously don’t need in our simple setup.&lt;/li&gt;
&lt;li&gt;Add a logging configuration – it’s optional, but you should do it. We simply enable &lt;code&gt;stdout&lt;/code&gt; logging, but you can also use OpenCensus, for example, and even expose metrics to a Prometheus instance (nice!).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As a last step, you can export the configuration you created in the UI to a JSON file. For our sample, this file looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"extra_config"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"github_com/devopsfaith/krakend-cors"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"allow_origins"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"expose_headers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="s2"&gt;"Content-Length"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="s2"&gt;"Location"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"max_age"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"12h"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"allow_methods"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="s2"&gt;"GET"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="s2"&gt;"POST"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="s2"&gt;"PUT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="s2"&gt;"DELETE"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="s2"&gt;"OPTIONS"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"github_com/devopsfaith/krakend-gologging"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"INFO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"prefix"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"[KRAKEND]"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"syslog"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"stdout"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"format"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"timeout"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"3000ms"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"cache_ttl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"300s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"output_encoding"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"json"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"contacts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"port"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"endpoints"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"endpoint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/contacts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"method"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GET"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"output_encoding"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"no-op"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"extra_config"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"backend"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"url_pattern"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/api/contacts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"encoding"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"no-op"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"sd"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"static"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"method"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GET"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"extra_config"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"host"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"http://contacts:8080"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"disable_host_sanitize"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"endpoint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/contacts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"method"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"POST"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"output_encoding"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"no-op"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"extra_config"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"backend"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"url_pattern"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/api/contacts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"encoding"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"no-op"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"sd"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"static"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"method"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"POST"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"extra_config"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"host"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"http://contacts:8080"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"disable_host_sanitize"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We simply expose two endpoints: one that lets us create (POST) contacts and one that retrieves (GET) all contacts from the database – basically the same operations we performed when calling the contacts service from within the cluster.&lt;/p&gt;

&lt;p&gt;Save the file above to your local machine (name it &lt;code&gt;krakend.json&lt;/code&gt;), as we need to add it to Kubernetes as a &lt;code&gt;ConfigMap&lt;/code&gt; later.&lt;/p&gt;
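&lt;p&gt;Since a syntax error in this file will only surface once KrakenD starts, it can be worth sanity-checking the exported JSON first – a minimal sketch, assuming &lt;code&gt;python3&lt;/code&gt; is available on your machine (the &lt;code&gt;krakend check&lt;/code&gt; command of the KrakenD binary is an alternative if you have it installed):&lt;/p&gt;

```shell
# Validate that krakend.json is well-formed JSON before turning it into a ConfigMap.
# python3's built-in json.tool needs no extra tooling.
python3 -m json.tool krakend.json > /dev/null \
  && echo "krakend.json is valid JSON" \
  || echo "krakend.json contains a syntax error"
```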

&lt;h2&gt;
  
  
  Add the KrakenD API Gateway
&lt;/h2&gt;

&lt;p&gt;So, now we are ready to deploy KrakenD to the cluster: we have an API that we want to expose and we have the KrakenD configuration. To dynamically add the configuration (&lt;code&gt;krakend.json&lt;/code&gt;) to our running KrakenD instance, we will use a Kubernetes &lt;code&gt;ConfigMap&lt;/code&gt; object. This gives us the ability to decouple the configuration from our KrakenD application instance/pod – if you are not familiar with the concept, have a look at the official documentation &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;During the startup of KrakenD, we will then use this &lt;code&gt;ConfigMap&lt;/code&gt; and mount its content (the &lt;code&gt;krakend.json&lt;/code&gt; file) into the container (folder &lt;code&gt;/etc/krakend&lt;/code&gt;) so that the KrakenD process can pick it up and apply the configuration.&lt;/p&gt;
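&lt;p&gt;To make the mount concrete, here is a minimal sketch of the relevant part of such a KrakenD pod spec – the volume and container names are illustrative, not taken from the final manifest:&lt;/p&gt;

```yaml
# Sketch: surfacing the krakend-cfg ConfigMap as /etc/krakend in the container.
# "krakend-config" and "gateway" are illustrative names.
spec:
  containers:
    - name: gateway
      image: devopsfaith/krakend
      volumeMounts:
        - name: krakend-config
          mountPath: /etc/krakend   # KrakenD reads krakend.json from this folder
  volumes:
    - name: krakend-config
      configMap:
        name: krakend-cfg           # the ConfigMap we create next
```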

&lt;p&gt;In the folder where you saved the config file, issue the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create configmap krakend-cfg &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;./krakend-cfg.json

configmap/krakend-cfg created

&lt;span class="c"&gt;# check the contents of the configmap&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe configmap krakend-cfg

Name:         krakend-cfg
Namespace:    default
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;

Data
&lt;span class="o"&gt;====&lt;/span&gt;
krakend.json:
&lt;span class="nt"&gt;----&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"version"&lt;/span&gt;: 2,
    &lt;span class="s2"&gt;"extra_config"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"github_com/devopsfaith/krakend-cors"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"allow_origins"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
          &lt;span class="s2"&gt;"*"&lt;/span&gt;
        &lt;span class="o"&gt;]&lt;/span&gt;,
        &lt;span class="s2"&gt;"expose_headers"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
          &lt;span class="s2"&gt;"Content-Length"&lt;/span&gt;,
          &lt;span class="s2"&gt;"Location"&lt;/span&gt;
        &lt;span class="o"&gt;]&lt;/span&gt;,
        &lt;span class="s2"&gt;"max_age"&lt;/span&gt;: &lt;span class="s2"&gt;"12h"&lt;/span&gt;,
        &lt;span class="s2"&gt;"allow_methods"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
          &lt;span class="s2"&gt;"GET"&lt;/span&gt;,
          &lt;span class="s2"&gt;"POST"&lt;/span&gt;,
          &lt;span class="s2"&gt;"PUT"&lt;/span&gt;,
          &lt;span class="s2"&gt;"DELETE"&lt;/span&gt;,
          &lt;span class="s2"&gt;"OPTIONS"&lt;/span&gt;
        &lt;span class="o"&gt;]&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;,
      &lt;span class="s2"&gt;"github_com/devopsfaith/krakend-gologging"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"level"&lt;/span&gt;: &lt;span class="s2"&gt;"INFO"&lt;/span&gt;,
        &lt;span class="s2"&gt;"prefix"&lt;/span&gt;: &lt;span class="s2"&gt;"[KRAKEND]"&lt;/span&gt;,
        &lt;span class="s2"&gt;"syslog"&lt;/span&gt;: &lt;span class="nb"&gt;false&lt;/span&gt;,
        &lt;span class="s2"&gt;"stdout"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;,
        &lt;span class="s2"&gt;"format"&lt;/span&gt;: &lt;span class="s2"&gt;"default"&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;,
    &lt;span class="s2"&gt;"timeout"&lt;/span&gt;: &lt;span class="s2"&gt;"3000ms"&lt;/span&gt;,
    &lt;span class="s2"&gt;"cache_ttl"&lt;/span&gt;: &lt;span class="s2"&gt;"300s"&lt;/span&gt;,
    &lt;span class="s2"&gt;"output_encoding"&lt;/span&gt;: &lt;span class="s2"&gt;"json"&lt;/span&gt;,
    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"contacts"&lt;/span&gt;,
    &lt;span class="s2"&gt;"port"&lt;/span&gt;: 8080,
    &lt;span class="s2"&gt;"endpoints"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
      &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"endpoint"&lt;/span&gt;: &lt;span class="s2"&gt;"/contacts"&lt;/span&gt;,
        &lt;span class="s2"&gt;"method"&lt;/span&gt;: &lt;span class="s2"&gt;"GET"&lt;/span&gt;,
        &lt;span class="s2"&gt;"output_encoding"&lt;/span&gt;: &lt;span class="s2"&gt;"no-op"&lt;/span&gt;,
        &lt;span class="s2"&gt;"extra_config"&lt;/span&gt;: &lt;span class="o"&gt;{}&lt;/span&gt;,
        &lt;span class="s2"&gt;"backend"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
          &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"url_pattern"&lt;/span&gt;: &lt;span class="s2"&gt;"/api/contacts"&lt;/span&gt;,
            &lt;span class="s2"&gt;"encoding"&lt;/span&gt;: &lt;span class="s2"&gt;"no-op"&lt;/span&gt;,
            &lt;span class="s2"&gt;"sd"&lt;/span&gt;: &lt;span class="s2"&gt;"static"&lt;/span&gt;,
            &lt;span class="s2"&gt;"method"&lt;/span&gt;: &lt;span class="s2"&gt;"GET"&lt;/span&gt;,
            &lt;span class="s2"&gt;"extra_config"&lt;/span&gt;: &lt;span class="o"&gt;{}&lt;/span&gt;,
            &lt;span class="s2"&gt;"host"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
              &lt;span class="s2"&gt;"http://contacts"&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"disable_host_sanitize"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;
          &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;]&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;,
      &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"endpoint"&lt;/span&gt;: &lt;span class="s2"&gt;"/contacts"&lt;/span&gt;,
        &lt;span class="s2"&gt;"method"&lt;/span&gt;: &lt;span class="s2"&gt;"POST"&lt;/span&gt;,
        &lt;span class="s2"&gt;"output_encoding"&lt;/span&gt;: &lt;span class="s2"&gt;"no-op"&lt;/span&gt;,
        &lt;span class="s2"&gt;"extra_config"&lt;/span&gt;: &lt;span class="o"&gt;{}&lt;/span&gt;,
        &lt;span class="s2"&gt;"backend"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
          &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"url_pattern"&lt;/span&gt;: &lt;span class="s2"&gt;"/api/contacts"&lt;/span&gt;,
            &lt;span class="s2"&gt;"encoding"&lt;/span&gt;: &lt;span class="s2"&gt;"no-op"&lt;/span&gt;,
            &lt;span class="s2"&gt;"sd"&lt;/span&gt;: &lt;span class="s2"&gt;"static"&lt;/span&gt;,
            &lt;span class="s2"&gt;"method"&lt;/span&gt;: &lt;span class="s2"&gt;"POST"&lt;/span&gt;,
            &lt;span class="s2"&gt;"extra_config"&lt;/span&gt;: &lt;span class="o"&gt;{}&lt;/span&gt;,
            &lt;span class="s2"&gt;"host"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
              &lt;span class="s2"&gt;"http://contacts"&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"disable_host_sanitize"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;
          &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;]&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

Events: &amp;lt;none&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
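&lt;p&gt;As a side note, if you prefer a declarative workflow, the same &lt;code&gt;ConfigMap&lt;/code&gt; could also be expressed as a manifest and applied with &lt;code&gt;kubectl apply&lt;/code&gt; – a minimal sketch, with the JSON content abbreviated here:&lt;/p&gt;

```yaml
# Declarative equivalent of the "kubectl create configmap" command above.
# The krakend.json content is abbreviated -- use the full file from this article.
apiVersion: v1
kind: ConfigMap
metadata:
  name: krakend-cfg
data:
  krakend.json: |
    {
      "version": 2,
      "name": "contacts",
      "port": 8080
    }
```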



&lt;p&gt;That looks great. We are finally ready to spin up KrakenD in the cluster. We therefore apply the following Kubernetes manifest file, which creates a deployment and a Kubernetes service of type &lt;code&gt;LoadBalancer&lt;/code&gt; – which gives us a public IP address for KrakenD via the Azure load balancer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="c1"&gt;# content of api-gateway.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;krakend-deploy&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apigateway&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RollingUpdate&lt;/span&gt;
    &lt;span class="na"&gt;rollingUpdate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;maxSurge&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;maxUnavailable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;minReadySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;revisionHistoryLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apigateway&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apigateway&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;automountServiceAccountToken&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;krakend-cfg&lt;/span&gt;
          &lt;span class="na"&gt;configMap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;krakend-cfg&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application&lt;/span&gt;
          &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;64Mi'&lt;/span&gt;
              &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;100m'&lt;/span&gt;
            &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;1024Mi'&lt;/span&gt;
              &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;1000m'&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;devopsfaith/krakend:1.2&lt;/span&gt;
          &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
          &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;krakend-cfg&lt;/span&gt;
            &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/krakend&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apigateway&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apigateway&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apigateway&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me highlight the two important parts here that mount the configuration file into our pod. First, in the &lt;code&gt;volumes&lt;/code&gt; section we create a volume (named &lt;code&gt;krakend-cfg&lt;/code&gt;) referencing the &lt;code&gt;ConfigMap&lt;/code&gt; we created before, and second, under &lt;code&gt;volumeMounts&lt;/code&gt; we mount that volume into the container (&lt;code&gt;mountPath: /etc/krakend&lt;/code&gt;).&lt;/p&gt;
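&lt;p&gt;Condensed from the manifest above, these are the two pieces that wire the &lt;code&gt;ConfigMap&lt;/code&gt; into the pod:&lt;/p&gt;

```yaml
# Excerpt from api-gateway.yaml (pod template spec):
volumes:
  - name: krakend-cfg       # volume backed by the ConfigMap created earlier
    configMap:
      name: krakend-cfg
containers:
  - name: application
    volumeMounts:
      - name: krakend-cfg   # mount it so KrakenD finds /etc/krakend/krakend.json
        mountPath: /etc/krakend
```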

&lt;p&gt;Save the manifest file and apply it to the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; api-gateway.yaml

deployment.apps/krakend-deploy created
service/apigateway created

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The resulting architecture within the cluster is now as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F02%2Fach-gw.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2021%2F02%2Fach-gw.png%3Fw%3D1024" alt="Architecture with krakend"&gt;&lt;/a&gt;Architecture with KrakenD API gateway&lt;/p&gt;

&lt;p&gt;As a last step, we just need to retrieve the public IP of our “LoadBalancer” service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get services

NAME         TYPE           CLUSTER-IP    EXTERNAL-IP    PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;          AGE
apigateway   LoadBalancer   10.0.26.150   104.45.73.37   8080:31552/TCP   4h53m
contacts     ClusterIP      10.0.155.35   &amp;lt;none&amp;gt;         8080/TCP         3h47m
kubernetes   ClusterIP      10.0.0.1      &amp;lt;none&amp;gt;         443/TCP          26h
mssqlsvr     ClusterIP      10.0.192.57   &amp;lt;none&amp;gt;         1433/TCP         3h59m

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, in our case here, we got &lt;code&gt;104.45.73.37&lt;/code&gt;. Let’s issue a few requests (either with a browser or a tool like &lt;a href="https://httpie.io/" rel="noopener noreferrer"&gt;httpie&lt;/a&gt; – which I use all the time) against the resulting URL &lt;a href="http://104.45.73.37:8080/contacts" rel="noopener noreferrer"&gt;http://104.45.73.37:8080/contacts&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;http http://104.45.73.37:8080/contacts

HTTP/1.1 200 OK
Content-Length: 337
Content-Type: application/json&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;utf-8
Date: Wed, 17 Feb 2021 12:10:20 GMT
Server: Kestrel
Vary: Origin
X-Krakend: Version 1.2.0
X-Krakend-Completed: &lt;span class="nb"&gt;false&lt;/span&gt;

&lt;span class="o"&gt;[&lt;/span&gt;
    &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"avatarLocation"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
        &lt;span class="s2"&gt;"city"&lt;/span&gt;: &lt;span class="s2"&gt;"Redmond"&lt;/span&gt;,
        &lt;span class="s2"&gt;"company"&lt;/span&gt;: &lt;span class="s2"&gt;"Microsoft"&lt;/span&gt;,
        &lt;span class="s2"&gt;"country"&lt;/span&gt;: &lt;span class="s2"&gt;"USA"&lt;/span&gt;,
        &lt;span class="s2"&gt;"description"&lt;/span&gt;: &lt;span class="s2"&gt;"CEO of Microsoft"&lt;/span&gt;,
        &lt;span class="s2"&gt;"email"&lt;/span&gt;: &lt;span class="s2"&gt;"satya@microsoft.com"&lt;/span&gt;,
        &lt;span class="s2"&gt;"firstname"&lt;/span&gt;: &lt;span class="s2"&gt;"Satya"&lt;/span&gt;,
        &lt;span class="s2"&gt;"houseNumber"&lt;/span&gt;: &lt;span class="s2"&gt;"1"&lt;/span&gt;,
        &lt;span class="s2"&gt;"id"&lt;/span&gt;: &lt;span class="s2"&gt;"ee176782-a767-45ad-a7df-dbcefef22688"&lt;/span&gt;,
        &lt;span class="s2"&gt;"lastname"&lt;/span&gt;: &lt;span class="s2"&gt;"Nadella"&lt;/span&gt;,
        &lt;span class="s2"&gt;"mobile"&lt;/span&gt;: &lt;span class="s2"&gt;"+1 32 6546 6542"&lt;/span&gt;,
        &lt;span class="s2"&gt;"phone"&lt;/span&gt;: &lt;span class="s2"&gt;"+1 32 6546 6545"&lt;/span&gt;,
        &lt;span class="s2"&gt;"postalCode"&lt;/span&gt;: &lt;span class="s2"&gt;"123456"&lt;/span&gt;,
        &lt;span class="s2"&gt;"street"&lt;/span&gt;: &lt;span class="s2"&gt;"Street"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;]&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Works like a charm! Also, have a look at the logs of the KrakenD container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl logs krakend-deploy-86c44c787d-qczjh &lt;span class="nt"&gt;-f&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true

&lt;/span&gt;Parsing configuration file: /etc/krakend/krakend.json
&lt;span class="o"&gt;[&lt;/span&gt;KRAKEND] 2021/02/17 - 09:59:59.745 ▶ ERROR unable to create the GELF writer: getting the extra config &lt;span class="k"&gt;for &lt;/span&gt;the krakend-gelf module
&lt;span class="o"&gt;[&lt;/span&gt;KRAKEND] 2021/02/17 - 09:59:59.745 ▶ INFO Listening on port: 8080
&lt;span class="o"&gt;[&lt;/span&gt;KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN influxdb: unable to load custom config
&lt;span class="o"&gt;[&lt;/span&gt;KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN opencensus: no extra config defined &lt;span class="k"&gt;for &lt;/span&gt;the opencensus module
&lt;span class="o"&gt;[&lt;/span&gt;KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN building the etcd client: unable to create the etcd client: no config
&lt;span class="o"&gt;[&lt;/span&gt;KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN bloomFilter: no config &lt;span class="k"&gt;for &lt;/span&gt;the bloomfilter
&lt;span class="o"&gt;[&lt;/span&gt;KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN no config present &lt;span class="k"&gt;for &lt;/span&gt;the httpsecure module
&lt;span class="o"&gt;[&lt;/span&gt;KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: signer disabled &lt;span class="k"&gt;for &lt;/span&gt;the endpoint /contacts
&lt;span class="o"&gt;[&lt;/span&gt;KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: validator disabled &lt;span class="k"&gt;for &lt;/span&gt;the endpoint /contacts
&lt;span class="o"&gt;[&lt;/span&gt;KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: signer disabled &lt;span class="k"&gt;for &lt;/span&gt;the endpoint /contacts
&lt;span class="o"&gt;[&lt;/span&gt;KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: validator disabled &lt;span class="k"&gt;for &lt;/span&gt;the endpoint /contacts
&lt;span class="o"&gt;[&lt;/span&gt;KRAKEND] 2021/02/17 - 09:59:59.747 ▶ INFO registering usage stats &lt;span class="k"&gt;for &lt;/span&gt;cluster ID &lt;span class="s1"&gt;'293C0vbu4hqE6jM0BsSNl/HCzaAKsvjhSbHtWo9Hacc='&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;GIN] 2021/02/17 - 10:01:44 | 200 | 4.093438ms | 10.244.1.1 | GET &lt;span class="s2"&gt;"/contacts"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;GIN] 2021/02/17 - 10:01:46 | 200 | 5.397977ms | 10.244.1.1 | GET &lt;span class="s2"&gt;"/contacts"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;GIN] 2021/02/17 - 10:01:56 | 200 | 6.820172ms | 10.244.1.1 | GET &lt;span class="s2"&gt;"/contacts"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;GIN] 2021/02/17 - 10:01:57 | 200 | 5.911475ms | 10.244.1.1 | GET &lt;span class="s2"&gt;"/contacts"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As mentioned before, KrakenD logs its events to &lt;code&gt;stdout&lt;/code&gt; and we can see how the requests are coming in, their destination and the time each request needed to complete at the gateway level.&lt;/p&gt;
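&lt;p&gt;Since the gateway also exposes the POST endpoint, you can create a contact the same way. A hedged sketch – the payload fields are illustrative (modeled on the GET response above), and you need to replace the IP with your own &lt;code&gt;EXTERNAL-IP&lt;/code&gt;:&lt;/p&gt;

```shell
# Illustrative contact payload (field names taken from the GET response above).
PAYLOAD='{"firstname":"Ada","lastname":"Lovelace","email":"ada@example.com","company":"Example Inc.","city":"London","country":"UK"}'

# Validate the payload locally before sending it through the gateway.
if echo "$PAYLOAD" | python3 -m json.tool > /dev/null; then
  echo "payload is valid JSON"
fi

# Send it via the gateway (uncomment and replace the IP with your EXTERNAL-IP):
# curl -s -X POST "http://104.45.73.37:8080/contacts" -H "Content-Type: application/json" -d "$PAYLOAD"
```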

&lt;h2&gt;
  
  
  Wrap-Up
&lt;/h2&gt;

&lt;p&gt;In this brief article, I showed you how to deploy KrakenD to an AKS/Kubernetes cluster on Azure and how to set up a first, simple example of exposing an API running in Kubernetes via the KrakenD API gateway. The project has so many useful features that this post only covers the very basics. I really encourage you to have a look at the product when you consider hosting an API gateway within your Kubernetes cluster. The folks at KrakenD do a great job and are also open and accept pull requests, if you want to &lt;a href="https://github.com/devopsfaith" rel="noopener noreferrer"&gt;contribute to the project&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As mentioned at the beginning of this article, they recently published a version of their KrakenD container image to the Azure Marketplace. This gives you the ability to pull their current and future images directly into your own Azure Container Registry, enabling scenarios like static image scanning, Azure Security Center integration, geo-replication etc. You can find their offering here: &lt;a href="https://azuremarketplace.microsoft.com/en-us/marketplace/apps/brutale.krakend-ce?tab=Overview" rel="noopener noreferrer"&gt;KrakenD API Gateway&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hope you enjoyed this brief introduction…happy hacking, friends! &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fs0.wp.com%2Fwp-content%2Fmu-plugins%2Fwpcom-smileys%2Ftwemoji%2F2%2F72x72%2F1f596.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fs0.wp.com%2Fwp-content%2Fmu-plugins%2Fwpcom-smileys%2Ftwemoji%2F2%2F72x72%2F1f596.png" alt="🖖"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>apigateway</category>
      <category>azure</category>
      <category>kubernetes</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Azure DevOps Terraform Provider</title>
      <dc:creator>Christian Dennig</dc:creator>
      <pubDate>Wed, 05 Aug 2020 18:33:00 +0000</pubDate>
      <link>https://forem.com/cdennig/azure-devops-terraform-provider-22fb</link>
      <guid>https://forem.com/cdennig/azure-devops-terraform-provider-22fb</guid>
<description>&lt;p&gt;Not too long ago, the first version of the Azure DevOps Terraform Provider was released. In this article I will show you, with several examples, which features are currently supported for build pipelines and how to use the provider – also in conjunction with Azure. The provider is the last “building block” many people working in the “Infrastructure as Code” space need to create environments (including Git repos, service connections, build + release pipelines etc.) completely automatically.&lt;/p&gt;

&lt;p&gt;The provider was released in June 2020 in version &lt;em&gt;0.0.1&lt;/em&gt;, but to be honest: the feature set is quite rich already at this early stage.&lt;/p&gt;

&lt;p&gt;The features I would like to discuss with the help of examples are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating a DevOps project including a hosted Git repo&lt;/li&gt;
&lt;li&gt;Creating a build pipeline&lt;/li&gt;
&lt;li&gt;Using variables and variable groups&lt;/li&gt;
&lt;li&gt;Creating an Azure service connection and using variables/secrets from an Azure KeyVault&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Example 1: Basic Usage&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The Azure DevOps provider can be integrated in a script like any other Terraform provider. All that’s required is the URL to the DevOps organisation and a Personal Access Token (&lt;em&gt;PAT&lt;/em&gt;) with which the provider can authenticate itself against Azure DevOps.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;PAT&lt;/em&gt; can be easily created via the UI of Azure DevOps by creating a new token via &lt;code&gt;User Settings --&amp;gt; Personal Access Token --&amp;gt; New Token&lt;/code&gt;. For the sake of simplicity, in this example I give “Full Access” to it…of course this should be adapted for your own purposes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5aMYAFQD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/pat.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5aMYAFQD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/pat.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Create a personal access token&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The documentation of the Terraform Provider contains information about the permissions needed for the respective resource.&lt;/p&gt;

&lt;p&gt;&lt;cite&gt;Defining relevant scopes&lt;/cite&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once the access token has been created, the Azure DevOps provider can be referenced in the terraform script as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"azuredevops"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;gt;= 0.0.1"&lt;/span&gt;
  &lt;span class="nx"&gt;org_service_url&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;orgurl&lt;/span&gt;
  &lt;span class="nx"&gt;personal_access_token&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pat&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The two variables &lt;code&gt;orgurl&lt;/code&gt; and &lt;code&gt;pat&lt;/code&gt; should be exposed as environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;export &lt;/span&gt;TF_VAR_orgurl &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://dev.azure.com/&amp;lt;ORG_NAME&amp;gt;"&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;export &lt;/span&gt;TF_VAR_pat &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;PAT_AUS_AZDEVOPS&amp;gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So, this is basically all that is needed to work with Terraform and Azure DevOps. Let’s start by creating a new project and a git repository. Two resources are needed for this, &lt;a href="https://www.terraform.io/docs/providers/azuredevops/r/project.html"&gt;azuredevops_project&lt;/a&gt; and &lt;a href="https://www.terraform.io/docs/providers/azuredevops/r/azure_git_repository.html"&gt;azuredevops_git_repository&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azuredevops_project"&lt;/span&gt; &lt;span class="s2"&gt;"project"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project_name&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Terraform DevOps Project"&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Sample project to demonstrate AzDevOps &amp;lt;-&amp;gt; Terraform integragtion"&lt;/span&gt;
  &lt;span class="nx"&gt;visibility&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"private"&lt;/span&gt;
  &lt;span class="nx"&gt;version_control&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Git"&lt;/span&gt;
  &lt;span class="nx"&gt;work_item_template&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Agile"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azuredevops_git_repository"&lt;/span&gt; &lt;span class="s2"&gt;"repo"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuredevops_project&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Sample Empty Git Repository"&lt;/span&gt;

  &lt;span class="nx"&gt;initialization&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;init_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Clean"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Additionally, we also need an initial pipeline that will be triggered on a &lt;em&gt;git push&lt;/em&gt; to &lt;code&gt;master&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
In a pipeline, you usually work with variables that come from different sources. These can be pipeline variables, values from a variable group or from external sources such as an Azure KeyVault. The first, simple build definition uses pipeline variables (&lt;code&gt;mypipelinevar&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azuredevops_build_definition"&lt;/span&gt; &lt;span class="s2"&gt;"build"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuredevops_project&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Sample Build Pipeline"&lt;/span&gt;

  &lt;span class="nx"&gt;ci_trigger&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;use_yaml&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;repository&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;repo_type&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"TfsGit"&lt;/span&gt;
    &lt;span class="nx"&gt;repo_id&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuredevops_git_repository&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;repo&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
    &lt;span class="nx"&gt;branch_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuredevops_git_repository&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;repo&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;default_branch&lt;/span&gt;
    &lt;span class="nx"&gt;yml_path&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"azure-pipeline.yaml"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"mypipelinevar"&lt;/span&gt;
    &lt;span class="nx"&gt;value&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Hello From Az DevOps Pipeline!"&lt;/span&gt;
    &lt;span class="nx"&gt;is_secret&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The corresponding pipeline definition looks as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;trigger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;master&lt;/span&gt;

&lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vmImage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ubuntu-latest'&lt;/span&gt;

&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo Hello, world!&lt;/span&gt;
  &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Run&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;a&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;one-line&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;script'&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;echo Pipeline is running!&lt;/span&gt;
    &lt;span class="s"&gt;echo And here is the value of our pipeline variable&lt;/span&gt;
    &lt;span class="s"&gt;echo $(mypipelinevar)&lt;/span&gt;
  &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Run&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;a&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;multi-line&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;script'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The pipeline just executes a few scripts – for demo purposes – and outputs the variable &lt;strong&gt;stored in the build definition&lt;/strong&gt; to the console.&lt;/p&gt;

&lt;p&gt;Running the Terraform script creates an Azure DevOps project, a git repository and a build definition.&lt;/p&gt;
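&lt;p&gt;The usual Terraform workflow applies here (a sketch; it assumes the provider credentials, e.g. your Azure DevOps organization URL and personal access token, are already configured in the environment):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# download the providers and initialize the working directory
terraform init

# preview the resources that will be created
terraform plan

# create the Azure DevOps project, git repository and build definition
terraform apply
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;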

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WWnpHp2s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/azdevops_project.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WWnpHp2s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/azdevops_project.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Azure DevOps Project&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JMYupkmz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/git_repo.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JMYupkmz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/git_repo.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Git Repository&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bPqIiO40--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/pipeline_create.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bPqIiO40--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/pipeline_create.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Pipeline&lt;/p&gt;

&lt;p&gt;As soon as the file &lt;code&gt;azure-pipeline.yaml&lt;/code&gt; discussed above is pushed to the repo, the corresponding pipeline is triggered and the results can be found in the respective build step:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DWMxkTI0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/pipeline_running.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DWMxkTI0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/pipeline_running.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Running pipeline&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vV-_XMm4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/pipeline_var.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vV-_XMm4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/pipeline_var.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Output of build pipeline&lt;/p&gt;

&lt;h2&gt;
  &lt;strong&gt;Example 2: Using variable groups&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Normally, variables are not directly stored in a pipeline definition, but rather put into &lt;a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/library/variable-groups?view=azure-devops&amp;amp;tabs=yaml"&gt;Azure DevOps variable groups&lt;/a&gt;. This allows you to store individual variables centrally in Azure DevOps and then reference and use them in different pipelines.&lt;/p&gt;

&lt;p&gt;Fortunately, variable groups can also be created using Terraform. For this purpose, the resource &lt;a href="https://www.terraform.io/docs/providers/azuredevops/r/variable_group.html"&gt;azuredevops_variable_group&lt;/a&gt; is used. In our script this looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azuredevops_variable_group"&lt;/span&gt; &lt;span class="s2"&gt;"vars"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project_id&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuredevops_project&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-variable-group"&lt;/span&gt;
  &lt;span class="nx"&gt;allow_access&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"var1"&lt;/span&gt;
    &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"value1"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"var2"&lt;/span&gt;
    &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"value2"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azuredevops_build_definition"&lt;/span&gt; &lt;span class="s2"&gt;"buildwithgroup"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuredevops_project&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Sample Build Pipeline with VarGroup"&lt;/span&gt;

  &lt;span class="nx"&gt;ci_trigger&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;use_yaml&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;variable_groups&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nx"&gt;azuredevops_variable_group&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vars&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;repository&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;repo_type&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"TfsGit"&lt;/span&gt;
    &lt;span class="nx"&gt;repo_id&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuredevops_git_repository&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;repo&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
    &lt;span class="nx"&gt;branch_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuredevops_git_repository&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;repo&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;default_branch&lt;/span&gt;
    &lt;span class="nx"&gt;yml_path&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"azure-pipeline-with-vargroup.yaml"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The first part of the Terraform script creates the variable group in Azure DevOps (name: &lt;code&gt;my-variable-group&lt;/code&gt;) including two variables (&lt;code&gt;var1&lt;/code&gt; and &lt;code&gt;var2&lt;/code&gt;). The second part – a build definition – references the variable group, so that the variables can be accessed in the corresponding pipeline file (&lt;code&gt;azure-pipeline-with-vargroup.yaml&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;It has the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;trigger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;master&lt;/span&gt;

&lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vmImage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ubuntu-latest'&lt;/span&gt;

&lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-variable-group&lt;/span&gt;

&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo Hello, world!&lt;/span&gt;
  &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Run&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;a&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;one-line&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;script'&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;echo Var1: $(var1)&lt;/span&gt;
    &lt;span class="s"&gt;echo Var2: $(var2)&lt;/span&gt;
  &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Run&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;a&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;multi-line&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;script'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you run the Terraform script, the corresponding Azure DevOps resources will be created: a variable group and a pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eHBe6Hzl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/library_vargroup.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eHBe6Hzl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/library_vargroup.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Variable Group&lt;/p&gt;

&lt;p&gt;If you push the build YAML file to the repo, the pipeline will be executed and you should see the values of the two variables in the output on the build console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Jx5rsHSt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/pipeline_vargroup.png%3Fw%3D730" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Jx5rsHSt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/pipeline_vargroup.png%3Fw%3D730" alt=""&gt;&lt;/a&gt;Output of the variables from the variable group&lt;/p&gt;

&lt;h2&gt;
  Example 3: Using an Azure KeyVault and Azure DevOps Service Connections
&lt;/h2&gt;

&lt;p&gt;For security reasons, critical values are neither stored directly in a pipeline definition nor in Azure DevOps variable groups. You would normally use an external vault like &lt;em&gt;Azure KeyVault&lt;/em&gt;. Fortunately, with Azure DevOps you have the possibility to access an existing Azure KeyVault directly and access secrets which are then made available as variables within your build pipeline.&lt;/p&gt;

&lt;p&gt;Of course, Azure DevOps must be authenticated/authorized against Azure for this. Azure DevOps uses the concept of &lt;a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops&amp;amp;tabs=yaml"&gt;service connections&lt;/a&gt; for this purpose. Service connections are used to access e.g. Bitbucket, GitHub, Jira, Jenkins or – as in our case – Azure. You define an identity – for Azure, a &lt;em&gt;service principal&lt;/em&gt; – that is used by DevOps pipelines to perform various tasks, in our example fetching a secret from a KeyVault.&lt;/p&gt;
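&lt;p&gt;To illustrate where this is heading: in a YAML pipeline, such a service connection can be consumed, for example, by the built-in &lt;em&gt;AzureKeyVault&lt;/em&gt; task. This is only a sketch – the connection and secret names reference the resources we will create in the following steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;steps:
- task: AzureKeyVault@1
  inputs:
    # name of the Azure DevOps service connection created below
    azureSubscription: 'AzureRMConnection'
    KeyVaultName: 'myazdevopskv'
    SecretsFilter: 'kvmysupersecretsecret'

# each fetched secret is afterwards available as a pipeline variable
- script: echo Secret has been fetched from the KeyVault
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;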

&lt;p&gt;To demonstrate this scenario, various things must first be set up on Azure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating an application / service principal in Azure Active Directory, which is used by Azure DevOps for authentication&lt;/li&gt;
&lt;li&gt;Creating an Azure KeyVault (including a resource group)&lt;/li&gt;
&lt;li&gt;Authorizing the service principal to read &lt;code&gt;secrets&lt;/code&gt; from the Azure KeyVault (no write access!)&lt;/li&gt;
&lt;li&gt;Creating a secret that will be used in a variable group / pipeline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the &lt;a href="https://www.terraform.io/docs/providers/azurerm/index.html"&gt;Azure Provider&lt;/a&gt;, Terraform offers the possibility to manage Azure services. We will be using it to create the resources mentioned above.&lt;/p&gt;

&lt;h3&gt;
  AAD Application + Service Principal
&lt;/h3&gt;

&lt;p&gt;First of all, we need a service principal that can be used by Azure DevOps to authenticate against Azure. The corresponding Terraform script looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_client_config"&lt;/span&gt; &lt;span class="s2"&gt;"current"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"azurerm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 2.6.0"&lt;/span&gt;
  &lt;span class="nx"&gt;features&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;key_vault&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;purge_soft_delete_on_destroy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;## Service Principal for DevOps&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azuread_application"&lt;/span&gt; &lt;span class="s2"&gt;"azdevopssp"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"azdevopsterraform"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"random_string"&lt;/span&gt; &lt;span class="s2"&gt;"password"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;length&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;24&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azuread_service_principal"&lt;/span&gt; &lt;span class="s2"&gt;"azdevopssp"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;application_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuread_application&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azdevopssp&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;application_id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azuread_service_principal_password"&lt;/span&gt; &lt;span class="s2"&gt;"azdevopssp"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;service_principal_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuread_service_principal&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azdevopssp&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;random_string&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;
  &lt;span class="nx"&gt;end_date&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2024-12-31T00:00:00Z"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_role_assignment"&lt;/span&gt; &lt;span class="s2"&gt;"contributor"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;principal_id&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuread_service_principal&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azdevopssp&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;scope&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/subscriptions/${data.azurerm_client_config.current.subscription_id}"&lt;/span&gt;
  &lt;span class="nx"&gt;role_definition_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Contributor"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;With the script shown above, both an AAD Application and a service principal are generated. Please note that the service principal is assigned the role &lt;code&gt;Contributor&lt;/code&gt; – on subscription level, see the &lt;code&gt;scope&lt;/code&gt; assignment. This should be restricted accordingly in your own projects (e.g. to the respective resource group)!&lt;/p&gt;
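&lt;p&gt;Such a restriction could, for instance, look like this (a sketch only, assuming the assignment should be limited to the resource group &lt;code&gt;azurerm_resource_group.rg&lt;/code&gt; created in the next step instead of the whole subscription):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;resource "azurerm_role_assignment" "contributor_rg" {
  principal_id         = azuread_service_principal.azdevopssp.id
  # scope the assignment to a single resource group instead of the subscription
  scope                = azurerm_resource_group.rg.id
  role_definition_name = "Contributor"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;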

&lt;h3&gt;
  Azure KeyVault
&lt;/h3&gt;

&lt;p&gt;The KeyVault is created the same way as the previous resources. It is important to note that the user working against Azure is given full access to the secrets in the KeyVault. Further down in the script, permissions for the Azure DevOps service principal are also granted within the KeyVault – but in that case only &lt;strong&gt;read permissions&lt;/strong&gt;! Last but not least, a corresponding secret called &lt;code&gt;kvmysupersecretsecret&lt;/code&gt; is created, which we can use to test the integration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_resource_group"&lt;/span&gt; &lt;span class="s2"&gt;"rg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"myazdevops-rg"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"westeurope"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_key_vault"&lt;/span&gt; &lt;span class="s2"&gt;"keyvault"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"myazdevopskv"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;                    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"westeurope"&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;enabled_for_disk_encryption&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;tenant_id&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azurerm_client_config&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tenant_id&lt;/span&gt;
  &lt;span class="nx"&gt;soft_delete_enabled&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;purge_protection_enabled&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

  &lt;span class="nx"&gt;sku_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"standard"&lt;/span&gt;

  &lt;span class="nx"&gt;access_policy&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;tenant_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azurerm_client_config&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tenant_id&lt;/span&gt;
    &lt;span class="nx"&gt;object_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azurerm_client_config&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;object_id&lt;/span&gt;

    &lt;span class="nx"&gt;secret_permissions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="s2"&gt;"backup"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"get"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"list"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"purge"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"recover"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"restore"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"set"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"delete"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;certificate_permissions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;key_permissions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;## Grant DevOps SP permissions&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_key_vault_access_policy"&lt;/span&gt; &lt;span class="s2"&gt;"azdevopssp"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;key_vault_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_key_vault&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;keyvault&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

  &lt;span class="nx"&gt;tenant_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azurerm_client_config&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tenant_id&lt;/span&gt;
  &lt;span class="nx"&gt;object_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuread_service_principal&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azdevopssp&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;object_id&lt;/span&gt;

  &lt;span class="nx"&gt;secret_permissions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"get"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"list"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;## Create a secret&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_key_vault_secret"&lt;/span&gt; &lt;span class="s2"&gt;"mysecret"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;key_vault_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_key_vault&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;keyvault&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kvmysupersecretsecret"&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"KeyVault for the Win!"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you have followed the steps described above, the result in Azure is a newly created KeyVault containing one secret:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kyDdkptU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/keyvault_portal.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kyDdkptU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/keyvault_portal.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Azure KeyVault&lt;/p&gt;

&lt;h3&gt;
  Service Connection
&lt;/h3&gt;

&lt;p&gt;Now we need the integration into Azure DevOps, because we finally want to access the newly created secret in a pipeline. Azure DevOps is able to access a KeyVault and the secrets it contains out of the box. Without Terraform, however, you would have to perform some manual steps to enable access to Azure. Fortunately, these steps can now be automated as well. The following resources are used to create a service connection to Azure in Azure DevOps and to grant access to our project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;## Service Connection&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azuredevops_serviceendpoint_azurerm"&lt;/span&gt; &lt;span class="s2"&gt;"endpointazure"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project_id&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuredevops_project&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;service_endpoint_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AzureRMConnection"&lt;/span&gt;
  &lt;span class="nx"&gt;credentials&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;serviceprincipalid&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuread_service_principal&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azdevopssp&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;application_id&lt;/span&gt;
    &lt;span class="nx"&gt;serviceprincipalkey&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;random_string&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;azurerm_spn_tenantid&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azurerm_client_config&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tenant_id&lt;/span&gt;
  &lt;span class="nx"&gt;azurerm_subscription_id&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;azurerm_client_config&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;subscription_id&lt;/span&gt;
  &lt;span class="nx"&gt;azurerm_subscription_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"dechrist - Microsoft Azure Internal Consumption"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;## Grant permission to use service connection&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azuredevops_resource_authorization"&lt;/span&gt; &lt;span class="s2"&gt;"auth"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project_id&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuredevops_project&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;resource_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuredevops_serviceendpoint_azurerm&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;endpointazure&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;authorized&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; 
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Cqyf-Xlf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/azure_service_connection.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Cqyf-Xlf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/azure_service_connection.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Service Connection&lt;/p&gt;

&lt;h3&gt;
  
  
  Creation of an Azure DevOps variable group and pipeline definition
&lt;/h3&gt;

&lt;p&gt;The last step necessary to use the KeyVault in a pipeline is to create a corresponding variable group and “link” the existing secret.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;## Pipeline with access to kv secret&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azuredevops_variable_group"&lt;/span&gt; &lt;span class="s2"&gt;"kvintegratedvargroup"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project_id&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuredevops_project&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kvintegratedvargroup"&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"KeyVault integrated Variable Group"&lt;/span&gt;
  &lt;span class="nx"&gt;allow_access&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;key_vault&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_key_vault&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;keyvault&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
    &lt;span class="nx"&gt;service_endpoint_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuredevops_serviceendpoint_azurerm&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;endpointazure&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kvmysupersecretsecret"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TkZL9I3s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/keyvault_vargroup_detail.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TkZL9I3s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/keyvault_vargroup_detail.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Variable Group with KeyVault integration&lt;/p&gt;

&lt;h3&gt;
  
  
  Test Pipeline
&lt;/h3&gt;

&lt;p&gt;All prerequisites are now in place, but we still need a pipeline with which we can test the scenario.&lt;/p&gt;

&lt;p&gt;Script for the creation of the pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azuredevops_build_definition"&lt;/span&gt; &lt;span class="s2"&gt;"buildwithkeyvault"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuredevops_project&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Sample Build Pipeline with KeyVault Integration"&lt;/span&gt;

  &lt;span class="nx"&gt;ci_trigger&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;use_yaml&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;variable_groups&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nx"&gt;azuredevops_variable_group&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kvintegratedvargroup&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;repository&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;repo_type&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"TfsGit"&lt;/span&gt;
    &lt;span class="nx"&gt;repo_id&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuredevops_git_repository&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;repo&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
    &lt;span class="nx"&gt;branch_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azuredevops_git_repository&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;repo&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;default_branch&lt;/span&gt;
    &lt;span class="nx"&gt;yml_path&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"azure-pipeline-with-keyvault.yaml"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Pipeline definition (&lt;code&gt;azure-pipeline-with-keyvault.yaml&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;trigger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;master&lt;/span&gt;

&lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vmImage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ubuntu-latest'&lt;/span&gt;

&lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kvintegratedvargroup&lt;/span&gt;

&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo Hello, world!&lt;/span&gt;
  &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Run&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;a&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;one-line&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;script'&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;echo KeyVault secret value: $(kvmysupersecretsecret)&lt;/span&gt;
  &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Run&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;a&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;multi-line&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;script'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you have run the Terraform script and pushed the pipeline file into the repo, you will get the following result in the next build (the secret is &lt;em&gt;not&lt;/em&gt; shown in the console for security reasons, of course!):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sPWNbzbH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/keyvault_pipeline_finished.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sPWNbzbH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/08/keyvault_pipeline_finished.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Output: KeyVault integrated variable group&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap-Up
&lt;/h2&gt;

&lt;p&gt;Setting up new Azure DevOps projects was not always the easiest task, as some steps had to be performed manually. With the release of the first Terraform provider version for Azure DevOps, this has changed dramatically :) As one of the last building blocks for automation in a dev project, you can now create many things in Azure DevOps via Terraform. The example shown here automated access to an Azure KeyVault, including the creation of the corresponding service connection – frankly, a task that “annoyed” me every now and then, as most of it had to be set up manually before there was a Terraform provider. The provider can also manage branch policies, set up groups and group memberships, and more. With this first release you are still “at the beginning of the journey”, but in my opinion it is a “very good start” with which you can already achieve a lot.&lt;/p&gt;

&lt;p&gt;I am curious what will be supported next!&lt;/p&gt;

&lt;p&gt;Sample files can be found here: &lt;a href="https://gist.github.com/cdennig/4866a74b341a0079b5a59052fa735dbc"&gt;https://gist.github.com/cdennig/4866a74b341a0079b5a59052fa735dbc&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>azure</category>
      <category>devops</category>
      <category>iac</category>
    </item>
    <item>
      <title>Release to Kubernetes like a Pro with Flagger</title>
      <dc:creator>Christian Dennig</dc:creator>
      <pubDate>Wed, 08 Jul 2020 13:08:16 +0000</pubDate>
      <link>https://forem.com/cdennig/release-to-kubernetes-like-a-pro-with-flagger-1igf</link>
      <guid>https://forem.com/cdennig/release-to-kubernetes-like-a-pro-with-flagger-1igf</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;When it comes to running applications on Kubernetes in production, you will sooner or later face the challenge of updating your services with a minimum of downtime for your users – and, at least as important, of being able to release new versions of your application with confidence. That means you discover unhealthy and “faulty” services very quickly and are able to roll back to previous versions without much effort.&lt;/p&gt;

&lt;p&gt;When you search the internet for best practices or Kubernetes addons that help you with these challenges, you will stumble upon &lt;em&gt;&lt;a href="https://www.weave.works/oss/flagger/"&gt;Flagger&lt;/a&gt;&lt;/em&gt;, as I did, from &lt;a href="https://www.weave.works/"&gt;WeaveWorks&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Flagger&lt;/em&gt; is basically a controller that is installed in your Kubernetes cluster. It helps you with canary and A/B releases of your services by handling all the hard stuff: automatically creating services and deployments for your “canaries”, shifting traffic to them over time and rolling back deployments in case of errors.&lt;/p&gt;

&lt;p&gt;As if that wasn’t good enough, &lt;em&gt;Flagger&lt;/em&gt; also works in combination with popular Service Meshes like &lt;em&gt;Istio&lt;/em&gt; and &lt;em&gt;Linkerd&lt;/em&gt;. If you don’t want to use &lt;em&gt;Flagger&lt;/em&gt; with such a product, you can also use it on “plain” Kubernetes, e.g. in combination with an &lt;em&gt;NGINX&lt;/em&gt; ingress controller. Many choices here…&lt;/p&gt;

&lt;p&gt;I like &lt;em&gt;linkerd&lt;/em&gt; very much, so I’ll choose that one in combination with Flagger to demonstrate a few of the possibilities you have when releasing new versions of your application/services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;h3&gt;
  
  
  linkerd
&lt;/h3&gt;

&lt;p&gt;I already set up a plain Kubernetes cluster on Azure for this sample, so I’ll start by adding &lt;em&gt;linkerd&lt;/em&gt; to it (you can find a complete guide on how to install &lt;em&gt;linkerd&lt;/em&gt; and the CLI at &lt;a href="https://linkerd.io/2/getting-started/"&gt;https://linkerd.io/2/getting-started/&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;linkerd &lt;span class="nb"&gt;install&lt;/span&gt; | kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; -
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After the command has finished, let’s check if everything works as expected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;linkerd check &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; linkerd get deployments
&lt;span class="o"&gt;[&lt;/span&gt;...]
&lt;span class="o"&gt;[&lt;/span&gt;...]
control-plane-version
&lt;span class="nt"&gt;---------------------&lt;/span&gt;
√ control plane is up-to-date
√ control plane and cli versions match

Status check results are √
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
flagger                  1/1     1            1           3h12m
linkerd-controller       1/1     1            1           3h14m
linkerd-destination      1/1     1            1           3h14m
linkerd-grafana          1/1     1            1           3h14m
linkerd-identity         1/1     1            1           3h14m
linkerd-prometheus       1/1     1            1           3h14m
linkerd-proxy-injector   1/1     1            1           3h14m
linkerd-sp-validator     1/1     1            1           3h14m
linkerd-tap              1/1     1            1           3h14m
linkerd-web              1/1     1            1           3h14m
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you want to open the linkerd dashboard and see the current state of your service mesh, execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;linkerd dashboard
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After a few seconds, the dashboard will be shown in your browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--i_-jiue1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/01/screenshot-2020-01-07-at-11.49.19.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i_-jiue1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/01/screenshot-2020-01-07-at-11.49.19.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Microsoft Teams Integration
&lt;/h3&gt;

&lt;p&gt;For alerting and notification, we want to leverage the MS Teams integration of &lt;em&gt;Flagger&lt;/em&gt; to get notified each time a new deployment is triggered or a canary release is “promoted” to be the new primary release.&lt;/p&gt;

&lt;p&gt;Therefore, we need to set up an incoming webhook in an MS Teams channel:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In Teams, choose  &lt;strong&gt;More options&lt;/strong&gt;  ( &lt;strong&gt;⋯&lt;/strong&gt; ) next to the channel name you want to use and then choose  &lt;strong&gt;Connectors&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Scroll through the list of Connectors to  &lt;strong&gt;Incoming Webhook&lt;/strong&gt; , and choose  &lt;strong&gt;Add&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Enter a name for the webhook, upload an image and choose  &lt;strong&gt;Create&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Copy the &lt;strong&gt;webhook URL&lt;/strong&gt;. You’ll need it when adding Flagger in the next section.&lt;/li&gt;
&lt;li&gt;Choose  &lt;strong&gt;Done&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RLd6d_X_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/01/screenshot-2020-01-08-at-21.05.47.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RLd6d_X_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/01/screenshot-2020-01-08-at-21.05.47.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install Flagger
&lt;/h2&gt;

&lt;p&gt;Time to add Flagger to your cluster. For this, we will be using Helm (version 3, so no need for a Tiller deployment upfront).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;helm repo add flagger https://flagger.app

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml

&lt;span class="o"&gt;[&lt;/span&gt;...]

&lt;span class="nv"&gt;$ &lt;/span&gt;helm upgrade &lt;span class="nt"&gt;-i&lt;/span&gt; flagger flagger/flagger &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;linkerd &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--set&lt;/span&gt; crd.create&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;meshProvider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;linkerd &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;metricsServer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://linkerd-prometheus:9090 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--set&lt;/span&gt; msteams.url&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;YOUR_TEAMS_WEBHOOK_URL&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Check, if everything has been installed correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; linkerd &lt;span class="nt"&gt;-l&lt;/span&gt; app.kubernetes.io/instance&lt;span class="o"&gt;=&lt;/span&gt;flagger

NAME                       READY   STATUS    RESTARTS   AGE
flagger-7df95884bc-tpc5b   1/1     Running   0          0h3m
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Great, looks good. Now that &lt;em&gt;Flagger&lt;/em&gt; has been installed, let’s have a look at where it will help us and what kind of objects will be created for canary analysis and promotion. Remember that we use &lt;em&gt;linkerd&lt;/em&gt; in this sample, so all objects and features discussed in the following section are only relevant for &lt;em&gt;linkerd&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Flagger works
&lt;/h2&gt;

&lt;p&gt;The sample application we will be deploying shortly consists of a VueJS Single Page Application that displays quotes from the Star Wars movies – and it is able to request the quotes in a loop (to put some load on the service). When requesting a quote, the web application talks to a service (&lt;em&gt;proxy&lt;/em&gt;) within the Kubernetes cluster, which in turn talks to another service (&lt;em&gt;quotesbackend&lt;/em&gt;) that is responsible for creating the quote (simulating service-to-service calls in the cluster). The SPA as well as the proxy are accessible through an NGINX ingress controller.&lt;/p&gt;

&lt;p&gt;After the application has been successfully deployed, we also add a canary object which takes care of the promotion of a new revision of our backend deployment. The &lt;em&gt;Canary&lt;/em&gt; object will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flagger.app/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Canary&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quotesbackend&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;targetRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quotesbackend&lt;/span&gt;
  &lt;span class="na"&gt;progressDeadlineSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;60&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
  &lt;span class="na"&gt;analysis&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;20s&lt;/span&gt;
    &lt;span class="c1"&gt;# max number of failed metric checks before rollback&lt;/span&gt;
    &lt;span class="na"&gt;threshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
    &lt;span class="c1"&gt;# max traffic percentage routed to canary&lt;/span&gt;
    &lt;span class="c1"&gt;# percentage (0-100)&lt;/span&gt;
    &lt;span class="na"&gt;maxWeight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;70&lt;/span&gt;
    &lt;span class="na"&gt;stepWeight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
    &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;request-success-rate&lt;/span&gt;
      &lt;span class="c1"&gt;# minimum req success rate (non 5xx responses)&lt;/span&gt;
      &lt;span class="c1"&gt;# percentage (0-100)&lt;/span&gt;
      &lt;span class="na"&gt;threshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;99&lt;/span&gt;
      &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1m&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;request-duration&lt;/span&gt;
      &lt;span class="c1"&gt;# maximum req duration P99&lt;/span&gt;
      &lt;span class="c1"&gt;# milliseconds&lt;/span&gt;
      &lt;span class="na"&gt;threshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;500&lt;/span&gt;
      &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;What this configuration basically does is watch for new revisions of the &lt;em&gt;quotesbackend&lt;/em&gt; deployment. When a new revision appears, Flagger starts a canary deployment for it: every 20s, it increases the weight of the traffic split by 10% until it reaches 70%. If no errors occur during the promotion, the new revision is scaled up to 100% and the old version is scaled down to zero, making the &lt;em&gt;canary&lt;/em&gt; the new &lt;em&gt;primary&lt;/em&gt;. Flagger monitors the &lt;strong&gt;request success rate&lt;/strong&gt; and the &lt;strong&gt;request duration&lt;/strong&gt; (&lt;em&gt;linkerd&lt;/em&gt; Prometheus metrics). If one of them drops below the threshold set in the &lt;em&gt;Canary&lt;/em&gt; object, a rollback to the old version is started and the new deployment is scaled back to zero pods.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EP3FoKUF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/01/canary.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EP3FoKUF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/01/canary.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To perform the analysis described above, &lt;em&gt;Flagger&lt;/em&gt; creates several new objects for us:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;backend-primary &lt;em&gt;deployment&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;backend-primary &lt;em&gt;service&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;backend-canary &lt;em&gt;service&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;SMI / linkerd &lt;em&gt;traffic split&lt;/em&gt; configuration&lt;/li&gt;
&lt;/ul&gt;
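&lt;p&gt;For illustration, the &lt;em&gt;traffic split&lt;/em&gt; that Flagger maintains can look roughly like the following sketch of an SMI &lt;code&gt;TrafficSplit&lt;/code&gt; – note that Flagger creates and updates this object itself, and the weights shown are just an assumed snapshot during an analysis run:&lt;/p&gt;

```yaml
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: quotesbackend
  namespace: quotes
spec:
  # the apex service that clients talk to
  service: quotesbackend
  backends:
  # 90% of the traffic still goes to the current primary...
  - service: quotesbackend-primary
    weight: 900m
  # ...while 10% is routed to the canary under test
  - service: quotesbackend-canary
    weight: 100m
```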

&lt;p&gt;The resulting architecture will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XZMwrux1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/01/20200107_112344180_ios.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XZMwrux1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/01/20200107_112344180_ios.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, enough of theory, let’s see how Flagger works with the sample app mentioned above.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sample App Deployment
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want to follow the sample on your machine, you can find all the code snippets, deployment manifests etc. on &lt;a href="https://github.com/cdennig/flagger-linkerd-canary"&gt;https://github.com/cdennig/flagger-linkerd-canary&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;First, we will deploy the application in a basic version. This includes the &lt;em&gt;backend&lt;/em&gt; and &lt;em&gt;frontend&lt;/em&gt; components as well as an Ingress Controller which we can use to route traffic into the cluster (to the SPA app + backend services). We will be using the NGINX ingress controller for that.&lt;/p&gt;

&lt;p&gt;To get started, let’s create a namespace for the application and deploy the ingress controller:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create ns quotes

&lt;span class="c"&gt;# Enable linkerd integration with the namespace&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl annotate ns quotes linkerd.io/inject&lt;span class="o"&gt;=&lt;/span&gt;enabled

&lt;span class="c"&gt;# Deploy ingress controller&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create ns ingress
&lt;span class="nv"&gt;$ &lt;/span&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;my-ingress ingress-nginx/ingress-nginx &lt;span class="nt"&gt;-n&lt;/span&gt; ingress
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Please note&lt;/strong&gt; that we annotate the &lt;em&gt;quotes&lt;/em&gt; namespace so that the &lt;em&gt;Linkerd&lt;/em&gt; sidecar is automatically injected at deployment time. Any pod created within this namespace will be part of the service mesh and controlled via Linkerd.&lt;/p&gt;
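
&lt;p&gt;By the way: you can verify later that the injection worked. A &lt;em&gt;READY&lt;/em&gt; column showing “2/2” indicates that the &lt;code&gt;linkerd-proxy&lt;/code&gt; sidecar runs next to the application container – a quick sketch, assuming the app has already been deployed to the &lt;em&gt;quotes&lt;/em&gt; namespace:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# READY "2/2" means: application container + linkerd-proxy sidecar
$ kubectl get pods -n quotes

# ...or list the container names explicitly
$ kubectl get pods -n quotes -o jsonpath='{.items[*].spec.containers[*].name}'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;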

&lt;p&gt;As soon as the first part is finished, let’s get the &lt;strong&gt;public IP of the ingress controller&lt;/strong&gt;. We need this IP address to configure the endpoint the VueJS app calls, which in turn is set in a file called &lt;code&gt;settings.js&lt;/code&gt; of the frontend/Single Page Application pod. This file is referenced when the index.html page is loaded. The file itself is not baked into the Docker &lt;strong&gt;image&lt;/strong&gt;; we mount it at deployment time from a &lt;em&gt;Kubernetes secret&lt;/em&gt; to the appropriate location within the running container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One more thing&lt;/strong&gt;: To have a proper DNS name to call our service (instead of using the plain IP), I chose to use &lt;a href="https://nip.io/"&gt;NIP.io&lt;/a&gt;. The service is dead simple! E.g. you can simply use the DNS name &lt;code&gt;123-456-789-123.nip.io&lt;/code&gt; and it will resolve to the host with IP &lt;code&gt;123.456.789.123&lt;/code&gt;. Nothing to configure, no more editing of /etc/hosts…&lt;/p&gt;

&lt;p&gt;So first, let’s determine the IP address of the ingress controller…&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# get the IP address of the ingress controller...&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; ingress
NAME                                            TYPE           CLUSTER-IP    EXTERNAL-IP    PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                      AGE
my-ingress-ingress-nginx-controller             LoadBalancer   10.0.93.165   52.143.30.72   80:31347/TCP,443:31399/TCP   4d5h
my-ingress-ingress-nginx-controller-admission   ClusterIP      10.0.157.46   &amp;lt;none&amp;gt;         443/TCP                      4d5h

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Please open the file &lt;code&gt;settings_template.js&lt;/code&gt; and adjust the &lt;em&gt;endpoint&lt;/em&gt; property to point to the cluster (in this case, the IP address is &lt;em&gt;52.143.30.72&lt;/em&gt;, so the DNS name will be &lt;em&gt;52-143-30-72.nip.io&lt;/em&gt;).&lt;/p&gt;
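
&lt;p&gt;For illustration, &lt;code&gt;settings_template.js&lt;/code&gt; could look roughly like this – the &lt;em&gt;endpoint&lt;/em&gt; property is the one mentioned above, the exact shape of the file in the repo may differ:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Sketch of settings_template.js – adjust the host to your own ingress IP
var uisettings = {
  // backend endpoint, pointing to the ingress controller via nip.io
  endpoint: "http://52-143-30-72.nip.io/quotes"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;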

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DtvoWjMl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/06/settings.png%3Fw%3D1000" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DtvoWjMl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/06/settings.png%3Fw%3D1000" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we need to add the corresponding Kubernetes secret for the settings file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create secret generic uisettings &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;settings.js&lt;span class="o"&gt;=&lt;/span&gt;./settings_template.js &lt;span class="nt"&gt;-n&lt;/span&gt; quotes
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As mentioned above, this secret will be mounted to a special location in the running container. Here’s the deployment file for the frontend – please see the sections for &lt;em&gt;volumes&lt;/em&gt; and &lt;em&gt;volumeMounts&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quotesfrontend&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quotesfrontend&lt;/span&gt;
        &lt;span class="na"&gt;quotesapp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
        &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;minReadySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RollingUpdate&lt;/span&gt;
    &lt;span class="na"&gt;rollingUpdate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;maxSurge&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;maxUnavailable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quotesfrontend&lt;/span&gt;
        &lt;span class="na"&gt;quotesapp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
        &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quotesfrontend&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;csaocpger/quotesfrontend:4&lt;/span&gt;
        &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/usr/share/nginx/html/settings"&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;uisettings&lt;/span&gt;
            &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;uisettings&lt;/span&gt;
        &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;uisettings&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Last but not least, we also need to adjust the ingress definition to work with the DNS hostname. Open the file &lt;em&gt;ingress.yaml&lt;/em&gt; and adjust the hostnames for the two ingress definitions. In this case, the resulting manifest looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NwE5sp3i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/06/ingress.png%3Fw%3D862" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NwE5sp3i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/06/ingress.png%3Fw%3D862" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we are set to deploy the whole application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; base-backend-infra.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; quotes
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; base-backend-app.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; quotes
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; base-frontend-app.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; quotes
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ingress.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; quotes
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After a few seconds, you should be able to point your browser to the hostname and see the “Quotes App”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d-ed3ffk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/06/1-base.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d-ed3ffk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/06/1-base.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Basic Quotes app&lt;/p&gt;

&lt;p&gt;If you click on the “Load new Quote” button, the SPA will call the backend (here: &lt;a href="http://52-143-30-72.nip.io/quotes"&gt;http://52-143-30-72.nip.io/quotes&lt;/a&gt;), request a new “Star Wars” quote and show the result of the API Call in the box at the bottom. You can also request quotes in a loop – we will need that later to simulate load.&lt;/p&gt;

&lt;h3&gt;
  
  
  Flagger Canary Settings
&lt;/h3&gt;

&lt;p&gt;We need to configure Flagger and make it aware of our deployment – remember, we only target the backend API that serves the quotes.&lt;/p&gt;

&lt;p&gt;Therefore, we deploy the canary configuration (&lt;em&gt;canary.yaml&lt;/em&gt; file) discussed before:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; canary.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; quotes
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
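
&lt;p&gt;The essential parts of that canary definition can be sketched as follows – the values are reconstructed from the behavior described in this article (advance by 10% every 20 seconds, promote at 70%, require a 99% success rate, roll back after 5 failed checks), so treat this as an illustration rather than the exact file from the repo:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: quotesbackend
  namespace: quotes
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: quotesbackend
  service:
    port: 3000
  analysis:
    interval: 20s   # evaluate metrics every 20 seconds
    threshold: 5    # roll back after 5 failed checks
    maxWeight: 70   # promote once 70% of traffic hits the canary
    stepWeight: 10  # increase canary traffic by 10% per step
    metrics:
    - name: request-success-rate
      thresholdRange:
        min: 99     # at least 99% of requests must succeed
      interval: 1m
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;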



&lt;p&gt;Wait a few seconds, then check the services, deployments and pods to see if everything has been installed correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; quotes

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;    AGE
quotesbackend           ClusterIP   10.0.64.206    &amp;lt;none&amp;gt;        3000/TCP   51m
quotesbackend-canary    ClusterIP   10.0.94.94     &amp;lt;none&amp;gt;        3000/TCP   70s
quotesbackend-primary   ClusterIP   10.0.219.233   &amp;lt;none&amp;gt;        3000/TCP   70s
quotesfrontend          ClusterIP   10.0.111.86    &amp;lt;none&amp;gt;        80/TCP     12m
quotesproxy             ClusterIP   10.0.57.46     &amp;lt;none&amp;gt;        80/TCP     51m

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-n&lt;/span&gt; quotes
NAME                                     READY   STATUS    RESTARTS   AGE
quotesbackend-primary-7c6b58d7c9-l8sgc   2/2     Running   0          64s
quotesfrontend-858cd446f5-m6t97          2/2     Running   0          12m
quotesproxy-75fcc6b6c-6wmfr              2/2     Running   0          43m

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get deploy &lt;span class="nt"&gt;-n&lt;/span&gt; quotes
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
quotesbackend           0/0     0            0           50m
quotesbackend-primary   1/1     1            1           64s
quotesfrontend          1/1     1            1           12m
quotesproxy             1/1     1            1           43m
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That looks good! Flagger has created new services, deployments and pods for us to be able to control how traffic will be directed to existing/new versions of our “quotes” backend. You can also check the canary definition in Kubernetes, if you want:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe canaries &lt;span class="nt"&gt;-n&lt;/span&gt; quotes

Name:         quotesbackend
Namespace:    quotes
Labels:       &amp;lt;none&amp;gt;
Annotations:  API Version:  flagger.app/v1beta1
Kind:         Canary
Metadata:
  Creation Timestamp:  2020-06-06T13:17:59Z
  Generation:          1
  Managed Fields:
    API Version:  flagger.app/v1beta1
&lt;span class="o"&gt;[&lt;/span&gt;...]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You will also receive a notification in Teams, that a new deployment for Flagger has been detected and initialized:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AfnrT6Cp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/07/1-canary.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AfnrT6Cp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/07/1-canary.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Kick-Off a new deployment
&lt;/h2&gt;

&lt;p&gt;Now comes the part where Flagger really shines. We want to deploy a new version of the backend quote API – switching from “Star &lt;strong&gt;Wars&lt;/strong&gt;” quotes to “Star &lt;strong&gt;Trek&lt;/strong&gt;” quotes! What will happen is the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;as soon as we deploy a new “quotesbackend”, Flagger will recognize it&lt;/li&gt;
&lt;li&gt;new versions will be deployed, but no traffic will be directed to them at the beginning&lt;/li&gt;
&lt;li&gt;after some time, Flagger will start to redirect traffic via Linkerd / TrafficSplit configurations to the new version via the canary service, starting – according to our canary definition – at a rate of 10%. So 90% of the traffic will still hit our “Star Wars” quotes&lt;/li&gt;
&lt;li&gt;it will monitor the request success rate and advance the rate by 10% every 20 seconds&lt;/li&gt;
&lt;li&gt;if the 70% traffic split is reached without a significant number of errors, the deployment will be scaled up to 100% and promoted as the “new primary”&lt;/li&gt;
&lt;/ul&gt;
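
&lt;p&gt;The traffic shifting described above is expressed via an SMI &lt;code&gt;TrafficSplit&lt;/code&gt; resource that Flagger maintains for Linkerd. At the first step it could look roughly like this – a sketch with the 90/10 split from the list above; the exact API version and field values in your cluster may differ:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: quotesbackend
  namespace: quotes
spec:
  service: quotesbackend            # apex service that clients call
  backends:
  - service: quotesbackend-primary
    weight: 90                      # 90% still hits the current version
  - service: quotesbackend-canary
    weight: 10                      # 10% goes to the new canary version
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;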

&lt;p&gt;&lt;strong&gt;Before&lt;/strong&gt; we deploy it, let’s request new quotes in a loop (set the frequency e.g. to 300ms via the slider and press “Load in Loop”).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X3gEnbK4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/07/1-base.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X3gEnbK4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/07/1-base.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Base deployment: Load quotes in a loop.&lt;/p&gt;

&lt;p&gt;Then, deploy the new version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; st-backend-app.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; quotes

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe canaries quotesbackend &lt;span class="nt"&gt;-n&lt;/span&gt; quotes
&lt;span class="o"&gt;[&lt;/span&gt;...]
&lt;span class="o"&gt;[&lt;/span&gt;...]
Events:
  Type     Reason  Age                   From     Message
  &lt;span class="nt"&gt;----&lt;/span&gt;     &lt;span class="nt"&gt;------&lt;/span&gt;  &lt;span class="nt"&gt;----&lt;/span&gt;                  &lt;span class="nt"&gt;----&lt;/span&gt;     &lt;span class="nt"&gt;-------&lt;/span&gt;
  Warning  Synced  14m                   flagger  quotesbackend-primary.quotes not ready: waiting &lt;span class="k"&gt;for &lt;/span&gt;rollout to finish: observed deployment generation less &lt;span class="k"&gt;then &lt;/span&gt;desired generation
  Normal   Synced  14m                   flagger  Initialization &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt; quotesbackend.quotes
  Normal   Synced  4m7s                  flagger  New revision detected! Scaling up quotesbackend.quotes
  Normal   Synced  3m47s                 flagger  Starting canary analysis &lt;span class="k"&gt;for &lt;/span&gt;quotesbackend.quotes
  Normal   Synced  3m47s                 flagger  Advance quotesbackend.quotes canary weight 10
  Warning  Synced  3m7s &lt;span class="o"&gt;(&lt;/span&gt;x2 over 3m27s&lt;span class="o"&gt;)&lt;/span&gt;  flagger  Halt advancement no values found &lt;span class="k"&gt;for &lt;/span&gt;linkerd metric request-success-rate probably quotesbackend.quotes is not receiving traffic: running query failed: no values found
  Normal   Synced  2m47s                 flagger  Advance quotesbackend.quotes canary weight 20
  Normal   Synced  2m27s                 flagger  Advance quotesbackend.quotes canary weight 30
  Normal   Synced  2m7s                  flagger  Advance quotesbackend.quotes canary weight 40
  Normal   Synced  107s                  flagger  Advance quotesbackend.quotes canary weight 50
  Normal   Synced  87s                   flagger  Advance quotesbackend.quotes canary weight 60
  Normal   Synced  67s                   flagger  Advance quotesbackend.quotes canary weight 70
  Normal   Synced  7s &lt;span class="o"&gt;(&lt;/span&gt;x3 over 47s&lt;span class="o"&gt;)&lt;/span&gt;      flagger  &lt;span class="o"&gt;(&lt;/span&gt;combined from similar events&lt;span class="o"&gt;)&lt;/span&gt;: Promotion completed! Scaling down quotesbackend.quotes
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You will notice in the UI that every now and then a quote from “Star Trek” appears… and that the frequency increases every 20 seconds as the canary deployment receives more traffic over time. As stated above, when the traffic split reaches 70% and no errors have occurred in the meantime, the “canary/new version” is promoted to the “new primary version” of the quotes backend. From that point on, you will only receive quotes from “Star Trek”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kGZg6Auz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/07/3-canary.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kGZg6Auz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/07/3-canary.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Canary deployment: new quotes backend servicing “Star Trek” quotes.&lt;/p&gt;

&lt;p&gt;Because of the Teams integration, we also get a notification that a new version is being rolled out and – after the promotion to “primary” – that the rollout has finished successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ba7iieqn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/07/2-canary.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ba7iieqn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/07/2-canary.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Starting a new version rollout with Flagger&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AJDoPaXZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/07/4-canary.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AJDoPaXZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/07/4-canary.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Finished rollout with Flagger&lt;/p&gt;

&lt;h3&gt;
  
  
  What happens when errors occur?
&lt;/h3&gt;

&lt;p&gt;So far, we have been following the “happy path”… but what happens if there are errors during the rollout of a new canary version? Let’s say we have introduced a bug in our new service so that requesting a new quote from the backend throws an error. Let’s see how &lt;em&gt;Flagger&lt;/em&gt; behaves then…&lt;/p&gt;

&lt;p&gt;The version we are about to deploy will start throwing errors after a certain amount of time. Because Flagger monitors the “request success rate” via Linkerd metrics, it will notice that something is “not the way it is supposed to be”, stop the promotion of the new, error-prone version, scale it back to zero pods and keep the current primary backend (meaning: “Star Trek” quotes) in place.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; error-backend-app.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; quotes

&lt;span class="nv"&gt;$ &lt;/span&gt;k describe canaries.flagger.app quotesbackend  
&lt;span class="o"&gt;[&lt;/span&gt;...]
Events:
  Type     Reason  Age                    From     Message
  &lt;span class="nt"&gt;----&lt;/span&gt;     &lt;span class="nt"&gt;------&lt;/span&gt;  &lt;span class="nt"&gt;----&lt;/span&gt;                   &lt;span class="nt"&gt;----&lt;/span&gt;     &lt;span class="nt"&gt;-------&lt;/span&gt;
  Warning  Synced  23m                    flagger  quotesbackend-primary.quotes not ready: waiting &lt;span class="k"&gt;for &lt;/span&gt;rollout to finish: observed deployment generation less &lt;span class="k"&gt;then &lt;/span&gt;desired generation
  Normal   Synced  23m                    flagger  Initialization &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt; quotesbackend.quotes
  Normal   Synced  13m                    flagger  New revision detected! Scaling up quotesbackend.quotes
  Normal   Synced  11m                    flagger  Advance quotesbackend.quotes canary weight 20
  Normal   Synced  11m                    flagger  Advance quotesbackend.quotes canary weight 30
  Normal   Synced  11m                    flagger  Advance quotesbackend.quotes canary weight 40
  Normal   Synced  10m                    flagger  Advance quotesbackend.quotes canary weight 50
  Normal   Synced  10m                    flagger  Advance quotesbackend.quotes canary weight 60
  Normal   Synced  10m                    flagger  Advance quotesbackend.quotes canary weight 70
  Normal   Synced  3m43s &lt;span class="o"&gt;(&lt;/span&gt;x4 over 9m43s&lt;span class="o"&gt;)&lt;/span&gt;  flagger  &lt;span class="o"&gt;(&lt;/span&gt;combined from similar events&lt;span class="o"&gt;)&lt;/span&gt;: New revision detected! Scaling up quotesbackend.quotes
  Normal   Synced  3m23s &lt;span class="o"&gt;(&lt;/span&gt;x2 over 12m&lt;span class="o"&gt;)&lt;/span&gt;    flagger  Advance quotesbackend.quotes canary weight 10
  Normal   Synced  3m23s &lt;span class="o"&gt;(&lt;/span&gt;x2 over 12m&lt;span class="o"&gt;)&lt;/span&gt;    flagger  Starting canary analysis &lt;span class="k"&gt;for &lt;/span&gt;quotesbackend.quotes
  Warning  Synced  2m43s &lt;span class="o"&gt;(&lt;/span&gt;x4 over 12m&lt;span class="o"&gt;)&lt;/span&gt;    flagger  Halt advancement no values found &lt;span class="k"&gt;for &lt;/span&gt;linkerd metric request-success-rate probably quotesbackend.quotes is not receiving traffic: running query failed: no values found
  Warning  Synced  2m3s &lt;span class="o"&gt;(&lt;/span&gt;x2 over 2m23s&lt;span class="o"&gt;)&lt;/span&gt;   flagger  Halt quotesbackend.quotes advancement success rate 0.00% &amp;lt; 99%
  Warning  Synced  103s                   flagger  Halt quotesbackend.quotes advancement success rate 50.00% &amp;lt; 99%
  Warning  Synced  83s                    flagger  Rolling back quotesbackend.quotes failed checks threshold reached 5
  Warning  Synced  81s                    flagger  Canary failed! Scaling down quotesbackend.quotes

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As you can see in the event log, the success rate drops significantly, so Flagger halts the promotion of the new version, scales it down to zero pods and keeps the current version as the “primary” backend.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vHzaXIdf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/07/6-error-canary.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vHzaXIdf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/07/6-error-canary.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;New backend version throwing errors&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uBwSK4uC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/07/7-error-canary.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uBwSK4uC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/07/7-error-canary.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Teams notification: service rollout stopped!&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With this article, I have certainly only covered the features of Flagger in a very brief way. But even this small example shows what a great relief Flagger can be when it comes to the rollout of new Kubernetes deployments. Flagger can do a lot more than shown here, and it is definitely worth taking a look at this product from Weaveworks.&lt;/p&gt;

&lt;p&gt;I hope I could give you some insight and make you want to explore more… have fun with Flagger :)&lt;/p&gt;

&lt;p&gt;As mentioned above, all the sample files, manifests etc. can be found here: &lt;a href="https://github.com/cdennig/flagger-linkerd-canary"&gt;https://github.com/cdennig/flagger-linkerd-canary&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>deployment</category>
      <category>flagger</category>
      <category>kubernetes</category>
      <category>canary</category>
    </item>
    <item>
      <title>WSL2: Making Windows 10 the perfect dev machine!</title>
      <dc:creator>Christian Dennig</dc:creator>
      <pubDate>Fri, 05 Jun 2020 15:28:59 +0000</pubDate>
      <link>https://forem.com/cdennig/wsl2-making-windows-10-the-perfect-dev-machine-391k</link>
      <guid>https://forem.com/cdennig/wsl2-making-windows-10-the-perfect-dev-machine-391k</guid>
      <description>&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt; : I work at Microsoft. And you might think that this makes me a bit biased about the current topic. However, I was an enthusiastic MacOS / MacBook user – both privately and professionally. I work as a so-called “Cloud Solution Architect” in the area of Open Source / Cloud Native Application Development – i.e. everything concerning container technologies, Kubernetes etc. This means that almost every tool you have to deal with is Unix based – or at least, it only works perfectly on that platform. That’s why I early moved to the Apple ecosystem, because it makes your (dev) life so much easier – although you get some &lt;em&gt;not so serious&lt;/em&gt; comments on it at work every now and then :)&lt;/p&gt;

&lt;p&gt;Well, things have changed…&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this article I would like to describe what my current setup of tools and environment looks like on my Windows 10 machine and how to set up the latest version of the Windows Subsystem for Linux 2 (WSL2) optimally – at least for me – when working in the “Cloud Native” domain.&lt;/p&gt;

&lt;p&gt;Long story short…let’s start!&lt;/p&gt;

&lt;h2&gt;
  
  
  Basics
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Windows Subsystem for Linux 2 (WSL2)
&lt;/h3&gt;

&lt;p&gt;The whole story begins with the installation of WSL2, which is now available with the current version of Windows (Windows 10, version 2004, build 19041 or higher). The Linux subsystem has been around for quite a while now, but it was never really usable – at least that was the case for version 1 (in terms of performance, compatibility etc.).&lt;/p&gt;

&lt;p&gt;The bottom line is that WSL2 gives you the ability to run ELF64 Linux binaries on Windows – with 100% system call compatibility and “near-native” performance! The Linux kernel (optimized in size and performance for WSL2) is built by Microsoft from the latest stable branch based on the sources available on “kernel.org”. Updates of the kernel are provided via Windows Update.&lt;/p&gt;

&lt;p&gt;I won’t go into the details of the installation process as you can simply get WSL2 by following this tutorial: &lt;a href="https://docs.microsoft.com/en-us/windows/wsl/install-win10" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/windows/wsl/install-win10&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It comes down to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;installing the Subsystem for Linux&lt;/li&gt;
&lt;li&gt;enabling “Virtual Machine Platform”&lt;/li&gt;
&lt;li&gt;setting WSL2 as the default version&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next step is to install the distribution of your choice…&lt;/p&gt;

&lt;h3&gt;
  
  
  Install Ubuntu 20.04
&lt;/h3&gt;

&lt;p&gt;I decided to use Ubuntu 20.04 LTS, because I already know the distribution well and have used it for private purposes for some time – there are, of course, others: Debian, openSUSE, Kali Linux etc. No matter which one you choose, the installation itself couldn’t be easier: all you have to do is open the Windows Store app, find the desired distribution and click “Install” (or simply click on this link for Ubuntu: &lt;a href="https://www.microsoft.com/store/apps/9n6svws3rx71" rel="noopener noreferrer"&gt;https://www.microsoft.com/store/apps/9n6svws3rx71&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2F1-ubuntu_install.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2F1-ubuntu_install.png%3Fw%3D1024"&gt;&lt;/a&gt;Windows Store Ubuntu 20.04 LTS&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2F2-ubuntu_install.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2F2-ubuntu_install.png%3Fw%3D1024"&gt;&lt;/a&gt;Ubuntu Installation&lt;/p&gt;

&lt;p&gt;Once it is installed, check that the distribution runs under “version 2” of the Subsystem for Linux (we set “version 2” as the default earlier, but just in case…). To do so, open a &lt;strong&gt;Powershell&lt;/strong&gt; prompt and execute the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;C:\&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;wsl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--list&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--verbose&lt;/span&gt;&lt;span class="w"&gt;

  &lt;/span&gt;&lt;span class="n"&gt;NAME&lt;/span&gt;&lt;span class="w"&gt;                   &lt;/span&gt;&lt;span class="nx"&gt;STATE&lt;/span&gt;&lt;span class="w"&gt;           &lt;/span&gt;&lt;span class="nx"&gt;VERSION&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Ubuntu-20.04&lt;/span&gt;&lt;span class="w"&gt;           &lt;/span&gt;&lt;span class="nx"&gt;Running&lt;/span&gt;&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="nx"&gt;2&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;docker-desktop-data&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;Running&lt;/span&gt;&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="nx"&gt;2&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;docker-desktop&lt;/span&gt;&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="nx"&gt;Running&lt;/span&gt;&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="nx"&gt;2&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;Ubuntu-18.04&lt;/span&gt;&lt;span class="w"&gt;           &lt;/span&gt;&lt;span class="nx"&gt;Stopped&lt;/span&gt;&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="nx"&gt;2&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you see “Version 1” for Ubuntu-20.04, please run…&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;C:\&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;wsl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--set-version&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Ubuntu-20.04&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;2&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will convert the distribution to be able to run in WSL2 mode (grab yourself a coffee, the conversion takes some time ;)).&lt;/p&gt;

&lt;h3&gt;
  
  
  Windows Terminal
&lt;/h3&gt;

&lt;p&gt;Next, you need a modern, feature-rich and lightweight terminal. Fortunately, Microsoft also delivered on this: the &lt;a href="https://github.com/microsoft/terminal" rel="noopener noreferrer"&gt;Open Source Windows Terminal&lt;/a&gt;.  It includes many of the features most frequently requested by the Windows command-line community including support for tabs, rich text, globalization, configurability, theming &amp;amp; styling etc.&lt;/p&gt;

&lt;p&gt;The installation is also done via the Windows Store: &lt;a href="https://www.microsoft.com/store/productId/9N0DX20HK701" rel="noopener noreferrer"&gt;https://www.microsoft.com/store/productId/9N0DX20HK701&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once it’s on your machine, we can tweak the terminal settings to make Ubuntu 20.04 the default profile. To do so, open Windows Terminal and hit “Ctrl+,” (that opens the settings.json file in your default text editor).&lt;/p&gt;

&lt;p&gt;Add the &lt;em&gt;guid&lt;/em&gt; of the Ubuntu 20.04 profile to the “defaultProfile” property:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2F2-1-tweak-term.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2F2-1-tweak-term.png%3Fw%3D1024"&gt;&lt;/a&gt;Default Profile&lt;/p&gt;

&lt;p&gt;Last but not least, we refresh the package index and upgrade all existing packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &amp;amp;&amp;amp; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt upgrade
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, the “basics” are in place now…we have a terminal that’s running Ubuntu Linux in Windows. Next, let’s give it super-powers!&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup / tweak the shell
&lt;/h2&gt;

&lt;p&gt;The software installed from here on is an extract of &lt;strong&gt;what I need&lt;/strong&gt; for my daily work. Your selection may well differ, although I think it covers a lot of what someone in the “Cloud Native” space would install. Nevertheless, it was important for me to list almost everything here, because it also helps me if I have to set up the environment again in the future :)&lt;/p&gt;

&lt;h3&gt;
  
  
  SSH Keys
&lt;/h3&gt;

&lt;p&gt;Since it’s in the nature of a developer to work with GitHub (and other services, of course :)), I first need an SSH key to authenticate against the service. To do this, I create a new key (or copy an existing one to &lt;em&gt;~/.ssh/&lt;/em&gt;), which I then publish to GitHub (via their website).&lt;/p&gt;

&lt;p&gt;At the same time, the key is added to &lt;em&gt;ssh-agent&lt;/em&gt;, so you don’t have to enter the corresponding passphrase every time you use it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ssh-keygen &lt;span class="nt"&gt;-t&lt;/span&gt; rsa &lt;span class="nt"&gt;-b&lt;/span&gt; 4096 &lt;span class="nt"&gt;-C&lt;/span&gt; &lt;span class="s2"&gt;"your_email@example.com"&lt;/span&gt;

&lt;span class="c"&gt;# start the ssh-agent in the background&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;eval&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;ssh-agent &lt;span class="nt"&gt;-s&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Agent pid 59566

&lt;span class="nv"&gt;$ &lt;/span&gt;ssh-add ~/.ssh/id_rsa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Oh My Zsh
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fomz.png%3Fw%3D337" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fomz.png%3Fw%3D337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now comes the best part :) To give the Ubuntu shell (which is &lt;em&gt;bash&lt;/em&gt; by default) real superpowers, I exchange it with &lt;em&gt;&lt;a href="https://www.zsh.org/" rel="noopener noreferrer"&gt;zsh&lt;/a&gt;&lt;/em&gt; in combination with the awesome project &lt;em&gt;&lt;a href="https://github.com/ohmyzsh/ohmyzsh" rel="noopener noreferrer"&gt;Oh My Zsh&lt;/a&gt;&lt;/em&gt; (which provides hundreds of plugins, customizing options, tweaks etc. for it). &lt;em&gt;zsh&lt;/em&gt; is an extended &lt;em&gt;bash&lt;/em&gt; shell which has many improvements and extensions compared to &lt;em&gt;bash&lt;/em&gt;. Among other things, the shell can be themed, the command prompt adjusted, auto-completion can be used etc.&lt;/p&gt;

&lt;p&gt;So, let’s install both:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;git zsh &lt;span class="nt"&gt;-y&lt;/span&gt;

&lt;span class="c"&gt;# After the installation has finished, add OhMyZsh...&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When ready, &lt;em&gt;OhMyZsh&lt;/em&gt; can be customized via the &lt;em&gt;.zshrc&lt;/em&gt; file in your home directory (e.g. enable plugins, set the theme). Here are the settings I usually make:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adjust Theme&lt;/li&gt;
&lt;li&gt;Activate plugins&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s do this step by step…&lt;/p&gt;

&lt;h4&gt;
  
  
  Theme
&lt;/h4&gt;

&lt;p&gt;As theme, I use &lt;em&gt;powerlevel10k&lt;/em&gt; (great stuff!), which you can find &lt;a href="https://github.com/romkatv/powerlevel10k" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fperformance.gif%3Fw%3D884" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fperformance.gif%3Fw%3D884"&gt;&lt;/a&gt;Sample: &lt;em&gt;powerlevel10k&lt;/em&gt; (Source: &lt;a rel="noreferrer noopener" href="https://github.com/romkatv/powerlevel10k"&gt;&lt;/a&gt;&lt;a href="https://github.com/romkatv/powerlevel10k" rel="noopener noreferrer"&gt;https://github.com/romkatv/powerlevel10k&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;The installation is very easy by first cloning the repo to your local machine and then activating the theme in &lt;em&gt;~/.zshrc&lt;/em&gt; (variable &lt;em&gt;ZSH_THEME&lt;/em&gt;, see screenshot below):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH\_CUSTOM:-~/.oh-my-zsh/custom}/themes/powerlevel10k
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
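&lt;p&gt;The relevant line in &lt;em&gt;~/.zshrc&lt;/em&gt; then looks like this:&lt;/p&gt;

```shell
# ~/.zshrc – activate the powerlevel10k theme cloned above
ZSH_THEME="powerlevel10k/powerlevel10k"
```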



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fzsh_theme.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fzsh_theme.png%3Fw%3D1024"&gt;&lt;/a&gt;Adjust theme to use&lt;/p&gt;

&lt;p&gt;The next time you open a new shell, a wizard guides you through all options of the theme and allows you to customize the look&amp;amp;feel of your terminal (if the wizard does not start automatically or you want to restart it, simply run &lt;code&gt;p10k configure&lt;/code&gt; from the command prompt).&lt;/p&gt;

&lt;p&gt;The wizard offers a lot of options. Just find the right setup for you, play around with it a bit and try out one or the other. My setup finally looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fsample.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fsample.png%3Fw%3D1024"&gt;&lt;/a&gt;My &lt;em&gt;powerlevel10k &lt;/em&gt;setup&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Optional, but recommended&lt;/strong&gt;...install the corresponding fonts (and adjust the &lt;em&gt;settings.json&lt;/em&gt; of Windows Terminal to use these, see image below): &lt;a href="https://github.com/romkatv/powerlevel10k#meslo-nerd-font-patched-for-powerlevel10k" rel="noopener noreferrer"&gt;https://github.com/romkatv/powerlevel10k#meslo-nerd-font-patched-for-powerlevel10k&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Ffonts.png%3Fw%3D608" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Ffonts.png%3Fw%3D608"&gt;&lt;/a&gt;Window Terminal settings&lt;/p&gt;

&lt;h3&gt;
  
  
  Plugins
&lt;/h3&gt;

&lt;p&gt;In terms of OhMyZsh plugins, I use the following ones:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;git&lt;/strong&gt; (&lt;em&gt;git&lt;/em&gt; shortcuts, e.g. “&lt;em&gt;gp&lt;/em&gt;” for “&lt;em&gt;git pull&lt;/em&gt;“, “&lt;em&gt;gc&lt;/em&gt;” for “&lt;em&gt;git commit -v&lt;/em&gt;“)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;zsh-autosuggestions / zsh-completions&lt;/strong&gt; (command completion / suggestions)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kubectl&lt;/strong&gt; (&lt;em&gt;kubectl&lt;/em&gt; shortcuts / completion, e.g. “&lt;em&gt;kaf&lt;/em&gt;” for “&lt;em&gt;kubectl apply -f&lt;/em&gt;“, “&lt;em&gt;kgp&lt;/em&gt;” for “&lt;em&gt;kubectl get pods&lt;/em&gt;“, “&lt;em&gt;kgd&lt;/em&gt;” for “&lt;em&gt;kubectl get deployment&lt;/em&gt;” etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ssh-agent&lt;/strong&gt; (starts the ssh agent automatically on startup)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can simply add them by modifying &lt;em&gt;.zshrc&lt;/em&gt; in your home directory:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fzsh_plugins.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fzsh_plugins.png%3Fw%3D1024"&gt;&lt;/a&gt;Activate oh-my-zsh plugins in &lt;em&gt;.zshrc&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Tools
&lt;/h2&gt;

&lt;p&gt;Now comes the setup of the tools that I need and use every day. I will not go into detail about all of them here, because most of them are well known or the installation is incredibly easy. The ones that don’t need much explanation are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Azure CLI&lt;/strong&gt; (&lt;code&gt;curl -sL https://aka.ms/InstallAzureCLIDeb | sudo zsh&lt;/code&gt;) + &lt;strong&gt;kubectl&lt;/strong&gt; (handy installer via Azure CLI: &lt;em&gt;&lt;code&gt;az aks install-cli&lt;/code&gt;&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node Version Manager NVM&lt;/strong&gt; (&lt;code&gt;wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | zsh&lt;/code&gt;) and &lt;strong&gt;NodeJS LTS&lt;/strong&gt; version (current: &lt;code&gt;nvm install v12.18.0&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure Functions Core Tools&lt;/strong&gt; (&lt;code&gt;npm install -g azure-functions-core-tools@3&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt; (&lt;a href="https://www.terraform.io/downloads.html" rel="noopener noreferrer"&gt;https://www.terraform.io/downloads.html&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Helm&lt;/strong&gt; via the &lt;em&gt;get_helm.sh&lt;/em&gt; script (&lt;code&gt;curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3&lt;/code&gt;, then make the script executable and run it)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Go&lt;/strong&gt; (&lt;code&gt;sudo apt install golang&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;.NET Core&lt;/strong&gt; (simply follow &lt;a href="https://docs.microsoft.com/en-us/dotnet/core/install/linux-package-manager-ubuntu-2004" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/dotnet/core/install/linux-package-manager-ubuntu-2004&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;jq&lt;/strong&gt; – work with JSON data in the shell (&lt;a href="https://stedolan.github.io/jq/download/" rel="noopener noreferrer"&gt;https://stedolan.github.io/jq/download/&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
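&lt;p&gt;As a quick taste of &lt;em&gt;jq&lt;/em&gt;: it lets you slice and transform JSON directly in the shell, which is extremely handy in combination with e.g. &lt;code&gt;az&lt;/code&gt; or &lt;code&gt;kubectl&lt;/code&gt; JSON output. The JSON below is just made-up sample data:&lt;/p&gt;

```shell
# extract the names of all "Running" items from a JSON array (sample data)
echo '[{"name":"api","state":"Running"},{"name":"worker","state":"Stopped"}]' \
  | jq -r '.[] | select(.state == "Running") | .name'
# prints: api
```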

&lt;h2&gt;
  
  
  Give me more…!
&lt;/h2&gt;

&lt;p&gt;There are a few tools that I would like to discuss in more detail, as they are not necessarily widely used and known. These are mainly tools for working with Kubernetes/Docker – this is exactly where &lt;em&gt;kubectx/kubens&lt;/em&gt; and &lt;em&gt;stern&lt;/em&gt; come in. &lt;em&gt;Docker for Windows&lt;/em&gt; and &lt;em&gt;Visual Studio Code&lt;/em&gt; are certainly well known to everyone and familiar from daily work. The reason why I want to talk about the latter two is that they meanwhile tightly integrate with WSL2!&lt;/p&gt;

&lt;h3&gt;
  
  
  kubectx / kubens
&lt;/h3&gt;

&lt;p&gt;Who doesn’t know it? You work with Kubernetes and have to switch between clusters and/or namespaces all the time…forgetting the appropriate commands to set the context correctly and typing yourself “to death”. This is where the tools &lt;em&gt;kubectx&lt;/em&gt; and &lt;em&gt;kubens&lt;/em&gt; come in and help you to switch between different clusters and namespaces quickly and easily. I never want to work with a system again where these tools are not installed – honestly. To see &lt;em&gt;kubectx&lt;/em&gt;/&lt;em&gt;kubens&lt;/em&gt; in action, here are the samples from their &lt;a href="https://github.com/ahmetb/kubectx" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; repo:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fkubectx-demo.gif%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fkubectx-demo.gif%3Fw%3D1024"&gt;&lt;/a&gt;&lt;em&gt;kubectx &lt;/em&gt;in action&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fkubens-demo.gif%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fkubens-demo.gif%3Fw%3D1024"&gt;&lt;/a&gt;&lt;em&gt;kubens &lt;/em&gt;in action&lt;/p&gt;

&lt;p&gt;To install both tools, follow these steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx$ sudo ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx$ sudo ln -s /opt/kubectx/kubens /usr/local/bin/kubensmkdir -p ~/.oh-my-zsh/completionschmod -R 755 ~/.oh-my-zsh/completionsln -s /opt/kubectx/completion/kubectx.zsh ~/.oh-my-zsh/completions/\_kubectx.zshln -s /opt/kubectx/completion/kubens.zsh ~/.oh-my-zsh/completions/\_kubens.zsh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To be able to work with the autocompletion features of these tools, you need to add the following line at the end of your &lt;em&gt;.zshrc&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;autoload -U compinit &amp;amp;&amp;amp; compinit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Congrats, productivity gain: 100% :)&lt;/p&gt;

&lt;h3&gt;
  
  
  stern
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://github.com/wercker/stern" rel="noopener noreferrer"&gt;stern&lt;/a&gt;&lt;/em&gt; allows you to output the logs of multiple Pods simultaneously to the local command line. In Kubernetes it is normal to have many services running at the same time that communicate with each other. It is sometimes difficult to follow a call through the cluster. With stern, this becomes relatively easy, because you can select the pods you want to follow by e.g. label selectors.&lt;/p&gt;

&lt;p&gt;With the command &lt;code&gt;stern -l application=scmcontacts&lt;/code&gt;, for example, you can stream the logs of all pods with the label &lt;em&gt;application=scmcontacts&lt;/em&gt; to your local shell…which then looks like this (each color represents another pod!):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fimage.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fimage.png%3Fw%3D1024"&gt;&lt;/a&gt;stern log streams&lt;/p&gt;

&lt;p&gt;To install stern, run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.11.0/stern&lt;span class="se"&gt;\_&lt;/span&gt;linux&lt;span class="se"&gt;\_&lt;/span&gt;amd64
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;755 /usr/local/bin/stern
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  One more thing
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Docker for Windows&lt;/strong&gt; has been around for a long time and is probably running on your machine right now. What some people may not know is that Docker for Windows integrates seamlessly with WSL2. If you are already running Docker on Windows, a simple invocation of the settings is enough to enable Docker / WSL2 integration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fdockerwsl.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fdockerwsl.png%3Fw%3D1024"&gt;&lt;/a&gt;Activate WSL2 based engine&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fdocker_wsl2.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F06%2Fdocker_wsl2.png%3Fw%3D1024"&gt;&lt;/a&gt;Choose WSL / distro integration&lt;/p&gt;

&lt;p&gt;If you want more details about the integration, please visit this page: &lt;a href="https://www.docker.com/blog/new-docker-desktop-wsl2-backend/" rel="noopener noreferrer"&gt;https://www.docker.com/blog/new-docker-desktop-wsl2-backend/&lt;/a&gt;. For this article, the fact that Docker now runs within WSL2 is sufficient :)&lt;/p&gt;

&lt;p&gt;Last but not least, one short note. Of course, &lt;strong&gt;Visual Studio Code&lt;/strong&gt; can also be integrated into WSL2. If you &lt;a href="https://code.visualstudio.com/download" rel="noopener noreferrer"&gt;install a current version&lt;/a&gt; of the editor in Windows, all components to run VS Code with WSL2 are included.&lt;/p&gt;

&lt;p&gt;A simple call of &lt;code&gt;code .&lt;/code&gt; in the respective directory with your source code is sufficient to install the Visual Studio Code Server (&lt;a href="https://github.com/cdr/code-server" rel="noopener noreferrer"&gt;https://github.com/cdr/code-server&lt;/a&gt;) in Ubuntu. This allows &lt;strong&gt;VS Code&lt;/strong&gt; to connect remotely to your distro and work with source code / frameworks that are located in WSL2.&lt;/p&gt;

&lt;p&gt;That’s all :)&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap-Up
&lt;/h2&gt;

&lt;p&gt;Pretty long blog post now, I know…but it contains all the tools that are necessary (take that with a “grain of salt” ;)) to make your Windows 10 machine a “wonderful experience” for you as a developer or architect in the “Cloud Native” space. You have a fully compatible Linux “system” which tightly integrates with Windows. You have .NET Core, Go, NodeJS, tools to work in the “Kubernetes universe”, the best code editor currently out there, git, ssh-agent etc.…and a beautiful terminal which makes working with it simply fun!&lt;/p&gt;

&lt;p&gt;For sure, there are things that I missed or simply don’t know at the moment. I would love to hear from you if I forgot “the one” tool to mention. Looking forward to reading your comments/suggestions!&lt;/p&gt;

&lt;p&gt;Hope this helps someone out there! Take care…&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@artplay?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Maxim Selyuk&lt;/a&gt; on &lt;a href="https://unsplash.com/@artplay?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>development</category>
      <category>windows10</category>
      <category>wsl2</category>
      <category>ohmyzsh</category>
    </item>
    <item>
      <title>Horizontal Autoscaling in Kubernetes #3 – KEDA</title>
      <dc:creator>Christian Dennig</dc:creator>
      <pubDate>Fri, 29 May 2020 08:23:07 +0000</pubDate>
      <link>https://forem.com/cdennig/horizontal-autoscaling-in-kubernetes-3-keda-24l6</link>
      <guid>https://forem.com/cdennig/horizontal-autoscaling-in-kubernetes-3-keda-24l6</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This is the last article in the series regarding “Horizontal Autoscaling” in Kubernetes. I began with an &lt;a href="https://dev.to/cdennig/horizontal-autoscaling-in-kubernetes-1-an-introduction-ab2"&gt;introduction to the topic&lt;/a&gt; and showed why autoscaling is important and how to get started with Kubernetes standard tools. In &lt;a href="https://partlycloudy.blog/2020/05/26/horizontal-autoscaling-in-kubernetes-2-custom-metrics" rel="noopener noreferrer"&gt;the second part&lt;/a&gt; I talked about how to use custom metrics in combination with the Horizontal Pod Autoscaler to be able to scale your deployments based on “non-standard” metrics (coming from e.g. Prometheus).&lt;/p&gt;

&lt;p&gt;This article now concludes the topic. I would like to show you how to use &lt;a href="https://keda.sh" rel="noopener noreferrer"&gt;KEDA&lt;/a&gt; (Kubernetes Event-Driven Autoscaler) for horizontal scaling and how this can simplify your life dramatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  KEDA
&lt;/h2&gt;

&lt;p&gt;KEDA, as the official documentation states, is a Kubernetes-based event-driven autoscaler. The project was originally initiated by Microsoft and has been developed under open source from the beginning. Since the first release, it has been widely adopted by the community and many other vendors such as Amazon, Google etc. have contributed source code / scalers to the project.&lt;/p&gt;

&lt;p&gt;It is safe to say that the project is “ready for production” and there is no need to fear that it will disappear in the coming years.&lt;/p&gt;

&lt;p&gt;KEDA, under the hood, works like many other projects in this area and – first and foremost – acts as a “Custom Metrics API” for exposing metrics to the Horizontal Pod Autoscaler. Additionally, there is a “KEDA agent” that is responsible for managing / activating deployments and scaling your workload between “0” and “n” replicas (scaling to zero pods is currently not possible “out-of-the-box” with the standard Horizontal Pod Autoscaler – but there is an alpha feature for it).&lt;/p&gt;

&lt;p&gt;So in a nutshell, KEDA takes away the (sometimes huge) pain of setting up such a solution. As seen in the last article, the usage of custom metrics can generate a lot of effort upfront. KEDA is much more “lightweight” and setting it up is just a piece of cake.&lt;/p&gt;

&lt;p&gt;This is how KEDA looks under the hood:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F05%2Fkeda.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F05%2Fkeda.png%3Fw%3D1024"&gt;&lt;/a&gt;Source: &lt;a href="https://keda.sh" rel="nofollow noopener noreferrer"&gt;&lt;/a&gt;&lt;a href="https://keda.sh" rel="noopener noreferrer"&gt;https://keda.sh&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;KEDA comes with its own custom resource definition, the &lt;em&gt;ScaledObject&lt;/em&gt; which takes care of managing the “connection” between the source of the metric, access to the metric, your deployment and how scaling should work for it (min/max range of pods, threshold etc.).&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;ScaledObject&lt;/em&gt; spec looks like that:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keda.k8s.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ScaledObject&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;scaled-object-name&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;scaleTargetRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;deploymentName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;deployment-name&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt; &lt;span class="c1"&gt;# must be in the same namespace as the ScaledObject&lt;/span&gt;
    &lt;span class="na"&gt;containerName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;container-name&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;  &lt;span class="c1"&gt;#Optional. Default: deployment.spec.template.spec.containers[0]&lt;/span&gt;
  &lt;span class="na"&gt;pollingInterval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;  &lt;span class="c1"&gt;# Optional. Default: 30 seconds&lt;/span&gt;
  &lt;span class="na"&gt;cooldownPeriod&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="m"&gt;300&lt;/span&gt; &lt;span class="c1"&gt;# Optional. Default: 300 seconds&lt;/span&gt;
  &lt;span class="na"&gt;minReplicaCount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;   &lt;span class="c1"&gt;# Optional. Default: 0&lt;/span&gt;
  &lt;span class="na"&gt;maxReplicaCount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100&lt;/span&gt; &lt;span class="c1"&gt;# Optional. Default: 100&lt;/span&gt;
  &lt;span class="na"&gt;triggers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# {list of triggers to activate the deployment}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The only thing missing now is the last component of KEDA: Scalers. Scalers are adapters that provide metrics from various sources. Among others, there are scalers for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kafka &lt;/li&gt;
&lt;li&gt;Prometheus &lt;/li&gt;
&lt;li&gt;AWS SQS Queue &lt;/li&gt;
&lt;li&gt;AWS Kinesis Stream &lt;/li&gt;
&lt;li&gt;GCP Pub/Sub &lt;/li&gt;
&lt;li&gt;Azure Blob Storage &lt;/li&gt;
&lt;li&gt;Azure EventHubs &lt;/li&gt;
&lt;li&gt;Azure ServiceBus &lt;/li&gt;
&lt;li&gt;Azure Monitor &lt;/li&gt;
&lt;li&gt;NATS &lt;/li&gt;
&lt;li&gt;MySQL &lt;/li&gt;
&lt;li&gt;etc. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A complete list of supported adapters can be found here: &lt;a href="https://keda.sh/docs/1.4/scalers/" rel="noopener noreferrer"&gt;https://keda.sh/docs/1.4/scalers/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As we will see later in the article, setting up such a scaler couldn’t be easier. So now, let’s get our hands dirty.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sample
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Install KEDA
&lt;/h3&gt;

&lt;p&gt;To install KEDA on a Kubernetes cluster, we use Helm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;helm repo add kedacore https://kedacore.github.io/charts
&lt;span class="nv"&gt;$ &lt;/span&gt;helm repo update
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create namespace keda
&lt;span class="nv"&gt;$ &lt;/span&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;keda kedacore/keda &lt;span class="nt"&gt;--namespace&lt;/span&gt; keda
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By the way, if you use Helm 3, you will probably see errors/warnings about a hook called “crd-install”. You can safely ignore them; see &lt;a href="https://github.com/kedacore/charts/issues/18" rel="noopener noreferrer"&gt;https://github.com/kedacore/charts/issues/18&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Sample Application
&lt;/h3&gt;

&lt;p&gt;To show you how KEDA works, I will simply reuse the sample application from the &lt;a href="https://partlycloudy.blog/2020/05/26/horizontal-autoscaling-in-kubernetes-2-custom-metrics" rel="noopener noreferrer"&gt;previous blog post&lt;/a&gt;. If you missed it, here’s a brief summary:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a simple NodeJS / Express application&lt;/li&gt;
&lt;li&gt;using Prometheus client to expose the &lt;em&gt;/metrics&lt;/em&gt; endpoint and to create/add a custom (gauge) metric called &lt;em&gt;custom_metric_counter_total_by_pod&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;the metric can be set from outside via &lt;em&gt;/api/count&lt;/em&gt; endpoint&lt;/li&gt;
&lt;li&gt;Kubernetes service is of type &lt;em&gt;LoadBalancer&lt;/em&gt; to receive a public IP for it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s the source code of the application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;os&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;os&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;apiMetrics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;prometheus-api-metrics&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;apiMetrics&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;prom-client&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Create Prometheus Gauge metric&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;gauge&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Gauge&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;custom_metric_counter_total_by_pod&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;help&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Custom metric: Count per Pod&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;labelNames&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;pod&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/api/count&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Set metric to count value...and label to "pod name" (hostname)&lt;/span&gt;
  &lt;span class="nx"&gt;gauge&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;pod&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;hostname&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Counter at: &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Server is running on port 4000&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="c1"&gt;// initialize gauge&lt;/span&gt;
  &lt;span class="nx"&gt;gauge&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;pod&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;hostname&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
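&lt;p&gt;What Prometheus ultimately scrapes from &lt;em&gt;/metrics&lt;/em&gt; is plain text in the Prometheus exposition format; &lt;em&gt;prom-client&lt;/em&gt; produces it for you, but it helps to see what the gauge line looks like. A rough sketch (the pod name below is made up for illustration):&lt;/p&gt;

```javascript
// Sketch: how a labeled gauge is rendered in the Prometheus text
// exposition format (prom-client does this for you in the app above).
function renderGauge(name, labels, value) {
  const parts = [];
  for (const key of Object.keys(labels)) {
    parts.push(key + '="' + labels[key] + '"');
  }
  return name + "{" + parts.join(",") + "} " + value;
}

// Hypothetical pod name, just for illustration
console.log(
  renderGauge("custom_metric_counter_total_by_pod", { pod: "promdemo-abc123" }, 1)
);
// custom_metric_counter_total_by_pod{pod="promdemo-abc123"} 1
```

&lt;p&gt;This is exactly the shape the Prometheus query in the &lt;em&gt;ScaledObject&lt;/em&gt; later aggregates over.&lt;/p&gt;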



&lt;p&gt;And here’s the deployment manifest for the application and the Kubernetes service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt; 
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promdemo&lt;/span&gt; 
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promtest&lt;/span&gt;
    &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RollingUpdate&lt;/span&gt;
    &lt;span class="na"&gt;rollingUpdate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;maxSurge&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;maxUnavailable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;minReadySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;revisionHistoryLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promtest&lt;/span&gt;
      &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promtest&lt;/span&gt;
        &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;automountServiceAccountToken&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application&lt;/span&gt;
          &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;64Mi"&lt;/span&gt;
              &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;100m"&lt;/span&gt;
            &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;256Mi"&lt;/span&gt;
              &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;csaocpger/expressmonitoring:4.3&lt;/span&gt;
          &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4000&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promdemo&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promtest&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4000&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4000&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promtest&lt;/span&gt;
    &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Collect Custom Metrics with Prometheus
&lt;/h3&gt;

&lt;p&gt;In order to have a “metrics collector” that we can use in combination with KEDA, we need to install Prometheus. Thanks to the &lt;em&gt;kube-prometheus&lt;/em&gt; project, this is straightforward. Go to &lt;a href="https://github.com/coreos/kube-prometheus" rel="noopener noreferrer"&gt;https://github.com/coreos/kube-prometheus&lt;/a&gt;, clone it to your local machine and install Prometheus, Grafana etc. via:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/setup
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the installation has finished, we also need to tell Prometheus to scrape the &lt;em&gt;/metrics&lt;/em&gt; endpoint of our application. Therefore, we create a &lt;em&gt;ServiceMonitor&lt;/em&gt;, a custom resource definition from the Prometheus Operator that points to “a source of metrics”. Apply the following Kubernetes manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceMonitor&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;monitoring.coreos.com/v1&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promtest&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promtest&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promtest&lt;/span&gt;
  &lt;span class="na"&gt;endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s check if everything is in place regarding the monitoring of the application. This can easily be done by having a look at the Prometheus targets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;--namespace&lt;/span&gt; monitoring port-forward svc/prometheus-k8s 9090
Forwarding from 127.0.0.1:9090 -&amp;gt; 9090
Forwarding from &lt;span class="o"&gt;[&lt;/span&gt;::1]:9090 -&amp;gt; 9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open your local browser at &lt;a href="http://localhost:9090/targets" rel="noopener noreferrer"&gt;http://localhost:9090/targets&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F05%2Fscrape-targets.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F05%2Fscrape-targets.png%3Fw%3D1024"&gt;&lt;/a&gt;Prometheus targets for our application&lt;/p&gt;

&lt;p&gt;This looks good. Now, we can also check Grafana. To do so, “port-forward” the Grafana service to your local machine as well (&lt;code&gt;kubectl --namespace monitoring port-forward svc/grafana 3000&lt;/code&gt;) and open the browser at &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt; (the definition of the dashboard you see here is available in the GitHub repo mentioned at the end of the blog post).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F05%2Fgrafana_before.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F05%2Fgrafana_before.png%3Fw%3D1024"&gt;&lt;/a&gt;Grafana&lt;/p&gt;

&lt;p&gt;As we can see here, the application reports a current value of “1” for the custom metric (&lt;em&gt;custom_metric_counter_total_by_pod&lt;/em&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  ScaledObject Definition
&lt;/h3&gt;

&lt;p&gt;So, the last missing piece to be able to scale the application with KEDA is the &lt;em&gt;ScaledObject&lt;/em&gt; definition. As mentioned before, this is the connection between the Kubernetes deployment, the metrics source/collector and the Kubernetes &lt;em&gt;HorizontalPodAutoscaler&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;For the current sample, the definition looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keda.k8s.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ScaledObject&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus-scaledobject&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;scaleTargetRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;deploymentName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promdemo&lt;/span&gt;
  &lt;span class="na"&gt;triggers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;serverAddress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://prometheus-k8s.monitoring.svc:9090&lt;/span&gt;
      &lt;span class="na"&gt;metricName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;custom_metric_counter_total_by_pod&lt;/span&gt;
      &lt;span class="na"&gt;threshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3'&lt;/span&gt;
      &lt;span class="na"&gt;query&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sum(custom_metric_counter_total_by_pod{namespace!="",pod!=""})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s walk through the spec part of the definition. We tell KEDA the target we want to scale, in this case the deployment of the application (&lt;em&gt;scaleTargetRef&lt;/em&gt;). In the &lt;em&gt;triggers&lt;/em&gt; section, we point KEDA to our Prometheus service and specify the metric name, the &lt;em&gt;query&lt;/em&gt; to issue to collect the current value of the metric and the &lt;em&gt;threshold&lt;/em&gt;, i.e. the target value for the HorizontalPodAutoscaler that KEDA will automatically create and manage.&lt;/p&gt;
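&lt;p&gt;The &lt;em&gt;threshold&lt;/em&gt; feeds into the standard HPA scaling algorithm, which roughly computes the desired replica count as &lt;em&gt;ceil(currentReplicas * currentMetricValue / targetValue)&lt;/em&gt;. A quick sketch of that math:&lt;/p&gt;

```javascript
// Rough sketch of the HPA scaling decision:
// desired = ceil(currentReplicas * currentMetricValue / targetValue)
function desiredReplicas(currentReplicas, currentMetricValue, targetValue) {
  return Math.ceil(currentReplicas * (currentMetricValue / targetValue));
}

// With the threshold of 3 from the ScaledObject above:
console.log(desiredReplicas(1, 1, 3)); // 1 - metric at 1, nothing to do
console.log(desiredReplicas(1, 7, 3)); // 3 - pushing the counter to 7 scales out to 3 pods
```

&lt;p&gt;Keep in mind this is a simplification; the real HPA controller also applies a tolerance and a stabilization window before acting.&lt;/p&gt;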

&lt;p&gt;And what does that HPA look like? Here is the version that results from applying the &lt;em&gt;ScaledObject&lt;/em&gt; definition above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;autoscaling/v1&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HorizontalPodAutoscaler&lt;/span&gt;
  &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;autoscaling.alpha.kubernetes.io/conditions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;[{"type":"AbleToScale","status":"True","lastTransitionTime":"2020-05-20T07:02:21Z","reason":"ReadyForNewScale","message":"recommended&lt;/span&gt;
        &lt;span class="s"&gt;size&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;matches&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;current&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;size"},{"type":"ScalingActive","status":"True","lastTransitionTime":"2020-05-20T07:02:21Z","reason":"ValidMetricFound","message":"the&lt;/span&gt;
        &lt;span class="s"&gt;HPA&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;was&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;able&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;successfully&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;calculate&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;a&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;replica&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;count&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;from&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;external&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;metric&lt;/span&gt;
        &lt;span class="s"&gt;custom_metric_counter_total_by_pod(\u0026LabelSelector{MatchLabels:map[string]string{deploymentName:&lt;/span&gt;
        &lt;span class="s"&gt;promdemo,},MatchExpressions:[]LabelSelectorRequirement{},})"},{"type":"ScalingLimited","status":"False","lastTransitionTime":"2020-05-20T07:02:21Z","reason":"DesiredWithinRange","message":"the&lt;/span&gt;
        &lt;span class="s"&gt;desired&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;count&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;is&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;within&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;the&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;acceptable&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;range"}]'&lt;/span&gt;
      &lt;span class="na"&gt;autoscaling.alpha.kubernetes.io/current-metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;[{"type":"External","external":{"metricName":"custom_metric_counter_total_by_pod","metricSelector":{"matchLabels":{"deploymentName":"promdemo"}},"currentValue":"1","currentAverageValue":"1"}}]'&lt;/span&gt;
      &lt;span class="na"&gt;autoscaling.alpha.kubernetes.io/metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;[{"type":"External","external":{"metricName":"custom_metric_counter_total_by_pod","metricSelector":{"matchLabels":{"deploymentName":"promdemo"}},"targetAverageValue":"3"}}]'&lt;/span&gt;
    &lt;span class="na"&gt;creationTimestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2020-05-20T07:02:05Z"&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app.kubernetes.io/managed-by&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keda-operator&lt;/span&gt;
      &lt;span class="na"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keda-hpa-promdemo&lt;/span&gt;
      &lt;span class="na"&gt;app.kubernetes.io/part-of&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus-scaledobject&lt;/span&gt;
      &lt;span class="na"&gt;app.kubernetes.io/version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.4.1&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keda-hpa-promdemo&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
    &lt;span class="na"&gt;ownerReferences&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keda.k8s.io/v1alpha1&lt;/span&gt;
      &lt;span class="na"&gt;blockOwnerDeletion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;controller&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ScaledObject&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus-scaledobject&lt;/span&gt;
      &lt;span class="na"&gt;uid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;3074e89f-3b3b-4c7e-a376-09ad03b5fcb3&lt;/span&gt;
    &lt;span class="na"&gt;resourceVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;821883"&lt;/span&gt;
    &lt;span class="na"&gt;selfLink&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/apis/autoscaling/v1/namespaces/default/horizontalpodautoscalers/keda-hpa-promdemo&lt;/span&gt;
    &lt;span class="na"&gt;uid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;382a20d0-8358-4042-8718-bf2bcf832a31&lt;/span&gt;
  &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;maxReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100&lt;/span&gt;
    &lt;span class="na"&gt;minReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;scaleTargetRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promdemo&lt;/span&gt;
  &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;currentReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;desiredReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;lastScaleTime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2020-05-25T10:26:38Z"&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;List&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;resourceVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
  &lt;span class="na"&gt;selfLink&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the &lt;em&gt;ScaledObject&lt;/em&gt; properties have been added as labels to the HPA, and the HPA was already able to fetch the metric value from Prometheus.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scale Application
&lt;/h3&gt;

&lt;p&gt;Now, let’s see KEDA in action…&lt;/p&gt;

&lt;p&gt;The application exposes an endpoint (&lt;em&gt;/api/count&lt;/em&gt;) with which we can set the value of the metric in Prometheus (as a reminder: the value of &lt;em&gt;custom_metric_counter_total_by_pod&lt;/em&gt; is currently set to “1”).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F05%2Fkeda-before.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F05%2Fkeda-before.png%3Fw%3D1024"&gt;&lt;/a&gt;Custom metric before adjusting the value&lt;/p&gt;

&lt;p&gt;Now let’s set the value to &lt;strong&gt;“7”&lt;/strong&gt;. With the current threshold of &lt;strong&gt;“3”&lt;/strong&gt;, KEDA should scale the deployment up to at least three pods.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="nt"&gt;--request&lt;/span&gt; POST &lt;span class="s1"&gt;'http://&amp;lt;EXTERNAL_IP_OF_SERVICE&amp;gt;:4000/api/count'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data-raw&lt;/span&gt; &lt;span class="s1"&gt;'{
    "count": 7
}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s look at the events within Kubernetes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;k get events
LAST SEEN   TYPE     REASON              OBJECT                                      MESSAGE
2m25s       Normal   SuccessfulRescale   horizontalpodautoscaler/keda-hpa-promdemo   New size: 3&lt;span class="p"&gt;;&lt;/span&gt; reason: external metric custom_metric_counter_total_by_pod&lt;span class="o"&gt;(&lt;/span&gt;&amp;amp;LabelSelector&lt;span class="o"&gt;{&lt;/span&gt;MatchLabels:map[string]string&lt;span class="o"&gt;{&lt;/span&gt;deploymentName: promdemo,&lt;span class="o"&gt;}&lt;/span&gt;,MatchExpressions:[]LabelSelectorRequirement&lt;span class="o"&gt;{}&lt;/span&gt;,&lt;span class="o"&gt;})&lt;/span&gt; above target
&amp;lt;unknown&amp;gt;   Normal   Scheduled           pod/promdemo-56946cb44-bn75c                Successfully assigned default/promdemo-56946cb44-bn75c to aks-nodepool1-12224416-vmss000001
2m24s       Normal   Pulled              pod/promdemo-56946cb44-bn75c                Container image &lt;span class="s2"&gt;"csaocpger/expressmonitoring:4.3"&lt;/span&gt; already present on machine
2m24s       Normal   Created             pod/promdemo-56946cb44-bn75c                Created container application
2m24s       Normal   Started             pod/promdemo-56946cb44-bn75c                Started container application
&amp;lt;unknown&amp;gt;   Normal   Scheduled           pod/promdemo-56946cb44-mztrk                Successfully assigned default/promdemo-56946cb44-mztrk to aks-nodepool1-12224416-vmss000002
2m24s       Normal   Pulled              pod/promdemo-56946cb44-mztrk                Container image &lt;span class="s2"&gt;"csaocpger/expressmonitoring:4.3"&lt;/span&gt; already present on machine
2m24s       Normal   Created             pod/promdemo-56946cb44-mztrk                Created container application
2m24s       Normal   Started             pod/promdemo-56946cb44-mztrk                Started container application
2m25s       Normal   SuccessfulCreate    replicaset/promdemo-56946cb44               Created pod: promdemo-56946cb44-bn75c
2m25s       Normal   SuccessfulCreate    replicaset/promdemo-56946cb44               Created pod: promdemo-56946cb44-mztrk
2m25s       Normal   ScalingReplicaSet   deployment/promdemo                         Scaled up replica &lt;span class="nb"&gt;set &lt;/span&gt;promdemo-56946cb44 to 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And this is what the Grafana dashboard looks like…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F05%2Fkeda-after.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F05%2Fkeda-after.png%3Fw%3D1024"&gt;&lt;/a&gt;Grafana dashboard after setting a new value for the metric&lt;/p&gt;

&lt;p&gt;As you can see here, KEDA is incredibly fast at querying the metric and triggering the corresponding scaling process (in combination with the HPA) for the target deployment. The deployment has been scaled up to three pods based on a custom metric – exactly what we wanted to achieve :)&lt;/p&gt;
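&lt;p&gt;The replica count the HPA arrives at follows the standard HPA calculation: roughly &lt;em&gt;desiredReplicas = ceil(currentReplicas × currentMetricValue / targetValue)&lt;/em&gt;. Here is a simplified sketch of that formula (the real controller additionally applies a tolerance band and stabilization windows, which are left out here):&lt;/p&gt;

```javascript
// Simplified sketch of the HPA scaling formula.
// Not the actual controller code - tolerances and stabilization
// windows are intentionally omitted.
function desiredReplicas(currentReplicas, metricValue, targetValue) {
  return Math.ceil(currentReplicas * (metricValue / targetValue));
}

// Metric value "7" with a target of "3" and one running pod -> 3 replicas,
// which matches the SuccessfulRescale event above.
console.log(desiredReplicas(1, 7, 3)); // 3
```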

&lt;h2&gt;
  
  
  Wrap-Up
&lt;/h2&gt;

&lt;p&gt;So, this was the last article about horizontal scaling in Kubernetes. In my opinion, the community has provided a powerful and simple tool with KEDA to trigger scaling processes based on external sources / custom metrics. If you compare this to the sample in the previous article, KEDA is way easier to set up! By the way, it works perfectly in combination with &lt;strong&gt;Azure Functions&lt;/strong&gt;, which can be run in a Kubernetes cluster based on Docker images Microsoft provides. The icing on the cake is to offload such workloads to &lt;strong&gt;virtual nodes&lt;/strong&gt;, so that the actual cluster is not put under additional load. But that is a topic for a possible future article :)&lt;/p&gt;

&lt;p&gt;I hope I was able to shed some light on how horizontal scaling works and how to automatically scale your own deployments based on metrics coming from external / custom sources. If you think something is missing, give me a short hint :)&lt;/p&gt;

&lt;p&gt;As always, you can find the source code for this article on GitHub: &lt;a href="https://github.com/cdennig/k8s-keda" rel="noopener noreferrer"&gt;https://github.com/cdennig/k8s-keda&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So long…&lt;/p&gt;

&lt;p&gt;Articles in this series:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Part 1: &lt;a href="https://dev.to/cdennig/horizontal-autoscaling-in-kubernetes-1-an-introduction-ab2"&gt;https://partlycloudy.blog/2020/05/26/horizontal-autoscaling-in-kubernetes-1-an-introduction/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Part 2: &lt;a href="https://dev.to/cdennig/horizontal-autoscaling-in-kubernetes-2-custom-metrics-549k"&gt;https://partlycloudy.blog/2020/05/27/horizontal-autoscaling-in-kubernetes-2-custom-metrics/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Part 3: &lt;a href="https://partlycloudy.blog/?p=2417" rel="noopener noreferrer"&gt;https://partlycloudy.blog/2020/05/29/horizontal-autoscaling-in-kubernetes-3—keda/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>keda</category>
      <category>kubernetes</category>
      <category>autoscaling</category>
      <category>prometheus</category>
    </item>
    <item>
      <title>Horizontal Autoscaling in Kubernetes #2 – Custom Metrics</title>
      <dc:creator>Christian Dennig</dc:creator>
      <pubDate>Wed, 27 May 2020 09:15:18 +0000</pubDate>
      <link>https://forem.com/cdennig/horizontal-autoscaling-in-kubernetes-2-custom-metrics-549k</link>
      <guid>https://forem.com/cdennig/horizontal-autoscaling-in-kubernetes-2-custom-metrics-549k</guid>
<description>&lt;p&gt;In &lt;a href="https://partlycloudy.blog/2020/05/26/horizontal-autoscaling-in-kubernetes-1---an-introduction/"&gt;my previous article&lt;/a&gt;, I talked about the “why” and “how” of horizontal scaling in Kubernetes and gave an insight into how to use the &lt;em&gt;Horizontal Pod Autoscaler&lt;/em&gt; based on CPU metrics. As mentioned there, CPU is not always the best choice when it comes to deciding whether an application/service should scale in or out. Therefore, Kubernetes (with the concept of the Metrics Registry and the Custom or External Metrics API) offers the possibility to also scale based on your own custom metrics.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this post, I will show you how to scale a deployment (a NodeJS / Express app) based on a custom metric collected by Prometheus. I chose Prometheus because it is one of the most popular monitoring tools in the Kubernetes space… but it can easily be replaced with any other monitoring solution out there – as long as there is a corresponding “adapter” (custom metrics API) for it. But more on that later.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;To be able to work at all with custom metrics as a basis for scaling services, various requirements must be met. Of course, you need an application that provides the appropriate metrics &lt;strong&gt;(the metrics source)&lt;/strong&gt;. In addition, you need a service – in our case Prometheus – that is able to query the metrics from the application at certain intervals &lt;strong&gt;(the metrics collector)&lt;/strong&gt;. Once you have set up these things, you have fulfilled the basic requirements from the application’s point of view. These components are probably already present in every larger solution.&lt;/p&gt;

&lt;p&gt;Now, to make the desired metric available to the Horizontal Pod Autoscaler, it must first be added to a &lt;strong&gt;Metrics Registry&lt;/strong&gt;. And last but not least, a &lt;strong&gt;custom metrics API&lt;/strong&gt; must provide access to the desired metric for the Horizontal Pod Autoscaler.&lt;/p&gt;

&lt;p&gt;To give you the complete view of what’s possible…here’s the list of available metric types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Resource metrics (predefined metrics like CPU)&lt;/li&gt;
&lt;li&gt;Custom metrics (associated with a Kubernetes object)&lt;/li&gt;
&lt;li&gt;External metrics (coming from external sources like e.g. RabbitMQ, Azure Service Bus etc.)&lt;/li&gt;
&lt;/ul&gt;
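&lt;p&gt;In an HPA v2 manifest (&lt;em&gt;autoscaling/v2beta2&lt;/em&gt; at the time of writing), these types map directly to entries under &lt;em&gt;spec.metrics&lt;/em&gt;. A hedged sketch for illustration only – the metric names here are made up:&lt;/p&gt;

```yaml
# Illustrative only: one spec.metrics entry per metric type
metrics:
- type: Resource                 # predefined metric, e.g. CPU
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 50
- type: Pods                     # custom metric, associated with pods
  pods:
    metric:
      name: custom_metric_counter_total_by_pod
    target:
      type: AverageValue
      averageValue: "3"
- type: External                 # metric from an external source
  external:
    metric:
      name: queue_length         # hypothetical external metric
    target:
      type: AverageValue
      averageValue: "10"
```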

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--n5dZnLZO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/metrics_pipeline.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--n5dZnLZO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/metrics_pipeline.png%3Fw%3D1024" alt="Metrics Pipeline (with Prometheus as metrics collector)"&gt;&lt;/a&gt;Metrics Pipeline (with Prometheus as metrics collector)&lt;/p&gt;

&lt;h2&gt;
  
  
  Sample
&lt;/h2&gt;

&lt;p&gt;In order to show you a working sample of how to use a custom metric for scaling, we need to have a few things in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An application (deployment) that exposes a custom metric&lt;/li&gt;
&lt;li&gt;Prometheus (incl. Grafana to have some nice charts) and a Prometheus Service Monitor to scrape the metrics endpoint of the application&lt;/li&gt;
&lt;li&gt;Prometheus Adapter which is able to provide a Prometheus metric for the custom metrics API&lt;/li&gt;
&lt;li&gt;Horizontal Pod Autoscaler definition that references the custom metric&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prometheus
&lt;/h3&gt;

&lt;p&gt;To install Prometheus, I chose “kube-prometheus” (&lt;a href="https://github.com/coreos/kube-prometheus"&gt;https://github.com/coreos/kube-prometheus&lt;/a&gt;) which installs Prometheus as well as Grafana (and Alertmanager etc.) and is super easy to use! So first, clone the project to your local machine and deploy it to your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# from within the cloned repo...&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/setup
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Wait a few seconds until everything is installed and then check access to the Prometheus Web UI and Grafana (by port-forwarding the services to your local machine):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;--namespace&lt;/span&gt; monitoring port-forward svc/prometheus-k8s 9090
Forwarding from 127.0.0.1:9090 -&amp;gt; 9090
Forwarding from &lt;span class="o"&gt;[&lt;/span&gt;::1]:9090 -&amp;gt; 9090
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_kyPSYEW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/prometheus.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_kyPSYEW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/prometheus.png%3Fw%3D1024" alt="Prometheus / Web UI"&gt;&lt;/a&gt;Prometheus / Web UI&lt;/p&gt;

&lt;p&gt;Check the same for Grafana (if you are prompted for a username and password, the default is “admin”/“admin” – you need to change that on the first login):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;--namespace&lt;/span&gt; monitoring port-forward svc/grafana 3000
Forwarding from 127.0.0.1:3000 -&amp;gt; 3000
Forwarding from &lt;span class="o"&gt;[&lt;/span&gt;::1]:3000 -&amp;gt; 3000
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VydM8avh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/grafana.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VydM8avh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/grafana.png%3Fw%3D1024" alt="Grafana"&gt;&lt;/a&gt;Grafana&lt;/p&gt;

&lt;p&gt;In terms of “monitoring infrastructure”, we are good to go. Let’s add the sample application that exposes the custom metric.&lt;/p&gt;

&lt;h3&gt;
  
  
  Metrics Source / Sample Application
&lt;/h3&gt;

&lt;p&gt;To demonstrate how to work with custom metrics, I wrote a &lt;strong&gt;very&lt;/strong&gt; simple NodeJS application that provides a single endpoint (from the application’s perspective). When requests are sent to this endpoint, a counter (a Prometheus Gauge – more on the other options later) is set to the value provided in the body of the request. The application itself uses the Express framework plus an additional library (prom-client) that allows the app to “interact” with Prometheus – providing metrics via a &lt;em&gt;/metrics&lt;/em&gt; endpoint.&lt;/p&gt;

&lt;p&gt;This is what it looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;os&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;os&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;apiMetrics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;prometheus-api-metrics&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;apiMetrics&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;json&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;prom-client&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Create Prometheus Gauge metric&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;gauge&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Gauge&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;custom_metric_counter_total_by_pod&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;help&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Custom metric: Count per Pod&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;labelNames&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;pod&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/api/count&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Set metric to count value...and label to "pod name" (hostname)&lt;/span&gt;
  &lt;span class="nx"&gt;gauge&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;set&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;pod&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;hostname&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Counter at: &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Server is running on port 4000&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="c1"&gt;// initialize gauge&lt;/span&gt;
  &lt;span class="nx"&gt;gauge&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;set&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;pod&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;hostname&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As you can see in the source code, a “Gauge” metric is created – one of the metric types Prometheus supports. Here’s the list of the available types (descriptions from the official documentation):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Counter&lt;/strong&gt; – a cumulative metric that represents a single monotonically increasing counter whose value can only increase or be reset to zero on restart&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gauge&lt;/strong&gt; – a gauge is a metric that represents a single numerical value that can arbitrarily go up and down&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Histogram&lt;/strong&gt; – a histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. It also provides a sum of all observed values.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Summary&lt;/strong&gt; – Similar to a &lt;em&gt;histogram&lt;/em&gt;, a &lt;em&gt;summary&lt;/em&gt; samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window.&lt;/li&gt;
&lt;/ul&gt;
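&lt;p&gt;The practical difference between the first two types can be illustrated with a tiny sketch. Note: this is plain JavaScript mimicking the semantics described above, not the actual prom-client API:&lt;/p&gt;

```javascript
// Sketch of metric semantics only - not the prom-client API.
class Counter {
  constructor() { this.value = 0; }
  inc(v = 1) {
    // a Counter is monotonically increasing
    if (v < 0) throw new Error("a Counter can only go up");
    this.value += v;
  }
}

class Gauge {
  constructor() { this.value = 0; }
  set(v) { this.value = v; } // may go up or down arbitrarily
}

const requestsTotal = new Counter();
requestsTotal.inc();
requestsTotal.inc(4);   // requestsTotal.value === 5

const currentCount = new Gauge();
currentCount.set(7);
currentCount.set(3);    // currentCount.value === 3 - going down is fine
```

&lt;p&gt;This is also why the sample app uses a Gauge: the count set via &lt;em&gt;/api/count&lt;/em&gt; must be able to decrease again.&lt;/p&gt;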

&lt;p&gt;To learn more about the metrics types and when to use what, see the &lt;a href="https://prometheus.io/docs/concepts/metric_types/"&gt;official Prometheus documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s deploy the application (plus a service for it) with the following YAML manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt; 
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promdemo&lt;/span&gt; 
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promtest&lt;/span&gt;
    &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RollingUpdate&lt;/span&gt;
    &lt;span class="na"&gt;rollingUpdate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;maxSurge&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;maxUnavailable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;minReadySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;revisionHistoryLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promtest&lt;/span&gt;
      &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promtest&lt;/span&gt;
        &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;automountServiceAccountToken&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application&lt;/span&gt;
          &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;64Mi"&lt;/span&gt;
              &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;100m"&lt;/span&gt;
            &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;256Mi"&lt;/span&gt;
              &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;csaocpger/expressmonitoring:4.3&lt;/span&gt;
          &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4000&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promdemo&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promtest&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4000&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4000&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promtest&lt;/span&gt;
    &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now that we have Prometheus installed and an application that exposes a custom metric, we also need to tell Prometheus to scrape the &lt;em&gt;/metrics&lt;/em&gt; endpoint (by the way, this endpoint is created automatically by one of the libraries used in the app). Therefore, we need to create a &lt;em&gt;ServiceMonitor&lt;/em&gt;, a custom resource defined by the Prometheus Operator that points Prometheus to “a source of metrics”.&lt;/p&gt;

&lt;p&gt;This is what the &lt;em&gt;ServiceMonitor&lt;/em&gt; looks like for the current sample:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceMonitor&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;monitoring.coreos.com/v1&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promtest&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promtest&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;application&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promtest&lt;/span&gt;
  &lt;span class="na"&gt;endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;What this basically does is tell Prometheus to look for services carrying the label “application: promtest” and scrape their metrics via the (default) endpoint &lt;em&gt;/metrics&lt;/em&gt; on the &lt;em&gt;http&lt;/em&gt; port (which is set to port 4000 in the Kubernetes service).&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;/metrics&lt;/em&gt; endpoint reports values like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="nt"&gt;--request&lt;/span&gt; GET &lt;span class="s1"&gt;'http://&amp;lt;EXTERNAL_IP_OF_SERVICE&amp;gt;:4000/metrics'&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;custom_metric
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 26337  100 26337    0     0   325k      0 &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- &lt;span class="nt"&gt;--&lt;/span&gt;:--:--  329k
&lt;span class="c"&gt;# HELP custom_metric_counter_total_by_pod Custom metric: Count per Pod&lt;/span&gt;
&lt;span class="c"&gt;# TYPE custom_metric_counter_total_by_pod gauge&lt;/span&gt;
custom_metric_counter_total_by_pod&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;pod&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"promdemo-56946cb44-d5846"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When the &lt;em&gt;ServiceMonitor&lt;/em&gt; has been applied, Prometheus will be able to discover the pods/endpoints behind the service and pull the corresponding metrics. In the web UI, you should be able to see the following “scrape target”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1CXlkBG7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/prom-targets.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1CXlkBG7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/prom-targets.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Prometheus Targets&lt;/p&gt;

&lt;p&gt;Time to test the environment! First, let’s see what the current metric looks like. Open Grafana and have a look at the custom dashboard (you can find the JSON for the dashboard in the GitHub repo mentioned at the end of the post). As you can see, we have one pod running in the cluster, reporting a value of “1” at the moment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oi9-4f8v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/grafana_metric_before.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oi9-4f8v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/grafana_metric_before.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Grafana Showing the Custom Metric&lt;/p&gt;

&lt;p&gt;If everything is set up correctly, we should be able to call our service on &lt;em&gt;/api/count&lt;/em&gt; and set the custom metric via a POST request with a JSON document like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"count"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So, let’s try this out…&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="nt"&gt;--request&lt;/span&gt; POST &lt;span class="s1"&gt;'http://&amp;lt;EXTERNAL_IP_OF_SERVICE&amp;gt;:4000/api/count'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data-raw&lt;/span&gt; &lt;span class="s1"&gt;'{
    "count": 7
}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---Ni9KaAX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/grafana_metric_after.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---Ni9KaAX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/grafana_metric_after.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Grafana Dashboard – After setting the counter to a new value&lt;/p&gt;

&lt;p&gt;So, this works as expected. After setting the value to “7” via a POST request, Prometheus picks up the updated value the next time it scrapes the metrics endpoint, and Grafana shows the updated chart. To be able to run the full example on a “clean environment” later, we set the counter back to “1”.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="nt"&gt;--request&lt;/span&gt; POST &lt;span class="s1"&gt;'http://&amp;lt;EXTERNAL_IP_OF_SERVICE&amp;gt;:4000/api/count'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data-raw&lt;/span&gt; &lt;span class="s1"&gt;'{
    "count": 1
}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;From a monitoring point of view, we have finished all the necessary work and can now add the Prometheus adapter, which “connects” the Prometheus custom metric with the Kubernetes Horizontal Pod Autoscaler.&lt;/p&gt;

&lt;p&gt;Let’s do that now.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prometheus Adapter
&lt;/h3&gt;

&lt;p&gt;So, now the adapter must be installed. I have decided to use the following implementation for this sample: &lt;a href="https://github.com/DirectXMan12/k8s-prometheus-adapter"&gt;https://github.com/DirectXMan12/k8s-prometheus-adapter&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The installation works quite smoothly, but you have to adapt a few small things first.&lt;/p&gt;

&lt;p&gt;First you should clone the repository and change the &lt;strong&gt;URL for the Prometheus server&lt;/strong&gt; in the file &lt;em&gt;k8s-prometheus-adapter/deploy/manifests/custom-metrics-apiserver-deployment.yaml&lt;/em&gt;. In the current case, this is &lt;a href="http://prometheus-k8s.monitoring.svc:9090/"&gt;&lt;em&gt;http://prometheus-k8s.monitoring.svc:9090/&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Furthermore, an additional rule for our custom metric must be defined in the ConfigMap, which defines the rules for mapping from Prometheus metrics to the Metrics API schema (&lt;em&gt;k8s-prometheus-adapter/deploy/manifests/custom-metrics-config-map.yaml&lt;/em&gt;). This rule looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;seriesQuery&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;custom_metric_counter_total_by_pod{namespace!="",pod!=""}'&lt;/span&gt;
      &lt;span class="na"&gt;seriesFilters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
      &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;overrides&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;namespace&lt;/span&gt;
          &lt;span class="na"&gt;pod&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pod&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;matches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;^(.*)_total_by_pod"&lt;/span&gt;
        &lt;span class="na"&gt;as&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${1}"&lt;/span&gt;
      &lt;span class="na"&gt;metricsQuery&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sum(&amp;lt;&amp;lt;.Series&amp;gt;&amp;gt;{&amp;lt;&amp;lt;.LabelMatchers&amp;gt;&amp;gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you want to find out more about how the mapping works, please have a look at the &lt;a href="https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/config-walkthrough.md"&gt;official documentation&lt;/a&gt;. In our case, only the query for &lt;em&gt;custom_metric_counter_total_by_pod&lt;/em&gt; is executed and the results are mapped to the metrics schema as total/sum values.&lt;/p&gt;
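&lt;p&gt;To illustrate what the adapter does with this rule: when the Custom Metrics API is queried, the &lt;em&gt;metricsQuery&lt;/em&gt; template is expanded into a PromQL query roughly like the following (the label matchers are filled in by the adapter at request time, so the actual pod name will differ):&lt;/p&gt;

```
sum(custom_metric_counter_total_by_pod{namespace="default",pod="promdemo-56946cb44-d5846"})
```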

&lt;p&gt;To enable the adapter to function as a custom metrics API in the cluster, an SSL certificate must be created that can be used by the Prometheus adapter. All traffic between the Kubernetes control plane components must be secured by SSL. This certificate must be added upfront as a secret in the cluster, so that the custom metrics server can automatically mount it as a volume during deployment.&lt;/p&gt;

&lt;p&gt;In the corresponding GitHub repository for this article, you can find a &lt;em&gt;gencerts&lt;/em&gt;.&lt;em&gt;sh&lt;/em&gt; script that does all the heavy lifting for you. Running it produces a file called &lt;em&gt;cm-adapter-serving-certs.yaml&lt;/em&gt; containing the certificate. Please add that secret to the cluster before installing the adapter. The whole process looks like this (in the folder of the git clone):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;./gencerts.sh
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create namespace custom-metrics
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; cm-adapter-serving-certs.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; custom-metrics
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As soon as the installation is completed, open a terminal and query the Custom Metrics API for our metric &lt;em&gt;custom_metric_counter_total_by_pod&lt;/em&gt; via kubectl. If everything is set up correctly, we should get a result from the metrics server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get &lt;span class="nt"&gt;--raw&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/custom_metric_counter_total_by_pod?pod=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-o&lt;/span&gt; name&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq

&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"kind"&lt;/span&gt;: &lt;span class="s2"&gt;"MetricValueList"&lt;/span&gt;,
  &lt;span class="s2"&gt;"apiVersion"&lt;/span&gt;: &lt;span class="s2"&gt;"custom.metrics.k8s.io/v1beta1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"metadata"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"selfLink"&lt;/span&gt;: &lt;span class="s2"&gt;"/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/custom_metric_counter_total_by_pod"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;,
  &lt;span class="s2"&gt;"items"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
    &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"describedObject"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"kind"&lt;/span&gt;: &lt;span class="s2"&gt;"Pod"&lt;/span&gt;,
        &lt;span class="s2"&gt;"namespace"&lt;/span&gt;: &lt;span class="s2"&gt;"default"&lt;/span&gt;,
        &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"promdemo-56946cb44-d5846"&lt;/span&gt;,
        &lt;span class="s2"&gt;"apiVersion"&lt;/span&gt;: &lt;span class="s2"&gt;"/v1"&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;,
      &lt;span class="s2"&gt;"metricName"&lt;/span&gt;: &lt;span class="s2"&gt;"custom_metric_counter_total_by_pod"&lt;/span&gt;,
      &lt;span class="s2"&gt;"timestamp"&lt;/span&gt;: &lt;span class="s2"&gt;"2020-05-26T12:22:15Z"&lt;/span&gt;,
      &lt;span class="s2"&gt;"value"&lt;/span&gt;: &lt;span class="s2"&gt;"1"&lt;/span&gt;,
      &lt;span class="s2"&gt;"selector"&lt;/span&gt;: null
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Here we go! The Custom Metrics API returns a result for our custom metric – stating that the current value is “1”.&lt;/p&gt;

&lt;p&gt;I have to admit it was a lot of work, but we are now finished with the infrastructure and can test our application in combination with the Horizontal Pod Autoscaler and our custom metric.&lt;/p&gt;

&lt;h3&gt;
  
  
  Horizontal Pod Autoscaler
&lt;/h3&gt;

&lt;p&gt;Now we need to deploy a Horizontal Pod Autoscaler that targets our app deployment and references the custom metric &lt;em&gt;custom_metric_counter_total_by_pod&lt;/em&gt; as a metrics source. The manifest file looks like this; apply it to the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HorizontalPodAutoscaler&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;autoscaling/v2beta1&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prometheus-demo-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;scaleTargetRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;promdemo&lt;/span&gt;
  &lt;span class="na"&gt;minReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;maxReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
  &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pods&lt;/span&gt;
    &lt;span class="na"&gt;pods&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;metricName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;custom_metric_counter_total_by_pod&lt;/span&gt;
      &lt;span class="na"&gt;targetAverageValue&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now that we have applied the HPA manifest, we can make a new request against our API to increase the value of the counter back to “7”. Please keep in mind that the target value for the HPA is “3”. This means that the Horizontal Pod Autoscaler should scale our deployment to a total of three pods after a short amount of time. Let’s see what happens:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="nt"&gt;--request&lt;/span&gt; POST &lt;span class="s1"&gt;'http://&amp;lt;EXTERNAL_IP_OF_SERVICE&amp;gt;:4000/api/count'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data-raw&lt;/span&gt; &lt;span class="s1"&gt;'{
    "count": 7
}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;What does the Grafana dashboard look like after that request?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ErtwJaWd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/grafana_scale1.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ErtwJaWd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/grafana_scale1.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Grafana after setting the value of the counter to “7”&lt;/p&gt;

&lt;p&gt;Also, the Custom Metrics API reports a new value:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get &lt;span class="nt"&gt;--raw&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/custom_metric_counter_total_by_pod?pod=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get po &lt;span class="nt"&gt;-o&lt;/span&gt; name&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq

&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"kind"&lt;/span&gt;: &lt;span class="s2"&gt;"MetricValueList"&lt;/span&gt;,
  &lt;span class="s2"&gt;"apiVersion"&lt;/span&gt;: &lt;span class="s2"&gt;"custom.metrics.k8s.io/v1beta1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"metadata"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"selfLink"&lt;/span&gt;: &lt;span class="s2"&gt;"/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/custom_metric_counter_total_by_pod"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;,
  &lt;span class="s2"&gt;"items"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
    &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"describedObject"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"kind"&lt;/span&gt;: &lt;span class="s2"&gt;"Pod"&lt;/span&gt;,
        &lt;span class="s2"&gt;"namespace"&lt;/span&gt;: &lt;span class="s2"&gt;"default"&lt;/span&gt;,
        &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"promdemo-56946cb44-d5846"&lt;/span&gt;,
        &lt;span class="s2"&gt;"apiVersion"&lt;/span&gt;: &lt;span class="s2"&gt;"/v1"&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;,
      &lt;span class="s2"&gt;"metricName"&lt;/span&gt;: &lt;span class="s2"&gt;"custom_metric_counter_total_by_pod"&lt;/span&gt;,
      &lt;span class="s2"&gt;"timestamp"&lt;/span&gt;: &lt;span class="s2"&gt;"2020-05-26T12:38:58Z"&lt;/span&gt;,
      &lt;span class="s2"&gt;"value"&lt;/span&gt;: &lt;span class="s2"&gt;"7"&lt;/span&gt;,
      &lt;span class="s2"&gt;"selector"&lt;/span&gt;: null
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And last but not least, the Horizontal Pod Autoscaler does its job and scales the deployment to three pods (with one pod reporting a value of “7” against a target average of “3”, the desired replica count is ceil(7 / 3) = 3)! Hooray…&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get events
LAST SEEN   TYPE     REASON              OBJECT                                        MESSAGE

118s        Normal   ScalingReplicaSet   deployment/promdemo                           Scaled up replica &lt;span class="nb"&gt;set &lt;/span&gt;promdemo-56946cb44 to 3
118s        Normal   SuccessfulRescale   horizontalpodautoscaler/prometheus-demo-app   New size: 3&lt;span class="p"&gt;;&lt;/span&gt; reason: pods metric custom_metric_counter_total_by_pod above target
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0kFq09MV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/grafana_scale2.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0kFq09MV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/grafana_scale2.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Grafana showing three pods now!&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap-Up
&lt;/h2&gt;

&lt;p&gt;So what did we see in this example? We first installed &lt;strong&gt;Prometheus&lt;/strong&gt; with the appropriate addons like &lt;strong&gt;Grafana&lt;/strong&gt; and the Alertmanager (which we did not use in this example…). Then we added a &lt;strong&gt;custom metric&lt;/strong&gt; to an application and used Prometheus &lt;strong&gt;scraping&lt;/strong&gt; to retrieve it, so that it was available for evaluation within Prometheus. The next step was to install the &lt;strong&gt;Prometheus adapter&lt;/strong&gt;, which admittedly was a bit more complicated than expected. Finally, we created a Horizontal Pod Autoscaler in the cluster that used the custom metric to scale the pods of our deployment.&lt;/p&gt;

&lt;p&gt;All in all, it was quite an effort to scale a Kubernetes deployment based on custom metrics. In the next article, I will therefore talk about &lt;a href="https://keda.sh"&gt;KEDA&lt;/a&gt; (Kubernetes Event-Driven Autoscaling), which will make our lives – also regarding this example – much easier.&lt;/p&gt;

&lt;p&gt;You can find all the Kubernetes manifest files, Grafana dashboard configs and the script to generate the SSL certificate in this GitHub repo: &lt;a href="https://github.com/cdennig/k8s-custom-metrics"&gt;https://github.com/cdennig/k8s-custom-metrics&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>autoscale</category>
      <category>kubernetes</category>
      <category>prometheus</category>
      <category>custommetrics</category>
    </item>
    <item>
      <title>Horizontal Autoscaling in Kubernetes #1 – An Introduction</title>
      <dc:creator>Christian Dennig</dc:creator>
      <pubDate>Tue, 26 May 2020 13:11:13 +0000</pubDate>
      <link>https://forem.com/cdennig/horizontal-autoscaling-in-kubernetes-1-an-introduction-ab2</link>
      <guid>https://forem.com/cdennig/horizontal-autoscaling-in-kubernetes-1-an-introduction-ab2</guid>
      <description>&lt;p&gt;Kubernetes is taking the world of software development by storm and every company in the world feels tempted to develop their software on the platform or migrate existing solutions to it. There are a few basic principles to be considered when the application is finally operated productively in Kubernetes.&lt;/p&gt;

&lt;p&gt;One of them, is the implementation of a clean autoscaling. Kubernetes offers a number of options, especially when it comes to horizontally scaling your workloads running in the cluster.&lt;/p&gt;

&lt;p&gt;I will discuss the different options in a short series of blog posts. This article is about the introduction to the topic and the presentation of the “out-of-the-box” options of Kubernetes. In the second article I will discuss the possibilities of scaling deployments using custom metrics and how this can work in combination with popular monitoring tools like Prometheus. The last article then will be about the use of KEDA (Kubernetes Event-Driven Autoscaling) and looks at the field of “Event Driven” scaling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why?
&lt;/h2&gt;

&lt;p&gt;A good implementation of horizontal scaling is extremely important for applications running in Kubernetes. Load peaks can basically occur at any time of day, especially if the application is offered worldwide. Ideally, you have implemented measures that automatically respond to these load peaks so that no operator has to perform manual scaling (e.g. increase the number of pods in a deployment) – I mention this because this type of “automatic scaling” is still frequently seen in the field.&lt;/p&gt;

&lt;p&gt;Your app should be able to automatically deal with these challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When the resource demand &lt;strong&gt;increases&lt;/strong&gt; , your service(s) should be able to automatically &lt;strong&gt;scale up&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;When the demand &lt;strong&gt;decreases&lt;/strong&gt; , your service(s) should be able to &lt;strong&gt;scale down&lt;/strong&gt; (and save you some money!)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Options for scaling your workload
&lt;/h2&gt;

&lt;p&gt;As already mentioned, Kubernetes offers several options for horizontal scaling of services/pods. This includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Load / CPU-based scaling&lt;/li&gt;
&lt;li&gt;(Custom) Metrics-based scaling&lt;/li&gt;
&lt;li&gt;Event-Driven scaling&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Recap: Kubernetes Objects
&lt;/h2&gt;

&lt;p&gt;Before I start with the actual topic, here is a short recap of the Kubernetes objects involved when services/applications are hosted in a cluster – just to put everyone on the same page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KcNi6f0H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/recap.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KcNi6f0H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/recap.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Recap: Kubernetes Deployments, RS, Pods etc.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pods&lt;/strong&gt; : smallest unit of compute/scale in Kubernetes. Hosts your application and &lt;em&gt;n&lt;/em&gt; additional/“sidecar” containers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Labels&lt;/strong&gt; : key/value pairs to identify workloads/kubernetes objects&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ReplicaSet&lt;/strong&gt; : is responsible for scaling your pods. Ensures that the desired number of pods are up and running&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment&lt;/strong&gt; : defines the desired state of a workload (number of pods, rollout process/update strategy etc.). Manages ReplicaSets.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you need more of a “refresh”, please head over to the &lt;a href="https://kubernetes.io/docs/home/"&gt;official Kubernetes documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Horizontal Pod Autoscaler
&lt;/h2&gt;

&lt;p&gt;To avoid manual scaling, Kubernetes offers the concept of the “Horizontal Pod Autoscaler”, which many people have probably already heard of before. The &lt;a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/"&gt;Horizontal Pod Autoscaler&lt;/a&gt; (HPA) itself is a &lt;strong&gt;controller&lt;/strong&gt; and is configured by &lt;em&gt;HorizontalPodAutoscaler&lt;/em&gt; resource objects. It is able to scale pods in a Deployment/ReplicaSet (or StatefulSet) based on observed metrics such as CPU utilization.&lt;/p&gt;

&lt;h3&gt;
  
  
  How does it work?
&lt;/h3&gt;

&lt;p&gt;The HPA works in combination with the Metrics server, which determines the corresponding metrics from running Pods and makes them available to the HPA via an API (Resource Metrics API). Based on this information, the HPA is able to make scaling decisions and scale up or down according to what you defined in terms of thresholds. Let’s have a look at a sample definition in YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;autoscaling/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HorizontalPodAutoscaler&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-hpa&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;maxReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20&lt;/span&gt;
  &lt;span class="na"&gt;minReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;scaleTargetRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-deployment&lt;/span&gt;
  &lt;span class="na"&gt;targetCPUUtilizationPercentage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;50&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The sample shown above means: if the CPU utilization of the deployment (averaged across all pods!) constantly exceeds 50%, the autoscaler may scale the number of pods in the deployment between 5 and 20.&lt;/p&gt;

&lt;p&gt;The algorithm that determines the number of replicas works as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;replicas = ceil[currentReplicas \* ( currentValue / desiredValue )]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
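The calculation above can be reproduced in a few lines of Python (the function name, parameters and clamping helper are mine, not part of any Kubernetes API – the real controller additionally applies tolerances and stabilization logic):

```python
from math import ceil

def desired_replicas(current_replicas, current_value, desired_value,
                     min_replicas=1, max_replicas=10):
    """Sketch of the HPA scaling formula, with the result clamped to the
    configured minReplicas/maxReplicas bounds."""
    raw = ceil(current_replicas * (current_value / desired_value))
    return max(min_replicas, min(raw, max_replicas))

# 4 replicas averaging 80% CPU against a 50% target: ceil(4 * 1.6) = 7
print(desired_replicas(4, 80, 50, min_replicas=5, max_replicas=20))
```

Note that the result is always clamped to the `minReplicas`/`maxReplicas` range defined in the HPA object, no matter what the formula yields.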



&lt;p&gt;Here’s the whole process:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hRiroO9_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/hpa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hRiroO9_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/hpa.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pretty straightforward. So now, let’s have a look at a concrete sample.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sample
&lt;/h3&gt;

&lt;p&gt;Let’s see the Horizontal Pod Autoscaler in action. First, we need a workload as our scale target:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create ns hpa
namespace/hpa created
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl config set-context &lt;span class="nt"&gt;--current&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;hpa
Context &lt;span class="s2"&gt;"my-cluster"&lt;/span&gt; modified.
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl run nginx-worker &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--requests&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;cpu&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;200m &lt;span class="nt"&gt;--expose&lt;/span&gt; &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80
service/nginx-worker created
deployment.apps/nginx-worker created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As you can see above, we created a new namespace called &lt;em&gt;hpa&lt;/em&gt; and added a deployment of nginx pods (only one at the moment) – exposing port 80 so that we can fire some requests against the pods from within the cluster. Next, create the Horizontal Pod Autoscaler object.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl autoscale deployment nginx-worker &lt;span class="nt"&gt;--cpu-percent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5 &lt;span class="nt"&gt;--min&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nt"&gt;--max&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Let’s have a brief look at the YAML for the HPA:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;autoscaling/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HorizontalPodAutoscaler&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-worker&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;maxReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
  &lt;span class="na"&gt;minReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;scaleTargetRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-worker&lt;/span&gt;
  &lt;span class="na"&gt;targetCPUUtilizationPercentage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So now that everything is in place, we need some load on our nginx deployment. &lt;strong&gt;BTW&lt;/strong&gt; : You might wonder why the target value is that low (only 5%!). Well, it seems like the creators of NGINX have done a pretty good job. It’s really hard to “make NGINX sweat”! :)&lt;/p&gt;

&lt;p&gt;Let’s simulate load! Create another pod (in this case “busybox”) in the same namespace, “exec into it” and use &lt;em&gt;wget&lt;/em&gt; to request the default page of NGINX in an endless loop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl run &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;--tty&lt;/span&gt; load-generator &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;busybox /bin/sh
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;wget &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="nt"&gt;-O-&lt;/span&gt; http://nginx-worker.hpa.svc&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;While this is running, open another terminal and “watch” the Horizontal Pod Autoscaler do its job.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W_xK4bcW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/hpa_inaction.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W_xK4bcW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2020/05/hpa_inaction.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;HPA in Action&lt;/p&gt;

&lt;p&gt;As you can see, after a short period of time, the HPA recognizes that there is load on our deployment and that the current value of our metric (CPU utilization) is above the target value (29% vs. 5%). It then scales the pods within the deployment to an appropriate number, so that after a few seconds the utilization drops back to a value that is – more or less – within the defined range.&lt;/p&gt;

&lt;p&gt;After a while, the wget loop was aborted, so basically no more requests were sent to the pods. As you can see here, the autoscaler does not start removing pods immediately after that. It waits a given time (in this case 5 min) before pods are killed and the deployment is scaled back to “replicas: 1”.&lt;/p&gt;
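That delay is the controller's default downscale stabilization window (5 minutes), which avoids flapping replica counts. It can be tuned cluster-wide via the kube-controller-manager flag --horizontal-pod-autoscaler-downscale-stabilization, or – in newer API versions (autoscaling/v2beta2 and later, not the autoscaling/v1 version used above) – per HPA object; a sketch of the relevant spec fragment:

```yaml
spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # wait 5 min before scaling down
```

This only affects scale-down; scale-up decisions are still taken immediately.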

&lt;h2&gt;
  
  
  Wrap-Up
&lt;/h2&gt;

&lt;p&gt;In this first article, I discussed the basics of horizontal scaling in Kubernetes and how you can leverage CPU utilization – a standard metric in Kubernetes – to automatically scale your pods up and down. In the next article, I will show you how to use &lt;strong&gt;custom metrics&lt;/strong&gt; for scaling decisions – as you might guess, CPU is not always the best metric for deciding whether your service is under heavy load and needs more replicas to do the job.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>bestpractices</category>
      <category>basics</category>
      <category>scaling</category>
    </item>
    <item>
      <title>VS Code Git integration with ssh stops working</title>
      <dc:creator>Christian Dennig</dc:creator>
      <pubDate>Thu, 02 Jan 2020 09:53:23 +0000</pubDate>
      <link>https://forem.com/cdennig/vs-code-git-integration-with-ssh-stops-working-1mj0</link>
      <guid>https://forem.com/cdennig/vs-code-git-integration-with-ssh-stops-working-1mj0</guid>
      <description>&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;I recently updated the &lt;em&gt;ssh&lt;/em&gt; keys for my GitHub account on my Mac, and after adding the public key on GitHub everything worked as expected &lt;em&gt;from the command line&lt;/em&gt;. When I did a…&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git pull
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;…I was asked for the passphrase of my private key and the pull was executed. Great.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git pull 
Enter passphrase for key '/Users/christiandennig/.ssh/id_rsa': 
Already up to date.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But when I did the same from Visual Studio Code, I got the following error:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F01%2Fscreenshot-2020-01-02-at-10.19.18.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F01%2Fscreenshot-2020-01-02-at-10.19.18.png%3Fw%3D1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the &lt;em&gt;git&lt;/em&gt; logs, it says “Permission denied (publickey)”. This is odd, because VS Code uses the same &lt;em&gt;git&lt;/em&gt; executable as the command line (see the log output).&lt;/p&gt;

&lt;p&gt;It seems that VS Code isn’t able to ask for the passphrase when accessing the git repo?!&lt;/p&gt;

&lt;h2&gt;
  
  
  ssh-agent FTW
&lt;/h2&gt;

&lt;p&gt;The solution is simple. Just use &lt;em&gt;ssh-agent&lt;/em&gt; on your machine to enter the passphrase for your &lt;em&gt;ssh&lt;/em&gt; private key once – all subsequent calls that need your &lt;em&gt;ssh&lt;/em&gt; key will use the saved passphrase.&lt;/p&gt;

&lt;p&gt;Add your key to &lt;em&gt;ssh-agent&lt;/em&gt; (storing the passphrase in &lt;strong&gt;MacOS Keychain&lt;/strong&gt;!)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh-add -K ~/.ssh/id\_rsa 
Enter passphrase for /Users/christiandennig/.ssh/id\_rsa: 
Identity added: /Users/christiandennig/.ssh/id\_rsa (XXXX@microsoft.com)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
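To make this persist across reboots, you can additionally tell ssh to load keys into the agent (and read the passphrase from the Keychain) automatically. A sketch of the relevant ~/.ssh/config entries – UseKeychain is specific to the OpenSSH build that ships with macOS, and the IdentityFile path assumes the default key location:

```
Host *
  AddKeysToAgent yes
  UseKeychain yes
  IdentityFile ~/.ssh/id_rsa
```

With that in place, the first ssh/git call after a reboot loads the key silently instead of prompting for the passphrase.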



&lt;p&gt;The result in Keychain will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F01%2Fscreenshot-2020-01-02-at-10.34.28.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F01%2Fscreenshot-2020-01-02-at-10.34.28.png%3Fw%3D1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you now open Visual Studio Code and start synchronizing your git repository, the calls to GitHub will use the credentials saved via &lt;em&gt;ssh-agent&lt;/em&gt;, and all commands will execute successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F01%2Fscreenshot-2020-01-02-at-10.37.12.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fheartofcode.files.wordpress.com%2F2020%2F01%2Fscreenshot-2020-01-02-at-10.37.12.png%3Fw%3D1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;HTH.&lt;/p&gt;

&lt;p&gt;(BTW: Happy New Year 🙂)&lt;/p&gt;

</description>
      <category>vscode</category>
      <category>git</category>
      <category>ssh</category>
      <category>errors</category>
    </item>
    <item>
      <title>Keep your AKS worker nodes up-to-date with kured</title>
      <dc:creator>Christian Dennig</dc:creator>
      <pubDate>Sat, 21 Dec 2019 10:01:54 +0000</pubDate>
      <link>https://forem.com/cdennig/keep-your-aks-worker-nodes-up-to-date-with-kured-1pan</link>
      <guid>https://forem.com/cdennig/keep-your-aks-worker-nodes-up-to-date-with-kured-1pan</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;When you are running several AKS / Kubernetes clusters in production, the process of keeping your application(s), their dependencies, and Kubernetes itself along with the worker nodes up to date turns into a time-consuming task for (sometimes) more than one person. Looking at the worker nodes that form your AKS cluster, Microsoft helps you by applying the latest OS / security updates on a nightly basis. Great, but the downside is: when a worker node needs a restart to fully apply these patches, Microsoft will not reboot that particular machine. The reason is obvious: they simply don’t know when it is best to do so. So basically, you have to do this on your own.&lt;/p&gt;

&lt;p&gt;Luckily, there is a project from &lt;a href="https://www.weave.works/"&gt;WeaveWorks&lt;/a&gt; called “&lt;a href="https://github.com/weaveworks/kured"&gt;Kubernetes Restart Daemon&lt;/a&gt;” or &lt;em&gt;kured&lt;/em&gt;, that gives you the ability to define a timeslot where it will be okay to automatically pull a node from your cluster and do a simple reboot on it.&lt;/p&gt;

&lt;p&gt;Under the hood, kured works by adding a DaemonSet to your cluster that watches for a reboot sentinel, e.g. the file &lt;em&gt;/var/run/reboot-required&lt;/em&gt;. If that file is present on a node, kured “cordons and drains” that particular node, initiates a reboot and uncordons it afterwards. Of course, there are situations where you want to suppress that behavior, and fortunately &lt;em&gt;kured&lt;/em&gt; gives us a few options to do so (Prometheus alerts or the presence of specific pods on a node…).&lt;/p&gt;

&lt;p&gt;So, let’s give it a try…&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation of kured
&lt;/h2&gt;

&lt;p&gt;I assume, you already have a running Kubernetes cluster, so we start by installing &lt;em&gt;kured&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f https://github.com/weaveworks/kured/releases/download/1.2.0/kured-1.2.0-dockerhub.yaml

clusterrole.rbac.authorization.k8s.io/kured created
clusterrolebinding.rbac.authorization.k8s.io/kured created
role.rbac.authorization.k8s.io/kured created
rolebinding.rbac.authorization.k8s.io/kured created
serviceaccount/kured created
daemonset.apps/kured created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Let’s have a look at what has been installed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -n kube-system -o wide | grep kured
kured-5rd66 1/1 Running 0 4m18s 10.244.1.6 aks-npstandard-11778863-vmss000001 &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;
kured-g9nhc 1/1 Running 0 4m20s 10.244.2.5 aks-npstandard-11778863-vmss000000 &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;
kured-vfzjk 1/1 Running 0 4m20s 10.244.0.10 aks-npstandard-11778863-vmss000002 &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As you can see, we now have three kured pods running.&lt;/p&gt;

&lt;h2&gt;
  
  
  Test kured
&lt;/h2&gt;

&lt;p&gt;To be able to test the installation, we simply simulate a required node reboot by creating the corresponding file on one of the worker nodes. We need to access a node via ssh. Just follow the official documentation on docs.microsoft.com:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/aks/ssh"&gt;https://docs.microsoft.com/en-us/azure/aks/ssh&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have access to a worker node via ssh, create the file via:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo touch /var/run/reboot-required
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now exit the pod, wait for the kured daemon to trigger a reboot and watch the cluster nodes by executing &lt;em&gt;kubectl get nodes -w&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get nodes -w
NAME STATUS ROLES AGE VERSION
aks-npstandard-11778863-vmss000000 Ready agent 34m v1.15.5
aks-npstandard-11778863-vmss000001 Ready agent 34m v1.15.5
aks-npstandard-11778863-vmss000002 Ready agent 35m v1.15.5
aks-npstandard-11778863-vmss000001 Ready agent 35m v1.15.5
aks-npstandard-11778863-vmss000000 Ready agent 35m v1.15.5
aks-npstandard-11778863-vmss000002 Ready,SchedulingDisabled agent 35m v1.15.5
aks-npstandard-11778863-vmss000002 Ready,SchedulingDisabled agent 35m v1.15.5
aks-npstandard-11778863-vmss000002 Ready,SchedulingDisabled agent 35m v1.15.5
aks-npstandard-11778863-vmss000001 Ready agent 36m v1.15.5
aks-npstandard-11778863-vmss000000 Ready agent 36m v1.15.5
aks-npstandard-11778863-vmss000002 NotReady,SchedulingDisabled agent 36m v1.15.5
aks-npstandard-11778863-vmss000002 NotReady,SchedulingDisabled agent 36m v1.15.5
aks-npstandard-11778863-vmss000002 Ready,SchedulingDisabled agent 36m v1.15.5
aks-npstandard-11778863-vmss000002 Ready,SchedulingDisabled agent 36m v1.15.5
aks-npstandard-11778863-vmss000002 Ready agent 36m v1.15.5
aks-npstandard-11778863-vmss000002 Ready agent 36m v1.15.5
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Corresponding output of the kured pod on that particular machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl logs -n kube-system kured-ngb5t -f
time="2019-12-23T12:39:25Z" level=info msg="Kubernetes Reboot Daemon: 1.2.0"
time="2019-12-23T12:39:25Z" level=info msg="Node ID: aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:39:25Z" level=info msg="Lock Annotation: kube-system/kured:weave.works/kured-node-lock"
time="2019-12-23T12:39:25Z" level=info msg="Reboot Sentinel: /var/run/reboot-required every 2m0s"
time="2019-12-23T12:39:25Z" level=info msg="Blocking Pod Selectors: []"
time="2019-12-23T12:39:30Z" level=info msg="Holding lock"
time="2019-12-23T12:39:30Z" level=info msg="Uncordoning node aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:39:31Z" level=info msg="node/aks-npstandard-11778863-vmss000000 uncordoned" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:39:31Z" level=info msg="Releasing lock"
time="2019-12-23T12:41:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:43:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:45:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:47:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:49:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:51:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:53:04Z" level=info msg="Reboot required"
time="2019-12-23T12:53:04Z" level=info msg="Acquired reboot lock"
time="2019-12-23T12:53:04Z" level=info msg="Draining node aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:53:06Z" level=info msg="node/aks-npstandard-11778863-vmss000000 cordoned" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:53:06Z" level=warning msg="WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: aks-ssh; Ignoring DaemonSet-managed pods: kube-proxy-7rhfs, kured-ngb5t" cmd=/usr/bin/kubectl std=err
time="2019-12-23T12:53:42Z" level=info msg="pod/aks-ssh evicted" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:53:42Z" level=info msg="node/aks-npstandard-11778863-vmss000000 evicted" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:53:42Z" level=info msg="Commanding reboot"
time="2019-12-23T12:53:42Z" level=info msg="Waiting for reboot"
...
...
&amp;lt;AFTER_THE_REBOOT&amp;gt;
...
...
time="2019-12-23T12:54:15Z" level=info msg="Kubernetes Reboot Daemon: 1.2.0"
time="2019-12-23T12:54:15Z" level=info msg="Node ID: aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:54:15Z" level=info msg="Lock Annotation: kube-system/kured:weave.works/kured-node-lock"
time="2019-12-23T12:54:15Z" level=info msg="Reboot Sentinel: /var/run/reboot-required every 2m0s"
time="2019-12-23T12:54:15Z" level=info msg="Blocking Pod Selectors: []"
time="2019-12-23T12:54:21Z" level=info msg="Holding lock"
time="2019-12-23T12:54:21Z" level=info msg="Uncordoning node aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:54:22Z" level=info msg="node/aks-npstandard-11778863-vmss000000 uncordoned" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:54:22Z" level=info msg="Releasing lock"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As you can see, the pods have been drained off the node (&lt;em&gt;SchedulingDisabled&lt;/em&gt;), which has then been successfully rebooted, uncordoned afterwards and is now ready to run pods again.&lt;/p&gt;

&lt;h2&gt;
  
  
  Customize kured Installation / Best Practices
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Reboot only on certain days/hours
&lt;/h3&gt;

&lt;p&gt;Of course, it is not always a good option to reboot your worker nodes during “office hours”. When you want to limit the timeslot where &lt;em&gt;kured&lt;/em&gt; is allowed to reboot your machines, you can make use of the following parameters during the installation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;reboot-days&lt;/em&gt; – the days kured is allowed to reboot a machine&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;start-time&lt;/em&gt; – reboot is possible after specified time&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;end-time&lt;/em&gt; – reboot is possible before specified time&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;time-zone&lt;/em&gt; – timezone for &lt;em&gt;start-time&lt;/em&gt;/&lt;em&gt;end-time&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;
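Put together, a hypothetical excerpt of the kured DaemonSet container spec using these parameters could look like this – the flag names are the ones listed above, the concrete values are just examples:

```yaml
command:
  - /usr/bin/kured
  - --reboot-days=sat,sun       # only reboot on weekends
  - --start-time=2am            # earliest allowed reboot time
  - --end-time=5am              # latest allowed reboot time
  - --time-zone=Europe/Berlin   # timezone for start-time/end-time
```

Outside this window, kured keeps detecting the sentinel file but postpones the actual reboot.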

&lt;h3&gt;
  
  
  Skip rebooting when certain pods are on a node
&lt;/h3&gt;

&lt;p&gt;Another option that is very useful for production workloads is the possibility to skip a reboot when certain pods are present on a node. The reason could be that the service is very critical to your application and therefore pretty “expensive” when not available. You may want to monitor the process of rebooting such a node and be able to intervene quickly if something goes wrong.&lt;/p&gt;

&lt;p&gt;As always in the Kubernetes environment, you can achieve this by using label selectors for &lt;em&gt;kured&lt;/em&gt; – an option set during installation called &lt;em&gt;blocking-pod-selector&lt;/em&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Notify via WebHook
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;kured&lt;/em&gt; also offers the possibility to call a Slack webhook when nodes are about to be rebooted. Well, we can “misuse” that webhook to trigger our own action, because such a webhook is just a simple HTTPS POST with a predefined body, e.g.:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ "text": "Rebooting node aks-npstandard-11778863-vmss000000", "username": "kured" }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To be as flexible as possible, we leverage the 200+ Azure Logic Apps connectors that are currently available to basically do anything we want. In the current sample, we want to receive a Teams notification to a certain team/channel and send a mail to our Kubernetes admins whenever &lt;em&gt;kured&lt;/em&gt; triggers an action.&lt;/p&gt;

&lt;p&gt;You can find the important parts of the sample Logic App on my &lt;a href="https://gist.github.com/cdennig/8255f5812f8c7006e5dc1b042c1eaeff"&gt;GitHub account&lt;/a&gt;. Here is a basic overview of it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hyl16fCX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2019/12/logicapp.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hyl16fCX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2019/12/logicapp.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Azure Logic App&lt;/p&gt;

&lt;p&gt;What you basically have to do is create an Azure Logic App with an &lt;em&gt;HTTP trigger&lt;/em&gt;, parse the JSON body of the POST request and trigger “Send Email” and “Post a Teams Message” actions. When you save the Logic App for the first time, the webhook endpoint will be generated for you. Take that URL and use the value for the &lt;em&gt;slack-hook-url&lt;/em&gt; parameter during the installation of &lt;em&gt;kured.&lt;/em&gt;&lt;/p&gt;
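To illustrate what the “Parse JSON” step has to work with, here is the payload shape from above reduced to code – the split-on-whitespace extraction of the node name is just my example, not something kured or Logic Apps prescribe:

```python
import json

# the "Slack" payload kured sends is a plain JSON document
raw = '{ "text": "Rebooting node aks-npstandard-11778863-vmss000000", "username": "kured" }'

payload = json.loads(raw)
# the node name is the last word of the message text
node = payload["text"].split()[-1]
print(node)  # aks-npstandard-11778863-vmss000000
```

Anything that can receive an HTTP POST and parse this document – a Logic App, an Azure Function, a small web service – can react to kured's notifications.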

&lt;blockquote&gt;
&lt;p&gt;If you need more information on creating an Azure Logic App, please see the official documentation: &lt;a href="https://docs.microsoft.com/en-us/azure/connectors/connectors-native-reqres"&gt;https://docs.microsoft.com/en-us/azure/connectors/connectors-native-reqres&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When everything is set up, the Teams notifications and emails you receive will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M-E2wFWT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2019/12/teams-1.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M-E2wFWT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2019/12/teams-1.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Teams Notification&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hnJeab8L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2019/12/mail-1.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hnJeab8L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://heartofcode.files.wordpress.com/2019/12/mail-1.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;Mail Integration&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap-Up
&lt;/h2&gt;

&lt;p&gt;In this sample, we got to know the Kubernetes Reboot Daemon, which helps you keep your AKS cluster up to date by simply specifying a timeslot in which the daemon is allowed to reboot your cluster/worker nodes and apply security patches to the underlying OS. We also saw how you can make use of the “Slack” webhook feature to do basically anything you want with &lt;em&gt;kured&lt;/em&gt; notifications by using Azure Logic Apps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; if you have a huge cluster, you should think about running multiple DaemonSets where each of them is responsible for certain nodes/nodepools. It is pretty easy to set this up, just by using Kubernetes node affinities.&lt;/p&gt;

</description>
      <category>kured</category>
      <category>azure</category>
      <category>kubernetes</category>
      <category>aks</category>
    </item>
  </channel>
</rss>
