<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Otomato</title>
    <description>The latest articles on Forem by Otomato (@otomato_io).</description>
    <link>https://forem.com/otomato_io</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F4452%2Fd4192138-29c5-4415-a912-4a08d54a13da.jpg</url>
      <title>Forem: Otomato</title>
      <link>https://forem.com/otomato_io</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/otomato_io"/>
    <language>en</language>
    <item>
      <title>Why You Need an Internal Developer Portal NOW!!!</title>
      <dc:creator>otomato</dc:creator>
      <pubDate>Sun, 21 May 2023 08:51:52 +0000</pubDate>
      <link>https://forem.com/otomato_io/why-you-need-an-internal-developer-portal-now-2cd5</link>
      <guid>https://forem.com/otomato_io/why-you-need-an-internal-developer-portal-now-2cd5</guid>
      <description>&lt;p&gt;IDPs are all the buzz now. But they aren't just buzz - they are an actual blessing for the organizations that get them right. Our CEO Anton describes why in his new &lt;a href="https://dev.to/antweiss/why-you-need-an-internal-developer-portal-now-35l9"&gt;post&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>DevOpsCon Munich 2022 - Human Interactions that Matter</title>
      <dc:creator>Ant(on) Weiss</dc:creator>
      <pubDate>Fri, 09 Dec 2022 08:50:00 +0000</pubDate>
      <link>https://forem.com/otomato_io/devopscon-munich-2022-human-interactions-that-matter-1953</link>
      <guid>https://forem.com/otomato_io/devopscon-munich-2022-human-interactions-that-matter-1953</guid>
      <description>&lt;p&gt;I'm at the Munich airport waiting for my flight home.&lt;br&gt;
This is a cold, gloomy morning - a perfect time to introspect and retrospect.&lt;/p&gt;

&lt;p&gt;The conference had the level of quality I've learned to expect from S&amp;amp;S Media. They organize several international event series - DevOpsCon, JAX, Serverless Conference, etc. - and they all rock. The Munich conference ran 4 days in total, but I only arrived on the 3rd day - to give my talk called "Resilience in Engineering... and Life" and then do my part in the advanced CI/CD workshop.&lt;/p&gt;

&lt;p&gt;The talk was an experiment and a challenge I've imposed on myself. During the COVID-19 era I got so fed up with virtual talks and so hungry for true human interaction that I found myself unable to attend and enjoy even the regular conference talks. You know - those where the speaker tries to put on a show on stage while half the audience are on their phones and laptops.&lt;/p&gt;

&lt;p&gt;So instead of just writing and rehearsing a talk as I would usually do - I decided to have an open conversation with the audience about resilience - technical and psychological. After all - resilience engineering is about what we do to continue operating in expected as well as unexpected conditions. So I've opened the tap of unexpected events and drank from it. And - from the get-go I encouraged my audience to get up close and personal and engage. And it all clicked! We talked about burnout and observability, about HA and emotional support, about entropy and growth. And how they're all components of one big socio-technical system. Probably my most important conference talk so far.&lt;/p&gt;

&lt;p&gt;That night Sebastian Meyen - the chief content officer at S&amp;amp;S Media - took the speakers out and I had a great, deep conversation with Zbynek Roubalik - one of the maintainers of both &lt;a href="https://knative.dev/" rel="noopener noreferrer"&gt;Knative&lt;/a&gt; and &lt;a href="https://keda.sh/" rel="noopener noreferrer"&gt;KEDA&lt;/a&gt;. He got me all excited about ChatGPT. I even tried to play with it that same night when I came back to the hotel. But I got bored after 20 minutes. It still feels like talking to a machine... I don't see the threat that everybody seems to talk about. I also still don't see the value it provides. Which for me would be the reduction of toil that goes into coding and content creation. But I'm optimistic - there's room for improvement!&lt;/p&gt;

&lt;p&gt;And then came the workshop day. Thanks to Nir Koren for throwing this together. My part was about Progressive Delivery with Argo Rollouts. I believe I did a great job presenting the concept and the technology - complete with Traefik Ingress and Prometheus integration. Folks came up to me to say they enjoyed it and learned from it - what more do I need?&lt;/p&gt;

&lt;p&gt;And again - the best part was the panel discussion on CI/CD topics that we held at the end. So again - there's nothing like open human conversation for making work better. If you're organizing a conference - take note: it's the human interactions that will make your event memorable and enjoyable.&lt;/p&gt;

&lt;p&gt;Next week I'm teaching 2 Kubernetes classes. More human interactions for me. Life is great and full of meaning when you're focused on humans. That's how I always try to work. I don't always succeed, but I continue learning.&lt;/p&gt;

&lt;p&gt;And I would like to finish this post with a great quote from Joseph Campbell which I recently found in Brené Brown's book:&lt;/p&gt;

&lt;p&gt;“If you can see your path laid out in front of you step by step, you know it's not your path. Your own path you make with every step you take. That's why it's your path.”&lt;/p&gt;

</description>
      <category>conference</category>
      <category>devopscon</category>
      <category>argo</category>
      <category>resilience</category>
    </item>
    <item>
      <title>Liveness Probes: Feel the Pulse of the App</title>
      <dc:creator>Roman Belshevitz</dc:creator>
      <pubDate>Mon, 28 Nov 2022 13:30:16 +0000</pubDate>
      <link>https://forem.com/otomato_io/liveness-probes-feel-the-pulse-of-the-app-133e</link>
      <guid>https://forem.com/otomato_io/liveness-probes-feel-the-pulse-of-the-app-133e</guid>
      <description>&lt;p&gt;This article will provide some helpful examples as the author  examines probes in Kubernetes. A correct probe definition can increase pod availability and resilience!&lt;/p&gt;

&lt;h2&gt;
  
  
  A Kubernetes Liveness Probe: What Is It?
&lt;/h2&gt;

&lt;p&gt;Based on a given test, a liveness probe verifies that the application inside a container is alive and working.&lt;/p&gt;

&lt;h3&gt;
  
  
  ⚙️ Liveness probes
&lt;/h3&gt;

&lt;p&gt;They are used by the &lt;code&gt;kubelet&lt;/code&gt; to determine when to restart a container. Applications that crash or enter broken states are detected and, in many cases, can be rectified by restarting them.&lt;/p&gt;

&lt;p&gt;A successful liveness probe results in no action being taken and no log entry. If it fails, the event is recorded, and the container is killed by the &lt;code&gt;kubelet&lt;/code&gt; in accordance with the &lt;code&gt;restartPolicy&lt;/code&gt; settings.&lt;/p&gt;
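&lt;p&gt;To inspect those recorded events for a specific pod (an illustrative check - the pod name &lt;code&gt;my-pod&lt;/code&gt; is an assumption, and a running cluster is required), you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe pod my-pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Failed probes typically show up in the &lt;code&gt;Events&lt;/code&gt; section as &lt;code&gt;Unhealthy&lt;/code&gt; events carrying the probe's failure message.&lt;/p&gt;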

&lt;p&gt;A liveness probe should be used when a pod appears to be running but the application is not working properly - during a deadlock, for example. The pod looks operational, but it is useless since it cannot handle traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fik5g939olte88z1gpa24.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fik5g939olte88z1gpa24.png" alt=" " width="800" height="515"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;🖼️ Pic source: K21Academy&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Since the &lt;code&gt;kubelet&lt;/code&gt; will check the &lt;code&gt;restartPolicy&lt;/code&gt; and restart the container automatically if it is set to &lt;code&gt;Always&lt;/code&gt; or &lt;code&gt;OnFailure&lt;/code&gt;, liveness probes are not required when the application is designed to crash the container on failure. NGINX, &lt;a href="https://serverfault.com/questions/1003361/how-to-automatically-restart-nginx-when-it-goes-down" rel="noopener noreferrer"&gt;for example&lt;/a&gt;, launches rapidly and exits if it encounters a problem that prevents it from serving pages. You don't need a liveness probe in this case.&lt;/p&gt;

&lt;p&gt;There are common adjustable fields for every type of probe:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;initialDelaySeconds&lt;/code&gt;: The probe starts running initialDelaySeconds after the container starts (default: 0)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;periodSeconds&lt;/code&gt;: How often the probe should run (default: 10)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;timeoutSeconds&lt;/code&gt;: Probe timeout (default: 1)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;successThreshold&lt;/code&gt;: Number of consecutive successful probes required to mark the container healthy/ready (default: 1)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;failureThreshold&lt;/code&gt;: Number of consecutive failed probes before the container is deemed unhealthy/not ready (default: 3)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;periodSeconds&lt;/code&gt; field in each of the examples below says that the &lt;code&gt;kubelet&lt;/code&gt; should run a liveness probe every 5 seconds. The &lt;code&gt;initialDelaySeconds&lt;/code&gt; field instructs the &lt;code&gt;kubelet&lt;/code&gt; to delay the first probe for 5 seconds.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;timeoutSeconds&lt;/code&gt; option (time to wait for the reply), &lt;code&gt;successThreshold&lt;/code&gt; (number of successful probe executions required to mark the container healthy), and &lt;code&gt;failureThreshold&lt;/code&gt; (number of failed probe executions required to mark the container unhealthy) can also be customized, if desired.&lt;/p&gt;

&lt;p&gt;All different liveness probes can use these five parameters.&lt;/p&gt;
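&lt;p&gt;As a minimal sketch, here is a liveness probe fragment with all five of these fields set to illustrative values (the endpoint, port and numbers are assumptions for demonstration, not recommendations):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5    # wait 5 s after container start before the first probe
  periodSeconds: 10         # probe every 10 seconds
  timeoutSeconds: 1         # each probe attempt times out after 1 second
  successThreshold: 1       # must be 1 for liveness probes
  failureThreshold: 3       # 3 consecutive failures trigger a restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;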
&lt;h2&gt;
  
  
  What other Kubernetes probes are available?
&lt;/h2&gt;

&lt;p&gt;Although the use of Liveness probes will be the main emphasis of this article, you should be aware that Kubernetes also supports the following other types of probes:&lt;/p&gt;
&lt;h3&gt;
  
  
  ⚙️ Startup probes
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;kubelet&lt;/code&gt; uses startup probes to determine when a container application has started. When enabled, a startup probe disables the liveness and readiness checks until it succeeds, making sure those probes don't interfere with the application's startup.&lt;/p&gt;

&lt;p&gt;These are especially helpful for slow-starting containers, since they prevent the &lt;code&gt;kubelet&lt;/code&gt; from killing them on a failed liveness probe before they have even started. Set the startup probe's &lt;code&gt;failureThreshold&lt;/code&gt; higher if liveness probes are used on the same endpoint, in order to allow for lengthy startup periods.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-api-deployment&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-api&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-api&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myrepo/test-api:0.1&lt;/span&gt;
        &lt;span class="na"&gt;startupProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/health/startup&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
          &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When a Pod starts and the probe fails, Kubernetes will try &lt;code&gt;failureThreshold&lt;/code&gt; times before giving up. Giving up in the case of a liveness probe means restarting the container. In the case of a readiness probe, the Pod will be marked &lt;code&gt;Unready&lt;/code&gt;. The default is &lt;code&gt;3&lt;/code&gt;; the minimum value is &lt;code&gt;1&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Some startup probe math: why is it important?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;0 - 10 s: the container has been spun up, but the &lt;code&gt;kubelet&lt;/code&gt; does nothing while waiting for &lt;code&gt;initialDelaySeconds&lt;/code&gt; to pass&lt;/li&gt;
&lt;li&gt;10 - 20 s: the first probe request is sent, but no response comes back because the app hasn’t stood up its APIs yet; this counts as a failure, either due to the 2-second timeout or an immediate TCP connection error&lt;/li&gt;
&lt;li&gt;20 - 30 s: the app is up, but has only started fetching credentials, configuration and so on, so the response to the probe request is a 5xx&lt;/li&gt;
&lt;li&gt;30 - 210 s: the &lt;code&gt;kubelet&lt;/code&gt; keeps probing, but no success response arrives and the limit set by the &lt;code&gt;failureThreshold&lt;/code&gt; is reached. In this case, as per the deployment configuration for the startup probe, the pod will be restarted after roughly 212 seconds.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1cficnbhlf0oqjn15ak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1cficnbhlf0oqjn15ak.png" alt=" " width="800" height="305"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;🖼️ Pic source: Wojciech Sierakowski (HMH Engineering)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It might be a little excessive to wait more than 3 minutes for the app to launch locally with faked dependencies!&lt;/p&gt;

&lt;p&gt;🎯 It might also be better to shorten this interval if you are absolutely certain that, for example, reading secrets and credentials and establishing connections to DBs and other data sources shouldn't take that long. An unnecessarily long startup window slows down your deployments.&lt;/p&gt;

&lt;p&gt;It may also be worth figuring out whether you even need more nodes - you don’t want to waste money on resources you don’t need. Take a look at &lt;code&gt;kubectl top nodes&lt;/code&gt; to see if you need to scale the nodes.&lt;/p&gt;

&lt;p&gt;🚧 If a probe fails, the event is recorded, and the container is killed by the &lt;code&gt;kubelet&lt;/code&gt; in accordance with the &lt;code&gt;restartPolicy&lt;/code&gt; settings.&lt;/p&gt;

&lt;p&gt;When a container gets restarted, you usually want to check the logs to see why the application went unhealthy. You can do this with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs &amp;lt;pod-name&amp;gt; --previous
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ⚙️ Readiness probes
&lt;/h3&gt;

&lt;p&gt;Readiness probes keep track of the application's availability: if the probe fails, no traffic is forwarded to the pod. They are employed when an application needs some preparation - loading configuration, for instance - before it is usable. Additionally, an overloaded application may fail the probe, which stops further traffic from being routed to it and gives it room to recover. If the probe fails, the endpoints controller removes the pod from the Service endpoints.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-api-deployment&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-api&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-api&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myrepo/test-api:0.1&lt;/span&gt;
        &lt;span class="na"&gt;readinessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/ready&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
          &lt;span class="na"&gt;successThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the readiness probe fails but the liveness probe succeeds, the &lt;code&gt;kubelet&lt;/code&gt; concludes that the container is not yet ready to receive network traffic, but is making progress in that direction.&lt;/p&gt;

&lt;h2&gt;
  
  
  The operation of Kubernetes probes
&lt;/h2&gt;

&lt;p&gt;The probes are controlled by the &lt;code&gt;kubelet&lt;/code&gt; - the main "node agent" that runs on each node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrzal6zufqn9o5ivc2as.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrzal6zufqn9o5ivc2as.png" alt=" " width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;🖼️ Pic source: Andrew Lock (Datadog). SVG is &lt;a href="https://andrewlock.net/content/images/2020/k8s_probes.svg" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The application needs to support one of the following handlers in order to use a K8S probe effectively:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ExecAction&lt;/code&gt; handler: Executes a command inside the container. If the command returns a status code of &lt;code&gt;0&lt;/code&gt;, the diagnostic is successful.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TCPSocketAction&lt;/code&gt; handler: Tries to establish a TCP connection to the pod's IP address on a particular port. If the port turns out to be open, the diagnostic is successful.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;HTTPGetAction&lt;/code&gt; handler: Sends an &lt;code&gt;HTTP GET&lt;/code&gt; request to the pod's IP address, on a particular port and a predetermined path. If the response code falls between &lt;code&gt;200&lt;/code&gt; and &lt;code&gt;399&lt;/code&gt;, the diagnostic is successful.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before version 1.24 Kubernetes did not support gRPC health checks natively. This left the gRPC developers with the following three approaches when they deploy to Kubernetes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2d3lx0swx49kxm1qjn54.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2d3lx0swx49kxm1qjn54.png" alt=" " width="800" height="286"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;🖼️ Pic source: Ahmet Alp Balkan (Twitter, ex-Google)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As of Kubernetes version 1.24, a gRPC handler &lt;a href="https://kubernetes.io/blog/2022/05/13/grpc-probes-now-in-beta/" rel="noopener noreferrer"&gt;can be configured&lt;/a&gt; to be used by the &lt;code&gt;kubelet&lt;/code&gt; for application liveness checks, if your application implements the gRPC Health Checking Protocol. Checks that use gRPC require the &lt;code&gt;GRPCContainerProbe&lt;/code&gt; &lt;a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="noopener noreferrer"&gt;feature gate&lt;/a&gt;, which is in beta and enabled by default as of 1.24.&lt;/p&gt;

&lt;p&gt;When the &lt;code&gt;kubelet&lt;/code&gt; conducts a probe on a container, the result is &lt;code&gt;Success&lt;/code&gt;, &lt;code&gt;Failure&lt;/code&gt;, or &lt;code&gt;Unknown&lt;/code&gt;, depending on whether the diagnostic succeeded, failed, or could not complete for some other reason.&lt;/p&gt;
&lt;h2&gt;
  
  
  So, how often to track the pulse?
&lt;/h2&gt;

&lt;p&gt;You should examine the system behavior and the typical startup times of the pod and its containers before defining a probe, so that you can choose appropriate thresholds. The probe settings should also be revisited as the infrastructure or application changes: for instance, allocating more system resources to a pod can affect the values its probes need.&lt;/p&gt;
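&lt;p&gt;As a rough rule of thumb (an approximation - exact timing also depends on when within a probe period the failure occurs): once a container becomes unhealthy, the &lt;code&gt;kubelet&lt;/code&gt; needs up to &lt;code&gt;failureThreshold * periodSeconds&lt;/code&gt; seconds, plus up to &lt;code&gt;timeoutSeconds&lt;/code&gt; per attempt, to act. With the defaults - a period of 10 seconds and a failure threshold of 3 - that means roughly 30 seconds before the container is restarted.&lt;/p&gt;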
&lt;h2&gt;
  
  
  Handlers in action: some examples
&lt;/h2&gt;
&lt;h3&gt;
  
  
  &lt;code&gt;ExecAction&lt;/code&gt; handler: how can it be useful in practice?
&lt;/h3&gt;

&lt;p&gt;🎯 It allows you to use commands inside a container to check the container's liveness. With this option you can examine several aspects of a container's operation, such as the existence of files, their contents, and other conditions accessible at the command level.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ExecAction&lt;/code&gt; is executed inside the container and is deemed failed if the command exits with any status code other than &lt;code&gt;0&lt;/code&gt; (zero).&lt;/p&gt;

&lt;p&gt;The example below demonstrates how to use an &lt;code&gt;exec&lt;/code&gt; probe with the &lt;code&gt;cat&lt;/code&gt; command to check whether a file exists at the path &lt;code&gt;/usr/share/liveness/html/index.html&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;liveness&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;liveness-exec&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;liveness&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry.k8s.io/liveness:0.1&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
    &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;exec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cat&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/usr/share/liveness/html/index.html&lt;/span&gt;
      &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
      &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🚧 If the file does not exist, the liveness probe fails and the container is restarted.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;TCPSocketAction&lt;/code&gt; handler: how can it be useful in practice?
&lt;/h3&gt;

&lt;p&gt;In this use case, the liveness probe uses the TCP handler to check whether port &lt;code&gt;8080&lt;/code&gt; is active and open. With this configuration, the &lt;code&gt;kubelet&lt;/code&gt; will try to open a socket to your container on the designated port.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;liveness&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;liveness-tcp&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;liveness&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry.k8s.io/liveness:0.1&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
    &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;tcpSocket&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
      &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
      &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🚧 If the connection cannot be established, the liveness probe fails and the container is restarted.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;HTTPGetAction&lt;/code&gt; handler: how can it be useful in practice?
&lt;/h3&gt;

&lt;p&gt;This case demonstrates the HTTP handler, which will send an HTTP GET request to the &lt;code&gt;/health&lt;/code&gt; path on port &lt;code&gt;8080&lt;/code&gt;. Any status code from &lt;code&gt;200&lt;/code&gt; up to (but not including) &lt;code&gt;400&lt;/code&gt; indicates that the probe was successful.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;liveness&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;liveness-http&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;liveness&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry.k8s.io/liveness:0.1&lt;/span&gt;
    &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/health&lt;/span&gt;
        &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
        &lt;span class="na"&gt;httpHeaders&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Custom-Header&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ItsAlive&lt;/span&gt;
      &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
      &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🚧 If a code outside this range is received, the probe fails and the container is restarted. Any custom headers you want to send can be defined with the &lt;code&gt;httpHeaders&lt;/code&gt; option.&lt;/p&gt;
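&lt;p&gt;Assuming the container ships with &lt;code&gt;curl&lt;/code&gt; (an assumption; minimal probe images often don't include it), you can emulate the kubelet's request by hand to verify the endpoint and the header handling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print only the status code the probe would see
kubectl exec liveness-http -- curl -s -o /dev/null -w "%{http_code}\n" \
    -H "Custom-Header: ItsAlive" http://localhost:8080/health
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;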

&lt;h3&gt;
  
  
  gRPC handler: how can it be useful in practice?
&lt;/h3&gt;

&lt;p&gt;gRPC protocol is on its way to becoming the &lt;em&gt;lingua franca&lt;/em&gt; for communication between cloud-native microservices. If you are deploying gRPC applications to Kubernetes today, you may be wondering about the best way &lt;a href="https://github.com/grpc/grpc/blob/master/doc/health-checking.md" rel="noopener noreferrer"&gt;to configure health checks&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This example demonstrates how to check the responsiveness of port &lt;code&gt;2379&lt;/code&gt; using the gRPC health checking protocol. To use a gRPC probe, a port must be specified. If the &lt;a href="https://kubernetes.io/docs/reference/using-api/health-checks/" rel="noopener noreferrer"&gt;health endpoint&lt;/a&gt; is configured on a non-default service, you must also specify the service name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;liveness&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;liveness-gRPC&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;liveness&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry.k8s.io/liveness:0.1&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2379&lt;/span&gt;
    &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;grpc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2379&lt;/span&gt;
      &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
      &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🚧 The container will be restarted if the gRPC endpoint stops responding and the liveness probe fails.&lt;/p&gt;

&lt;p&gt;Since built-in gRPC probes do not distinguish between &lt;a href="https://grpc.github.io/grpc/core/md_doc_statuscodes.html" rel="noopener noreferrer"&gt;error codes&lt;/a&gt;, all errors are regarded as probe failures.&lt;/p&gt;
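&lt;p&gt;To poke the same health service manually, you can use &lt;code&gt;grpcurl&lt;/code&gt; (assuming it is installed and the pod from the manifest above is running):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In one terminal: forward the pod's gRPC port to localhost
kubectl port-forward pod/liveness-grpc 2379:2379

# In another terminal: query the standard gRPC health service
# A healthy server answers with status SERVING
grpcurl -plaintext localhost:2379 grpc.health.v1.Health/Check
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;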

&lt;h2&gt;
  
  
  Using liveness probes in the wrong way can lead to disaster
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Please remember that the container will be restarted if the liveness probe fails. Unlike a readiness probe, a liveness probe is not conventionally meant to examine dependencies. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A liveness probe should be used to determine whether the container itself has stopped responding.&lt;/p&gt;

&lt;p&gt;A drawback of liveness probes is that they may not actually verify the service's responsiveness. For instance, if a service maintains two web servers (one for service routes, the other for status routes such as readiness and liveness probes or metrics gathering), the service may be slow or unreachable while the liveness probe route still responds without any issues. To be effective, the liveness probe must exercise the service in a way comparable to how dependent services use it.&lt;/p&gt;

&lt;p&gt;As with the readiness probe, it's crucial to take into account dynamics that change over time. If the liveness-probe timeout is too short, a slight increase in response time, possibly brought on by a brief rise in load, could force the container to restart. The restart might put even more strain on the other pods backing the service, leading to a further cascade of liveness probe failures and worsening the service's overall availability. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fox7ypl60h7ocwj3csckv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fox7ypl60h7ocwj3csckv.png" alt=" " width="720" height="483"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;🖼️ Pic source: Wojciech Sierakowski (HMH Engineering)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;These cascading failures can be prevented by configuring liveness probe timeouts on the order of client timeouts and employing a forgiving &lt;code&gt;failureThreshold&lt;/code&gt; count.&lt;/p&gt;
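&lt;p&gt;As a sketch (the values are illustrative, not prescriptive), a more forgiving probe configuration might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;livenessProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 10
  timeoutSeconds: 5     # on the order of client timeouts
  failureThreshold: 3   # restart only after 3 consecutive failures
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;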

&lt;p&gt;Liveness probes can also suffer from container startup latency that varies over time (see the math above). Changes in resource allocation, network topology changes, or simply rising load as your service grows could all contribute to this. &lt;/p&gt;

&lt;p&gt;If a container is restarted, whether due to a Kubernetes node failure or a liveness probe failure, and the &lt;code&gt;initialDelaySeconds&lt;/code&gt; option is insufficient, the application may never start, or may start partially before being repeatedly destroyed and restarted. The &lt;code&gt;initialDelaySeconds&lt;/code&gt; option should be greater than the container's maximum initialization time. &lt;/p&gt;
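&lt;p&gt;When startup time is hard to bound, a &lt;code&gt;startupProbe&lt;/code&gt; is often a better fit than a large &lt;code&gt;initialDelaySeconds&lt;/code&gt;: liveness checks are held off until the startup probe succeeds. A sketch with illustrative values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;startupProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 10
  failureThreshold: 30   # allows up to 30 * 10 = 300 seconds for startup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;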

&lt;h2&gt;
  
  
  Some notable suggestions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Keep dependencies out of liveness probes. Liveness probes should be cheap to run and have consistent response times.&lt;/li&gt;
&lt;li&gt;Set liveness probe timeouts conservatively, so that system dynamics can change temporarily or permanently without causing an excessive number of liveness probe failures. Consider setting liveness probe timeouts to the same value as client timeouts.&lt;/li&gt;
&lt;li&gt;Set the &lt;code&gt;initialDelaySeconds&lt;/code&gt; option conservatively so that containers can be restarted reliably even when startup dynamics vary over time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The inevitable summary
&lt;/h2&gt;

&lt;p&gt;Properly integrated with readiness and startup probes, liveness probes can increase pod resilience and availability by automatically restarting a container once the failure of a particular check is discovered. Specifying the appropriate options for them requires understanding the application.&lt;/p&gt;

&lt;p&gt;The author is thankful to Guy Menachem from Komodor for the inspiration! Stable applications in the clouds to you all, folks!&lt;/p&gt;

&lt;h3&gt;
  
  
  More to read:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Traefik &lt;a href="https://doc.traefik.io/traefik/user-guides/grpc/#grpc-examples" rel="noopener noreferrer"&gt;docs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Kubernetes &lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core" rel="noopener noreferrer"&gt;API reference&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Guy's &lt;a href="https://komodor.com/blog/kubernetes-health-checks-everything-you-need-to-know/" rel="noopener noreferrer"&gt;post&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>api</category>
    </item>
    <item>
      <title>Kubernetes TLS, Demystified</title>
      <dc:creator>Roman Belshevitz</dc:creator>
      <pubDate>Tue, 11 Oct 2022 18:16:13 +0000</pubDate>
      <link>https://forem.com/otomato_io/possible-paths-2hfc</link>
      <guid>https://forem.com/otomato_io/possible-paths-2hfc</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This is the anniversary 10th article in this series.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;🛡️ It is more than obvious that a secure connection to any exposed service running in a Kubernetes cluster is important. &lt;/p&gt;

&lt;p&gt;This article assumes that you wish to set up TLS (Transport Layer Security) for your &lt;a href="https://docs.nginx.com/nginx-ingress-controller/" rel="noopener noreferrer"&gt;ingress resource&lt;/a&gt; and that you already have a functioning ingress controller established in your cluster.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Transport Layer Security (TLS) is the technology that replaced SSL. TLS is an enhanced version of SSL. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Similar to SSL, it uses encryption to safeguard the transmission of data and information. The term SSL is still commonly used in the industry, and the &lt;em&gt;two names are frequently used interchangeably&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting a certificate: what paths can be taken?
&lt;/h2&gt;

&lt;p&gt;A TLS/SSL certificate is the fundamental prerequisite for ingress TLS. These certificates are available to you in the following ways.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Path one&lt;/strong&gt;. Self-signed certificates: the TLS certificate is &lt;a href="https://www.ibm.com/docs/en/api-connect/10.0.1.x?topic=overview-generating-self-signed-certificate-using-openssl" rel="noopener noreferrer"&gt;created and signed&lt;/a&gt; by our own Certificate Authority (root CA). It is a well-known choice for &lt;em&gt;testing scenarios&lt;/em&gt;, where you control the root CA and can distribute it so that browsers will accept the certificate. &lt;/li&gt;
&lt;/ul&gt;
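&lt;p&gt;A minimal sketch of path one (the subject and validity period below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Generate a private key and a self-signed certificate valid for one year
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout server.key -out server.crt -days 365 \
    -subj "/CN=app.example.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;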

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8ol9xl8v4idbbuh8ahf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8ol9xl8v4idbbuh8ahf.png" alt=" " width="450" height="310"&gt;&lt;/a&gt;&lt;br&gt;
🖼️ &lt;em&gt;Pic source: Bizagi&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Path two&lt;/strong&gt;. Buy an SSL certificate: for production use cases, you can &lt;a href="https://www.google.com/search?q=buy+ssl+certificate" rel="noopener noreferrer"&gt;purchase&lt;/a&gt; an SSL certificate from a reputable certificate authority that operating systems and browsers trust. But bear in mind that a so-called &lt;em&gt;wildcard certificate&lt;/em&gt;, suitable for protecting all subdomains of a domain, can cost $300+/year from major commercial issuers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Path three&lt;/strong&gt;. Use a Let's Encrypt certificate: Let's Encrypt is a reputable certificate authority that issues &lt;em&gt;free&lt;/em&gt; TLS certificates. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  A few words about Let's Encrypt
&lt;/h3&gt;

&lt;p&gt;Let's Encrypt is a non-profit organization founded in 2014 by &lt;a href="https://letsencrypt.org/2022/09/12/remembering-peter-eckersley.html" rel="noopener noreferrer"&gt;enthusiasts&lt;/a&gt; in the struggle for privacy and security. &lt;/p&gt;

&lt;p&gt;The challenge–response protocol used to automate enrollment with the certificate authority is called Automated Certificate Management Environment (&lt;a href="https://letsencrypt.org/how-it-works/" rel="noopener noreferrer"&gt;ACME&lt;/a&gt;). It can query either web servers or DNS records controlled by the owner of the domain covered by the certificate to be issued.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzzu0229x0x0tye8d3fk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzzu0229x0x0tye8d3fk.png" alt=" " width="689" height="380"&gt;&lt;/a&gt;&lt;br&gt;
🖼️ &lt;em&gt;Pic source: Let's Encrypt&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you are interested in the implementation of the protocol, read what &lt;a href="https://blog.acolyer.org/2020/02/12/lets-encrypt-an-automated-certificate-authority-to-encrypt-the-entire-web/" rel="noopener noreferrer"&gt;Adrian Colyer&lt;/a&gt; from SpringSource writes about them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Each SSL certificate has an &lt;em&gt;expiration date&lt;/em&gt;, so you need to &lt;em&gt;rotate&lt;/em&gt; it before it expires. For instance, Let's Encrypt certificates have a &lt;em&gt;three-month&lt;/em&gt; lifetime (and &lt;a href="https://letsencrypt.org/2015/11/09/why-90-days.html" rel="noopener noreferrer"&gt;here&lt;/a&gt; they explain why).&lt;/p&gt;
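&lt;p&gt;To see when rotation is due, you can inspect a certificate's validity window with &lt;code&gt;openssl&lt;/code&gt; (the file name and hostname below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print the notBefore/notAfter dates of a local certificate
openssl x509 -in server.crt -noout -dates

# Or check a live endpoint
echo | openssl s_client -connect app.example.com:443 \
    -servername app.example.com 2&gt;/dev/null | openssl x509 -noout -enddate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;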

&lt;p&gt;Later in this article series, the author will dwell on the &lt;strong&gt;third path&lt;/strong&gt; in detail. Why? Because this path is interesting for its relative self-sufficiency and [relative] independence from commercial / state-owned certificate issuers. In general, the motto of this path is: "If you made something with your own hands, you know how it works, and you're better adapted to survival!"&lt;/p&gt;

&lt;p&gt;Of course, the Let's Encrypt approach does not &lt;em&gt;always&lt;/em&gt; fit every need, but for academic purposes and for startups it works.&lt;/p&gt;

&lt;p&gt;But first, let's look at the situation "in manual mode" and simply try to associate a certificate with a protected application. So, &lt;/p&gt;
&lt;h3&gt;
  
  
  Chicken or egg?
&lt;/h3&gt;

&lt;p&gt;The &lt;em&gt;ingress controller&lt;/em&gt;, not the ingress resource, is in charge of SSL. In other words, the ingress controller &lt;em&gt;accesses&lt;/em&gt; the TLS certificates you provide to the ingress resource as a Kubernetes &lt;code&gt;secret&lt;/code&gt; and incorporates them into its configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs68iypo5revwa9w665am.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs68iypo5revwa9w665am.png" alt=" " width="768" height="467"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Setup TLS/SSL certificates for ingress
&lt;/h2&gt;

&lt;p&gt;Let's examine the procedures for setting up TLS for ingress. We'll start by launching a test application on the cluster. This application will be used to test our TLS-secured ingress.&lt;/p&gt;

&lt;p&gt;Create a new namespace, &lt;code&gt;trial&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace trial
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the following as &lt;code&gt;hello-app.yaml&lt;/code&gt;. It contains the &lt;code&gt;Deployment&lt;/code&gt; and &lt;code&gt;Service&lt;/code&gt; objects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-app&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;trial&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rbalashevich/hello-app:2.0"&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-service&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;trial&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy the application with a command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f hello-app.yaml -n trial
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create a Kubernetes TLS Secret
&lt;/h3&gt;

&lt;p&gt;It is necessary to turn the SSL certificate into a Kubernetes secret. It will subsequently be referenced in the &lt;code&gt;tls&lt;/code&gt; block of the &lt;code&gt;Ingress&lt;/code&gt; resource.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;server.crt&lt;/code&gt; (server certificate, possibly including its CA chain) and &lt;code&gt;server.key&lt;/code&gt; (private key) SSL files are assumed to be available from a Certificate Authority, your company, or self-signed, as a last resort.&lt;/p&gt;

&lt;p&gt;⚠️ A private key is created by you (the certificate owner) when you request your certificate with a Certificate Signing Request (CSR). In other words, you obtain a private key when you generate a CSR. You submit the CSR to the certificate authority and keep the private key in a safe place. &lt;/p&gt;

&lt;p&gt;As for the big three public cloud providers, they have instructions for exporting certificates: &lt;a href="https://docs.aws.amazon.com/acm/latest/userguide/export-private.html" rel="noopener noreferrer"&gt;AWS CM&lt;/a&gt;, &lt;a href="https://cloud.google.com/sdk/gcloud/reference/privateca/certificates" rel="noopener noreferrer"&gt;GCP CAS&lt;/a&gt;, &lt;a href="https://learn.microsoft.com/en-us/azure/key-vault/certificates/how-to-export-certificate?tabs=azure-cli" rel="noopener noreferrer"&gt;Azure KV&lt;/a&gt;.&lt;/p&gt;


&lt;blockquote&gt;
&lt;p&gt;And yes, keep the private key (&lt;code&gt;server.key&lt;/code&gt;)!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's use the &lt;code&gt;server.crt&lt;/code&gt; and &lt;code&gt;server.key&lt;/code&gt; files to construct a Kubernetes secret of &lt;code&gt;tls&lt;/code&gt; type (SSL certificates). In the &lt;code&gt;trial&lt;/code&gt; namespace, where the &lt;code&gt;hello-app&lt;/code&gt; deployment is located, we are creating the secret.&lt;/p&gt;

&lt;p&gt;Run the &lt;code&gt;kubectl&lt;/code&gt; command listed below from the directory where your certificate files are located, or supply the &lt;em&gt;absolute path&lt;/em&gt; to the &lt;code&gt;.crt&lt;/code&gt; and &lt;code&gt;.key&lt;/code&gt; files. The name &lt;code&gt;hello-app-tls&lt;/code&gt; is arbitrary.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret tls hello-app-tls \
    --namespace trial \
    --key server.key \
    --cert server.crt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The equivalent YAML manifest, in which you must include the base64-encoded contents of the &lt;code&gt;.crt&lt;/code&gt; and &lt;code&gt;.key&lt;/code&gt; files, is provided below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-app-tls&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;trial&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/tls&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;server.crt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
       &lt;span class="s"&gt;&amp;lt;crt contents here&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;server.key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
       &lt;span class="s"&gt;&amp;lt;private key contents here&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
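&lt;p&gt;Before wiring the secret into an ingress, it's worth verifying that it exists and has the expected type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The TYPE column should read kubernetes.io/tls, with DATA equal to 2
kubectl get secret hello-app-tls --namespace trial
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;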



&lt;p&gt;A Kubernetes ingress is &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noopener noreferrer"&gt;a set of rules&lt;/a&gt; that can be configured to give services externally reachable URLs. Based on this understanding, to turn on a secure connection, we add a &lt;code&gt;tls&lt;/code&gt; block to the &lt;code&gt;Ingress&lt;/code&gt; object. So, in the &lt;code&gt;trial&lt;/code&gt; namespace, we create a sample TLS-capable ingress resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-app-ingress&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;trial&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ingressClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app.hosting.cloudprovider.com&lt;/span&gt;
    &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-app-tls&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;app.hosting.cloudprovider.com"&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/"&lt;/span&gt;
          &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-service&lt;/span&gt;
              &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;⚠️ Replace &lt;code&gt;app.hosting.cloudprovider.com&lt;/code&gt; with your actual hostname. The &lt;code&gt;host(s)&lt;/code&gt; should be the same in both the &lt;code&gt;rules&lt;/code&gt; and &lt;code&gt;tls&lt;/code&gt; blocks of the &lt;code&gt;Ingress&lt;/code&gt; manifest. In other words, they must match.&lt;/p&gt;

&lt;p&gt;If you want &lt;em&gt;strict&lt;/em&gt; SSL, you can add an annotation supported by the ingress controller you are using. For instance, with the NGINX ingress controller you can use the &lt;a href="https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md" rel="noopener noreferrer"&gt;annotation&lt;/a&gt; &lt;code&gt;nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"&lt;/code&gt; to keep traffic encrypted all the way to the application.&lt;/p&gt;
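&lt;p&gt;For illustration, such an annotation sits under &lt;code&gt;metadata&lt;/code&gt; of the &lt;code&gt;Ingress&lt;/code&gt; (a fragment, assuming the NGINX ingress controller):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;metadata:
  name: hello-app-ingress
  namespace: trial
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;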

&lt;h3&gt;
  
  
  The way to make sure
&lt;/h3&gt;

&lt;p&gt;Let's check with &lt;code&gt;curl https://app.hosting.cloudprovider.com -kv&lt;/code&gt; whether the connection to the app is secure now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=app.hosting.cloudprovider.com
*  start date: Oct 6 15:35:07 2022 GMT
*  expire date: Oct 6 15:35:07 2023 GMT
*  issuer: CN=Go Daddy Secure Certificate Authority - G2,
              OU=http://certs.godaddy.com/repository/,
              O="GoDaddy.com, Inc.",L=Scottsdale,ST=Arizona,C=US
*  SSL certificate verify ok.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔒 If the certificate is valid, the browser will not complain and there will be no frightening warnings either. Voilà, the connection to our app is secure!&lt;/p&gt;

&lt;p&gt;Okay, we've covered the situations for the first and second paths. The next step is to explore the third path, involving a Let's Encrypt certificate. &lt;/p&gt;

&lt;h2&gt;
  
  
  Estne vita vere brevis?
&lt;/h2&gt;

&lt;p&gt;Sed vita est cum dignitate vivendum. As the author noted above, the life of a Let's Encrypt certificate is &lt;a href="https://letsencrypt.org/2015/11/09/why-90-days.html" rel="noopener noreferrer"&gt;short&lt;/a&gt;; that is the price of getting it for free. Accordingly, some solution is required to automate the re-issuance of short-lived certificates, right? And such a solution exists: it is &lt;code&gt;cert-manager&lt;/code&gt;! It streamlines the process of obtaining, renewing, and using certificates by adding certificates and certificate issuers as &lt;em&gt;resource types&lt;/em&gt; in Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;It can generate certificates from a number of supported sources, including Let's Encrypt.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2w3rx680y8nvz18b9ms8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2w3rx680y8nvz18b9ms8.png" alt=" " width="747" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Furthermore, it will check that certificates are current and valid and make an attempt to &lt;strong&gt;renew&lt;/strong&gt; them for a specified period &lt;strong&gt;before&lt;/strong&gt; they &lt;strong&gt;expire&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install cert-manager on Kubernetes
&lt;/h3&gt;

&lt;p&gt;According to the official &lt;code&gt;cert-manager&lt;/code&gt; documentation, you can install it by using &lt;a href="https://cert-manager.io/docs/installation/kubectl/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt; or by the provided &lt;a href="https://cert-manager.io/docs/installation/helm/" rel="noopener noreferrer"&gt;helm&lt;/a&gt; chart.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a dedicated Kubernetes namespace for cert-manager
kubectl create namespace cert-manager

# Add official cert-manager repository to helm CLI
helm repo add jetstack https://charts.jetstack.io

# Update Helm repository cache (think of apt update)
helm repo update

# Install cert-manager on Kubernetes
## cert-manager relies on several Custom Resource Definitions (CRDs)
helm install certmgr jetstack/cert-manager \
    --set installCRDs=true \
    --version v1.9.1 \
    --namespace cert-manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;Issuer&lt;/code&gt; is responsible for issuing certificates. It is the signing authority: based on its configuration, it knows how certificate requests should be handled.  &lt;/p&gt;

&lt;p&gt;Cert-manager also creates several supporting objects of its own resource types, such as &lt;code&gt;CertificateRequest&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;A &lt;code&gt;Certificate&lt;/code&gt; resource is a human-readable representation of a certificate request. Certificate resources are linked to an &lt;code&gt;Issuer&lt;/code&gt; that is responsible for requesting and renewing the certificate. &lt;/p&gt;

&lt;p&gt;To determine &lt;em&gt;if&lt;/em&gt; a certificate &lt;em&gt;needs to be re-issued&lt;/em&gt;, &lt;code&gt;cert-manager&lt;/code&gt; looks at the &lt;code&gt;spec&lt;/code&gt; of the &lt;code&gt;Certificate&lt;/code&gt; resource and the latest &lt;code&gt;CertificateRequests&lt;/code&gt;, as well as the data in the &lt;code&gt;Secret&lt;/code&gt; containing the certificate.&lt;/p&gt;
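
&lt;p&gt;To make this concrete, here is a minimal sketch of a &lt;code&gt;Certificate&lt;/code&gt; manifest. The names (&lt;code&gt;example-com&lt;/code&gt;, &lt;code&gt;letsencrypt-staging&lt;/code&gt;) are illustrative assumptions, not taken from a real setup:&lt;/p&gt;

```yaml
# Hypothetical Certificate resource: asks cert-manager to keep a TLS
# certificate for example.com in the Secret "example-com-tls".
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com
  namespace: default
spec:
  secretName: example-com-tls   # Secret where the signed cert is stored
  dnsNames:
    - example.com
  renewBefore: 360h             # start renewal 15 days before expiry
  issuerRef:
    name: letsencrypt-staging   # the Issuer/ClusterIssuer to use
    kind: ClusterIssuer
```

&lt;p&gt;Once applied, &lt;code&gt;cert-manager&lt;/code&gt; creates the corresponding &lt;code&gt;CertificateRequest&lt;/code&gt; and stores the signed certificate in the referenced &lt;code&gt;Secret&lt;/code&gt;.&lt;/p&gt;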

&lt;h3&gt;
  
  
  Let's Encrypt: staging or production server?
&lt;/h3&gt;

&lt;p&gt;An &lt;code&gt;Issuer&lt;/code&gt; is a custom resource (CRD) which tells &lt;code&gt;cert-manager&lt;/code&gt; how to sign a &lt;code&gt;Certificate&lt;/code&gt;. Following &lt;a href="https://cert-manager.io/docs/tutorials/getting-started-with-cert-manager-on-google-kubernetes-engine-using-lets-encrypt-for-ingress-ssl/" rel="noopener noreferrer"&gt;this howto (section 7)&lt;/a&gt;, the &lt;code&gt;Issuer&lt;/code&gt; will be configured to connect to the Let's Encrypt staging server, which allows you to test everything without using up your Let's Encrypt &lt;a href="https://letsencrypt.org/docs/rate-limits/" rel="noopener noreferrer"&gt;certificate quota&lt;/a&gt; for the domain name. &lt;/p&gt;

&lt;p&gt;After debugging, you can safely issue a certificate by using LE's production server.&lt;/p&gt;
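
&lt;p&gt;As a hedged sketch, such an ACME &lt;code&gt;ClusterIssuer&lt;/code&gt; might look like this. The metadata name and email are placeholders; the two server URLs are Let's Encrypt's documented staging and production endpoints:&lt;/p&gt;

```yaml
# Hypothetical ClusterIssuer pointing at the Let's Encrypt STAGING server.
# After debugging, switch the server URL to the production endpoint:
#   staging:    https://acme-staging-v02.api.letsencrypt.org/directory
#   production: https://acme-v02.api.letsencrypt.org/directory
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com            # replace with a real contact email
    privateKeySecretRef:
      name: letsencrypt-staging-key   # Secret for the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx
```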

&lt;p&gt;For a video walkthrough of &lt;code&gt;cert-manager&lt;/code&gt; YAML syntax, the author recommends &lt;a href="https://youtu.be/7m4_kZOObzw" rel="noopener noreferrer"&gt;📽️ Anton Putra's tutorial&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Let's Encrypt's standing as a reliable certificate authority has made acquiring SSL certificates simple. Together with the &lt;code&gt;cert-manager&lt;/code&gt; tool, ops can quickly and easily ensure correct transport encryption and interoperability with already-existing parts like the NGINX Ingress. Beyond the example above, &lt;code&gt;cert-manager&lt;/code&gt; can also help with trickier situations, such as wildcard SSL certificates.&lt;/p&gt;

&lt;p&gt;If you're interested in using Let's Encrypt outside of a Kubernetes cluster, take a look at &lt;a href="https://github.com/caddyserver/caddy" rel="noopener noreferrer"&gt;Caddy&lt;/a&gt;, a 43k ⭐ open source web server, and also at Certbot, a 29k ⭐ ACME client which is open source, too.&lt;/p&gt;

&lt;p&gt;Ever tried using &lt;code&gt;wireshark&lt;/code&gt; to monitor web traffic? Follow &lt;a href="https://www.comparitech.com/net-admin/decrypt-ssl-with-wireshark/" rel="noopener noreferrer"&gt;Aaron Phillips&lt;/a&gt; from Comparitech to learn how.&lt;/p&gt;

&lt;p&gt;Safe connections to you!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>ssl</category>
      <category>api</category>
      <category>crd</category>
    </item>
    <item>
      <title>Getting Started with Buffalo</title>
      <dc:creator>Ant(on) Weiss</dc:creator>
      <pubDate>Tue, 13 Sep 2022 15:02:23 +0000</pubDate>
      <link>https://forem.com/otomato_io/getting-started-with-buffalo-1cc0</link>
      <guid>https://forem.com/otomato_io/getting-started-with-buffalo-1cc0</guid>
      <description>&lt;h2&gt;
  
  
  Intro - Rapid Software Development in the Modern World
&lt;/h2&gt;

&lt;p&gt;(Feel free to skip to the tutorial)&lt;/p&gt;

&lt;p&gt;This post is the first in a planned series about rapid software development in the modern world.&lt;/p&gt;

&lt;p&gt;Rapid developer onboarding is something we've been dealing with a lot at Otomato. But what really got us diving deeper into this was my conversation with Elad Meidar a few months back.&lt;/p&gt;

&lt;p&gt;Elad was talking to me about how non-trivial it has become to choose a stack in today's world. And even once you've chosen your tools - only the most experienced developers really know how to set up CI, testing, deployment, instrumentation, monitoring, security, etc. correctly. A lot of Ops knowledge is involved in even getting things initially running. &lt;/p&gt;

&lt;p&gt;Our current goal is to take that knowledge that we've accumulated and make it available as a service. And while we're doing that - we're exploring the tooling that's currently available.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter Buffalo
&lt;/h2&gt;

&lt;p&gt;Buffalo is a framework (or a tool) for rapid web development in Go. Cloud Native DevOps folks (and that's what we are at &lt;a href="https://otomato.io" rel="noopener noreferrer"&gt;Otomato&lt;/a&gt;) have a soft spot for Golang, and that's why I'm starting this series with Buffalo.&lt;/p&gt;

&lt;p&gt;The official Getting Started section of the Buffalo documentation is great, but as I ran through it I noticed it lacks some operational details that I'm planning to expose here. Again there's Ops knowledge lurking in the dark!&lt;/p&gt;

&lt;h2&gt;
  
  
  Installations
&lt;/h2&gt;

&lt;p&gt;Quite naturally one would need to install Go. &lt;br&gt;
On a Mac:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

brew &lt;span class="nb"&gt;install &lt;/span&gt;golang


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On Ubuntu/Debian:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;golang


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Note: on older systems (such as Ubuntu 20.04) you'll get a very old version of Go (1.13) by default when installing with &lt;code&gt;apt&lt;/code&gt;. So instead - choose the download-n-extract option &lt;a href="https://go.dev/doc/install" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For additional installation options, go &lt;a href="https://go.dev/doc/install" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Do you do frontend?
&lt;/h3&gt;

&lt;p&gt;Buffalo can generate both pure backend API services and fully-fledged webapps with frontend matter included. The frontend is in JavaScript, so if you want that you'll also need Node and either yarn or npm (the default).&lt;br&gt;
On a Mac:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

brew &lt;span class="nb"&gt;install &lt;/span&gt;nodejs


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On Ubuntu/Debian:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;nodejs npm


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Do you want containers?
&lt;/h3&gt;

&lt;p&gt;Buffalo makes quite a few educated assumptions when generating your project. One of them is that you'll want to wrap your app in a container. You can, of course, opt out - but why? So if you're going with the flow and enjoying the benefits of containerization, you probably already have Docker installed. If not, please &lt;a href="https://docs.docker.com/get-docker/" rel="noopener noreferrer"&gt;install it now&lt;/a&gt; - we'll need it further along in the tutorial.&lt;/p&gt;
&lt;h3&gt;
  
  
  Finally - bring in the Buffalo
&lt;/h3&gt;

&lt;p&gt;On a Mac:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

brew &lt;span class="nb"&gt;install &lt;/span&gt;gobuffalo/tap/buffalo


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On Linux:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

wget https://github.com/gobuffalo/cli/releases/download/v0.18.8/buffalo_0.18.8_Linux_x86_64.tar.gz
&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xvzf&lt;/span&gt; buffalo_0.18.8_Linux_x86_64.tar.gz
&lt;span class="nb"&gt;sudo mv &lt;/span&gt;buffalo /usr/local/bin/buffalo


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Create a project
&lt;/h2&gt;

&lt;p&gt;Buffalo has a project scaffolding feature that allows us to generate a new app complete with: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a local git repository&lt;/li&gt;
&lt;li&gt;a backend api&lt;/li&gt;
&lt;li&gt;a db integration&lt;/li&gt;
&lt;li&gt;a frontend&lt;/li&gt;
&lt;li&gt;a Dockerfile&lt;/li&gt;
&lt;li&gt;a CI pipeline.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's create a webapp called &lt;code&gt;testr&lt;/code&gt;. It will be used to manage test assignments for new and existing trainees. (Did I mention we do technical training at Otomato too?) &lt;/p&gt;

&lt;p&gt;The command to create a new project is &lt;code&gt;buffalo new&lt;/code&gt;.&lt;br&gt;
The default DB backend used by Buffalo is PostgreSQL.&lt;br&gt;
We will be using Github for SCM, so we'll choose Github Actions as our CI provider.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

buffalo new testr &lt;span class="nt"&gt;--ci-provider&lt;/span&gt; github


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After buffalo shows us what it's bringing in and generating (quite a bunch of stuff really) it will say:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

Initialized empty Git repository &lt;span class="k"&gt;in&lt;/span&gt; /Users/username/git/testr/.git/
DEBU[2022-08-28T23:37:54+03:00] Exec: git add &lt;span class="nb"&gt;.&lt;/span&gt;
DEBU[2022-08-28T23:37:54+03:00] Exec: git commit &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; Initial Commit
INFO[2022-08-28T23:37:54+03:00] Congratulations! Your application, testr, has been successfully generated!
INFO[2022-08-28T23:37:54+03:00] You can find your new application at: /Users/antweiss/git/testr
INFO[2022-08-28T23:37:54+03:00] Please &lt;span class="nb"&gt;read &lt;/span&gt;the README.md file &lt;span class="k"&gt;in &lt;/span&gt;your new application &lt;span class="k"&gt;for &lt;/span&gt;next steps on running your application.


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;So we'll do just what it tells us to:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cd &lt;/span&gt;testr
git add &lt;span class="nb"&gt;.&lt;/span&gt;
git commit &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Initial Commit"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Set up the DB
&lt;/h2&gt;

&lt;p&gt;Before we can actually start developing our code, we need to spin up a database. We could, of course, use a managed DB, but it would probably cost us a few bucks. So for local development it makes much more sense to run the DB in a container.&lt;/p&gt;

&lt;p&gt;Let's run PostgreSQL (the default Buffalo DB backend):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;--name&lt;/span&gt; buffalo-postgres &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgres &lt;span class="nt"&gt;-p&lt;/span&gt; 5432:5432 &lt;span class="nt"&gt;-d&lt;/span&gt; postgres


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note: we're running PostgreSQL with a very naive password here, which is fine for local development but not fit for anything production-like.&lt;br&gt;
We're also exposing it on &lt;code&gt;localhost:5432&lt;/code&gt; - which is where a Buffalo app is configured to look for it by default.&lt;/p&gt;

&lt;p&gt;These configurations are defined in a buffalo-generated file &lt;code&gt;database.yml&lt;/code&gt; which we'll use shortly.&lt;/p&gt;
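
&lt;p&gt;For reference, the generated &lt;code&gt;database.yml&lt;/code&gt; looks roughly like this - a from-memory sketch for a project named &lt;code&gt;testr&lt;/code&gt;; your generated file is the authoritative version:&lt;/p&gt;

```yaml
# Sketch of a Buffalo/pop database.yml (connection settings per environment)
development:
  dialect: postgres
  database: testr_development
  user: postgres
  password: postgres
  host: 127.0.0.1
  pool: 5

test:
  # pop's envOr helper falls back to the default when the env var is unset
  url: {{envOr "TEST_DATABASE_URL" "postgres://postgres:postgres@127.0.0.1:5432/testr_test?sslmode=disable"}}

production:
  url: {{envOr "DATABASE_URL" "postgres://name:password@127.0.0.1:5432/testr_production"}}
```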

&lt;p&gt;Running the DB container isn't enough. We also need to create a database for our app.&lt;/p&gt;

&lt;p&gt;This can be done by entering the container and running good old SQL commands. But Buffalo's creators recommend using Soda - a small and useful CLI utility that makes managing DBs easier. &lt;/p&gt;

&lt;p&gt;Install soda:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

go &lt;span class="nb"&gt;install &lt;/span&gt;github.com/gobuffalo/pop/v6/soda@latest


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And create a DB:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

soda create &lt;span class="nt"&gt;-a&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Soda creates all the databases configured in the &lt;code&gt;database.yml&lt;/code&gt; file that Buffalo has generated for us.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;: By default Buffalo uses its own ORM library called &lt;code&gt;pop&lt;/code&gt; for DB integrations. Pop provides a wrapper for &lt;code&gt;soda&lt;/code&gt; - so we can also run &lt;code&gt;soda&lt;/code&gt; commands through buffalo aliases: &lt;code&gt;buffalo pop create -a&lt;/code&gt; or &lt;code&gt;buffalo db create -a&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Buffalo delivers even more useful DB stuff with the help of &lt;code&gt;pop&lt;/code&gt; - like model generation. But we'll cover that in the follow-up post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start development
&lt;/h2&gt;

&lt;p&gt;Buffalo provides us with a &lt;code&gt;buffalo dev&lt;/code&gt; command which allows running our app with live reloading - i.e. restarting the application server each time we change the code.&lt;/p&gt;

&lt;p&gt;Let's run!&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

buffalo dev


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we can visit &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt; in the browser and see our app running!&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuweb1g55g2ok5ve4ntoj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuweb1g55g2ok5ve4ntoj.png" alt="App running"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And - we're live!&lt;/p&gt;

&lt;p&gt;The web UI we see is generated from &lt;code&gt;testr/templates/home/index.plush.html&lt;/code&gt; using the &lt;a href="https://github.com/gobuffalo/plush" rel="noopener noreferrer"&gt;plush&lt;/a&gt; templating engine. Also to be covered in a separate post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's Have Some CI
&lt;/h2&gt;

&lt;p&gt;As the final step of this walkthrough - let's push our code to Github and verify the generated CI pipeline works.&lt;/p&gt;

&lt;h3&gt;
  
  
  Generate a new Github repo
&lt;/h3&gt;

&lt;p&gt;I heartily recommend using &lt;a href="https://cli.github.com/" rel="noopener noreferrer"&gt;GitHub's &lt;code&gt;gh&lt;/code&gt; CLI tool&lt;/a&gt;.&lt;br&gt;
From the &lt;code&gt;testr&lt;/code&gt; directory run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

gh repo create &lt;span class="nt"&gt;--public&lt;/span&gt; &amp;lt;your-user-or-org&amp;gt;/buffalo-testr &lt;span class="nt"&gt;--push&lt;/span&gt;  &lt;span class="nt"&gt;--source&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will create the repo and immediately push the code to it, which in turn starts the workflow defined in &lt;code&gt;.github/workflows/test.yml&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;postgres&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres:9.6-alpine&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;testr_test&lt;/span&gt;
          &lt;span class="na"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;-&lt;/span&gt;
          &lt;span class="s"&gt;--health-cmd pg_isready&lt;/span&gt;
          &lt;span class="s"&gt;--health-interval 10s&lt;/span&gt;
          &lt;span class="s"&gt;--health-timeout 5s&lt;/span&gt;
          &lt;span class="s"&gt;--health-retries 5&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;5432:5432&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v3&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-go@v3&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;go-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;~1.18&lt;/span&gt;
          &lt;span class="na"&gt;cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;setup&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;go install github.com/gobuffalo/cli/cmd/buffalo@latest&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;TEST_DATABASE_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres://postgres:postgres@localhost:5432/testr_test?sslmode=disable&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;buffalo test&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can see that this workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;spins up a PostgreSQL service container&lt;/li&gt;
&lt;li&gt;installs buffalo&lt;/li&gt;
&lt;li&gt;runs &lt;code&gt;buffalo test&lt;/code&gt; - which in turn creates the DB in the container and runs some tests:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="o"&gt;[&lt;/span&gt;POP] 2022/09/12 15:24:04 info - dropped database testr_test
&lt;span class="o"&gt;[&lt;/span&gt;POP] 2022/09/12 15:24:05 info - created database testr_test
pg_dump: error: connection to server at &lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt;, port 5432 failed: FATAL:  database &lt;span class="s2"&gt;"testr_development"&lt;/span&gt; does not exist
&lt;span class="o"&gt;[&lt;/span&gt;POP] 2022/09/12 15:24:05 info - Migrations already up to &lt;span class="nb"&gt;date&lt;/span&gt;, nothing to apply
&lt;span class="o"&gt;[&lt;/span&gt;POP] 2022/09/12 15:24:05 info - 0.0102 seconds
&lt;span class="o"&gt;[&lt;/span&gt;POP] 2022/09/12 15:24:05 warn - Migrator: unable to dump schema: open migrations/schema.sql: no such file or directory
&lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"2022-09-12T15:24:06Z"&lt;/span&gt; &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"go test -p 1 -tags development testr/actions testr/cmd/app testr/grifts testr/locales testr/models testr/public testr/templates"&lt;/span&gt;
go: downloading github.com/gobuffalo/suite/v4 v4.0.3
go: downloading github.com/gobuffalo/httptest v1.5.1
go: downloading github.com/stretchr/testify v1.8.0
go: downloading github.com/davecgh/go-spew v1.1.1
go: downloading github.com/pmezard/go-difflib v1.0.0
ok      testr/actions   0.019s
?       testr/cmd/app   &lt;span class="o"&gt;[&lt;/span&gt;no &lt;span class="nb"&gt;test &lt;/span&gt;files]
?       testr/grifts    &lt;span class="o"&gt;[&lt;/span&gt;no &lt;span class="nb"&gt;test &lt;/span&gt;files]
?       testr/locales   &lt;span class="o"&gt;[&lt;/span&gt;no &lt;span class="nb"&gt;test &lt;/span&gt;files]
ok      testr/models    0.012s
?       testr/public    &lt;span class="o"&gt;[&lt;/span&gt;no &lt;span class="nb"&gt;test &lt;/span&gt;files]
?       testr/templates &lt;span class="o"&gt;[&lt;/span&gt;no &lt;span class="nb"&gt;test &lt;/span&gt;files]


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Voila! The workflow works. It doesn't create a Docker image for us (so no artifacts) - but it does run some basic integration testing.&lt;/p&gt;

&lt;p&gt;Testing Buffalo apps is another topic for yet another post.&lt;/p&gt;

&lt;h2&gt;
  
  
  To Sum Things Up
&lt;/h2&gt;

&lt;p&gt;Buffalo is a well thought-out rapid development framework for full-stack apps or standalone backend APIs. It packs quite a lot to get us started, but it still leaves the developer in the playground - without any clear guidelines on where and how to deploy their code to production.&lt;/p&gt;

&lt;p&gt;And what are you using for rapid bootstrapping of new services? &lt;br&gt;
What other rapid development frameworks would you like us to cover? &lt;/p&gt;

&lt;p&gt;Let us know in comments - our research is only starting!&lt;/p&gt;

</description>
      <category>rapiddevelopment</category>
      <category>go</category>
      <category>howto</category>
      <category>fullstack</category>
    </item>
    <item>
      <title>Ops Deliver Confidence</title>
      <dc:creator>Ant(on) Weiss</dc:creator>
      <pubDate>Mon, 12 Sep 2022 08:37:40 +0000</pubDate>
      <link>https://forem.com/otomato_io/ops-deliver-confidence-4p5e</link>
      <guid>https://forem.com/otomato_io/ops-deliver-confidence-4p5e</guid>
      <description>&lt;p&gt;What is confidence? It's a belief that whatever happens, whatever comes our way - we'll be able to handle it. Either by ourselves or with someone's help.&lt;/p&gt;

&lt;p&gt;The world is far from predictable. We never really know what will happen the next moment. Believing otherwise would mean lying to ourselves. Because we don't have control of the outside world. The only thing a person can control is their body and mind. And even that - only to some extent.&lt;/p&gt;

&lt;p&gt;The best service providers always sell confidence. Confidence that the room will be clean, confidence that the luggage will get picked up, confidence that the mail will be delivered.&lt;/p&gt;

&lt;p&gt;It's the same with Ops service:&lt;/p&gt;

&lt;p&gt;We cannot really prevent incidents from happening. Rarely can we promise 5 nines of uptime (it can be done but it's awfully expensive). What we can offer our customers is confidence. &lt;/p&gt;

&lt;p&gt;Confidence that whatever incident occurs - it will be taken care of. Confidence that if we can automate a process - we'll put it on our backlog and prioritize it correctly.&lt;br&gt;
That whenever we discover a system's new failure mode - we'll create an alert and a remediation for it.&lt;/p&gt;

&lt;p&gt;And we need to always remember to convey that confidence. In the way we document our processes. In how we publish our post-mortem analyses. In how we treat customers' tickets and requests. In the way we communicate our plans and yes - even our doubts.&lt;/p&gt;

&lt;p&gt;Because when you convey confidence - you share it. You give it to your customer - so they can check one more worry off their list. And like us all - they have a lo-o-o-ng list.&lt;/p&gt;

&lt;p&gt;That's how we strive to do business at &lt;a href="https://otomato.io"&gt;Otomato&lt;/a&gt; - with maximum transparency, integrity and confidence. So our customers can rest assured their software will get delivered and keep running even when nobody's watching. And if anything goes down - we'll bring it back up.&lt;/p&gt;

&lt;p&gt;We deliver confidence.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>training</category>
      <category>service</category>
    </item>
    <item>
      <title>Admission Controllers in Action: Datree's Approach</title>
      <dc:creator>Roman Belshevitz</dc:creator>
      <pubDate>Sun, 11 Sep 2022 09:19:45 +0000</pubDate>
      <link>https://forem.com/otomato_io/admission-controllers-in-action-datrees-approach-143d</link>
      <guid>https://forem.com/otomato_io/admission-controllers-in-action-datrees-approach-143d</guid>
      <description>&lt;p&gt;In the &lt;a href="https://dev.to/otomato_io/responsible-approach-to-communicating-with-the-api-server-admission-controllers-3b49"&gt;eighth part&lt;/a&gt;, the author talked about admission controllers. In this, the ninth, we will see how ACs can be used for practical purposes.&lt;/p&gt;

&lt;p&gt;At the same time, this can be considered the second part of the &lt;a href="https://dev.to/otomato_io/datree-a-tool-which-really-shifts-your-cluster-security-even-more-left-1g20"&gt;review&lt;/a&gt;, so both parts will be marked with the appropriate &lt;code&gt;#datree&lt;/code&gt; tag.&lt;/p&gt;

&lt;h2&gt;
  
  
  The originality of Datree's approach
&lt;/h2&gt;

&lt;p&gt;In brief, &lt;a href="https://github.com/datreeio/admission-webhook-datree" rel="noopener noreferrer"&gt;Datree's integration&lt;/a&gt; enables you to check your resources against the defined policy &lt;strong&gt;a moment before&lt;/strong&gt; you put them into a cluster... by leveraging an admission webhook! 😎 &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/datreeio/admission-webhook-datree/blob/main/kube/validating-webhook-configuration.yaml" rel="noopener noreferrer"&gt;The webhook&lt;/a&gt; implemented with &lt;code&gt;ValidatingWebhookConfiguration&lt;/code&gt; will detect &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#webhook-configuration" rel="noopener noreferrer"&gt;operations&lt;/a&gt; such as &lt;code&gt;CREATE&lt;/code&gt;, &lt;code&gt;UPDATE&lt;/code&gt; or &lt;code&gt;DELETE&lt;/code&gt;, and it will start a policy check against the configs related to each operation. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvvkt02z3978vwpma40z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvvkt02z3978vwpma40z.png" alt=" " width="427" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If any configuration errors are discovered, the webhook will refuse the action and show a thorough output with guidance on how to fix each error.&lt;/p&gt;

&lt;p&gt;Once the webhook is installed, every cluster operation it is tied to will trigger a Datree policy check. If there are no configuration errors, the resource will get a green light🚦 to be applied or updated. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🔬 Datree functions well in a full-scale cluster and also in a &lt;code&gt;k3s/k3d&lt;/code&gt;-based one! It makes debugging convenient even for local development. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Let's go step by step
&lt;/h2&gt;

&lt;p&gt;Following the Software-as-a-Service paradigm, Datree gives its users access to the misconfigurations database and to a personal workspace on its website, where all the checks initiated by the user are aggregated. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqeq74rpsh6peeouf91yo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqeq74rpsh6peeouf91yo.png" alt=" " width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This cheeky astronaut design will brighten up your day.&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 Step 1. Access token
&lt;/h3&gt;

&lt;p&gt;They [want to] know everything about you! Well, relax, it is a joke. Sign up or &lt;a href="https://app.datree.io/login" rel="noopener noreferrer"&gt;log in&lt;/a&gt;, then grab your token to access &lt;code&gt;datree&lt;/code&gt; programmatically. API access tokens are widespread in 2022, aren't they? 🔏&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgx48pcy2urgxpngqbhzb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgx48pcy2urgxpngqbhzb.png" alt=" " width="711" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💰 The so-called Free Plan provides only one token, which is enough for evaluation purposes (up to 4 Kubernetes nodes are supported; service access logs are stored for two weeks).&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 Step 2. Set up your CLI environment
&lt;/h3&gt;

&lt;p&gt;The following binaries must be installed on the machine: &lt;code&gt;kubectl&lt;/code&gt;, &lt;code&gt;openssl&lt;/code&gt; (required for creating a certificate authority, CA) and &lt;code&gt;curl&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Assuming everything is in place, let's run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ DATREE_TOKEN=[your-token] bash &amp;lt;(curl https://get.datree.io/admission-webhook)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is what you should see and what happens in your cluster (yes, the webhook's API traffic is additionally TLS-encrypted):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;🔑 Generating TLS keys...
Generating a RSA private key
Signature ok
subject=CN = webhook-server.datree.svc
Getting CA Private Key
/home/roman
🔗 Creating webhook secret tls...
secret "webhook-server-tls" deleted
secret/webhook-server-tls created
🔗 Creating core resources...
serviceaccount/webhook-server-datree created
clusterrolebinding.rbac.authorization.k8s.io/rolebinding:webhook-server-datree created
clusterrole.rbac.authorization.k8s.io/webhook-server-datree created
deployment.apps/webhook-server configured
service/webhook-server created
deployment "webhook-server" successfully rolled out
🔗 Creating validation webhook resource...
validatingwebhookconfiguration.admissionregistration.k8s.io/webhook-datree configured
🎉 DONE! The webhook server is now deployed and configured
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  🎯 Step 3. Protect your access token
&lt;/h3&gt;

&lt;p&gt;Because your token is private and you don't want to store it in your repository, we advise setting or rotating it with a separate &lt;code&gt;kubectl patch&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl patch deployment webhook-server -n datree -p '
spec:
  template:
    spec:
      containers:
        - name: server
          env:
            - name: DATREE_TOKEN
              value: "&amp;lt;your-token&amp;gt;"'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  🎯 Step 4. Deploy something: magic will work for you
&lt;/h3&gt;

&lt;p&gt;The author does not want to reinvent the wheel, so here is the classic nginx deployment manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-deployment&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.14.2&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's look at the result of the &lt;code&gt;kubectl apply -f nginx-deployment.yaml&lt;/code&gt; routine (the deployment has been &lt;strong&gt;denied&lt;/strong&gt; by the admission controller):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f nginx-deployment.yaml
Error from server: error when creating "nginx-deployment.yaml": admission webhook "webhook-server.datree.svc" denied the request: 
webhook-nginx-deployment-Deployment.tmp.yaml

[V] YAML validation
[V] Kubernetes schema validation

[X] Policy check

❌  Ensure each container has a configured CPU limit  [1 occurrence]
    - metadata.name: nginx-deployment (kind: Deployment)
💡  Missing property object `limits.cpu` - value should be within the accepted boundaries recommended by the organization

❌  Ensure each container has a configured CPU request  [1 occurrence]
    - metadata.name: nginx-deployment (kind: Deployment)
💡  Missing property object `requests.cpu` - value should be within the accepted boundaries recommended by the organization

❌  Ensure each container has a configured liveness probe  [1 occurrence]
    - metadata.name: nginx-deployment (kind: Deployment)
💡  Missing property object `livenessProbe` - add a properly configured livenessProbe to catch possible deadlocks

❌  Ensure each container has a configured memory limit  [1 occurrence]
    - metadata.name: nginx-deployment (kind: Deployment)
💡  Missing property object `limits.memory` - value should be within the accepted boundaries recommended by the organization

❌  Ensure each container has a configured memory request  [1 occurrence]
    - metadata.name: nginx-deployment (kind: Deployment)
💡  Missing property object `requests.memory` - value should be within the accepted boundaries recommended by the organization

❌  Ensure each container has a configured readiness probe  [1 occurrence]
    - metadata.name: nginx-deployment (kind: Deployment)
💡  Missing property object `readinessProbe` - add a properly configured readinessProbe to notify kubelet your Pods are ready for traffic

❌  Prevent workload from using the default namespace  [1 occurrence]
    - metadata.name: nginx-deployment (kind: Deployment)
💡  Incorrect value for key `namespace` - use an explicit namespace instead of the default one (`default`)


(Summary)

- Passing YAML validation: 1/1

- Passing Kubernetes (v1.21.5) schema validation: 1/1

- Passing policy check: 0/1

+-----------------------------------+-----------------------+
| Enabled rules in policy "Default" | 21                    |
| Configs tested against policy     | 1                     |
| Total rules evaluated             | 21                    |
| Total rules skipped               | 0                     |
| Total rules failed                | 7                     |
| Total rules passed                | 14                    |
| See all rules in policy           | https://app.datree.io |
+-----------------------------------+-----------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open the link and you'll be redirected to your personal workspace. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w70bgsoyktv1kyl8kxc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w70bgsoyktv1kyl8kxc.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 Step 5. Is such rigor necessary?
&lt;/h3&gt;

&lt;p&gt;You can audit &lt;a href="https://hub.datree.io/setup/centralized-policy" rel="noopener noreferrer"&gt;reactive policies&lt;/a&gt; and review the invocation history. If the checks are too strict, disable some of the policies. &lt;/p&gt;

&lt;p&gt;For example, not every deployment really needs pre-configured container readiness probes [or CPU &amp;amp; memory limits].  &lt;/p&gt;

&lt;p&gt;Well, &lt;strong&gt;another tryout&lt;/strong&gt; will be done with an edited YAML (the &lt;a href="https://k8syaml.com/" rel="noopener noreferrer"&gt;Octopus&lt;/a&gt; builder may be your fellow here):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-deployment&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RollingUpdate&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.14.2&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
          &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100Mi&lt;/span&gt;
              &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100m&lt;/span&gt;
            &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;200Mi&lt;/span&gt;
              &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;1'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we really do want to use the &lt;code&gt;default&lt;/code&gt; namespace and have no fears about it, let's disable the &lt;code&gt;Prevent workload from using the default namespace&lt;/code&gt; policy in the Datree web UI. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzygl4h25ucxjlxnp20je.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzygl4h25ucxjlxnp20je.png" alt=" " width="685" height="42"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We may also want to free ourselves from the &lt;code&gt;Ensure each container has a configured readiness probe&lt;/code&gt; and &lt;code&gt;Ensure each container has a configured liveness probe&lt;/code&gt; policies.&lt;/p&gt;

&lt;p&gt;Et voilà! 🎭&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f nginx-deployment-advanced.yaml
deployment.apps/nginx-deployment created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now all checks pass successfully: the admission controller has given us a green light 🚦 and allowed the deployment!&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 Step 6. Hey, do not climb where it is not necessary
&lt;/h3&gt;

&lt;p&gt;If you want &lt;code&gt;datree&lt;/code&gt; to disregard a namespace, add the label &lt;code&gt;admission.datree/validate=skip&lt;/code&gt; to that namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl label namespaces default "admission.datree/validate=skip"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to wipe traces
&lt;/h2&gt;

&lt;p&gt;To delete the label and resume running the &lt;code&gt;datree&lt;/code&gt; webhook on the namespace again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl label namespaces default "admission.datree/validate-"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Uninstall the webhook
&lt;/h2&gt;

&lt;p&gt;Copy the following command and run it in your terminal to remove the webhook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ bash &amp;lt;(curl https://get.datree.io/admission-webhook-uninstall)
validatingwebhookconfiguration.admissionregistration.k8s.io "webhook-datree" deleted
service "webhook-server" deleted
deployment.apps "webhook-server" deleted
secret "webhook-server-tls" deleted
clusterrolebinding.rbac.authorization.k8s.io "rolebinding:webhook-server-datree" deleted
serviceaccount "webhook-server-datree" deleted
clusterrole.rbac.authorization.k8s.io "webhook-server-datree" deleted
namespace/kube-system unlabeled
namespace "datree" deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summing up what was said
&lt;/h2&gt;

&lt;p&gt;As you can see, the possibilities of the Kubernetes API are quite broad. The author hopes he has not only given an overview of a useful solution, but also explained the theory behind how it works.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>security</category>
      <category>datree</category>
    </item>
    <item>
      <title>Responsible Approach to Communicating With the API Server: Admission Controllers</title>
      <dc:creator>Roman Belshevitz</dc:creator>
      <pubDate>Fri, 02 Sep 2022 12:25:12 +0000</pubDate>
      <link>https://forem.com/otomato_io/responsible-approach-to-communicating-with-the-api-server-admission-controllers-3b49</link>
      <guid>https://forem.com/otomato_io/responsible-approach-to-communicating-with-the-api-server-admission-controllers-3b49</guid>
      <description>&lt;h2&gt;
  
  
  A bit of theory
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;RBAC&lt;/em&gt; and &lt;em&gt;Network policies&lt;/em&gt; are two fundamental security elements of Kubernetes that you probably already know about if you work with it. These mechanisms are helpful for enforcing fundamental guidelines regarding what operations different users or services within your cluster are permitted to carry out.&lt;/p&gt;

&lt;p&gt;However, there are situations when you require &lt;em&gt;more policy features&lt;/em&gt; or granularity than RBAC or network policies can provide. Alternatively, you might want to &lt;em&gt;run additional checks&lt;/em&gt; to verify a resource &lt;em&gt;before&lt;/em&gt; allowing it to join your cluster.&lt;/p&gt;

&lt;p&gt;Admission Controllers (ACs) allow you to add &lt;em&gt;additional options&lt;/em&gt; to the work of Kubernetes to change or &lt;em&gt;validate objects&lt;/em&gt; when making requests to the Kubernetes API. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkswoeuleimjfswqokxy2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkswoeuleimjfswqokxy2.png" alt=" " width="800" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🖼️ Pic source: Giant Swarm&lt;/p&gt;

&lt;p&gt;The image shows the various parts that make up the API server. A request initiates communication between the API and the admission controllers. After the request has been authenticated, the authorization module determines whether the issuer is permitted to carry out the operation. The admission magic kicks in once the request has been duly authorized.&lt;/p&gt;

&lt;p&gt;If the controller rejects the request, then the entire request to the API server is rejected and an error is returned to the end user.&lt;/p&gt;

&lt;p&gt;To activate the controllers discussed, you must specify their names as a list when creating or updating a cluster. After that, &lt;code&gt;kube-apiserver&lt;/code&gt; will be started or restarted with the &lt;code&gt;--enable-admission-plugins&lt;/code&gt; option and the chosen set of admission controllers.&lt;/p&gt;

&lt;p&gt;Passing a controller that is not available for the current version of Kubernetes will return an appropriate error.&lt;/p&gt;

&lt;h2&gt;
  
  
  What exactly can ACs be?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🔬 In a scope of implementation
&lt;/h3&gt;

&lt;p&gt;Admission controllers that are built into and shipped with Kubernetes itself are known as &lt;strong&gt;static&lt;/strong&gt; admission controllers. Not every one of them is turned on by default. Cloud providers also enable some of them, or restrict some of them for their own usage. If you are the owner of your Kubernetes deployment, you can enable and utilize them yourself. Some examples:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;LimitRanger&lt;/code&gt; &lt;em&gt;makes sure that any of the restrictions&lt;/em&gt; listed in the &lt;code&gt;LimitRange&lt;/code&gt; object in a namespace &lt;em&gt;are not broken&lt;/em&gt; by incoming requests. Use this admission controller to impose those restrictions if you are utilizing &lt;code&gt;LimitRange&lt;/code&gt; objects in your Kubernetes setup. Applying default resource requests to pods without any specifications is also possible with this AC.&lt;/p&gt;
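&lt;p&gt;A sketch of such a &lt;code&gt;LimitRange&lt;/code&gt; object that &lt;code&gt;LimitRanger&lt;/code&gt; would enforce (the name, namespace and values below are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits    # illustrative name
  namespace: team-a        # illustrative namespace
spec:
  limits:
    - type: Container
      default:             # limits applied when a container omits them
        cpu: 500m
        memory: 256Mi
      defaultRequest:      # requests applied when a container omits them
        cpu: 100m
        memory: 128Mi
      max:                 # containers asking for more are rejected
        cpu: "1"
        memory: 512Mi
```

With this in place, a pod created in the namespace without resource settings gets the defaults injected, and one requesting more than &lt;code&gt;max&lt;/code&gt; is denied by the admission controller.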

&lt;p&gt;&lt;code&gt;AlwaysPullImages&lt;/code&gt; &lt;em&gt;changes the image pull policy&lt;/em&gt; for every new Pod. This is useful, for example, in multi-tenant clusters to ensure that only those with the credentials to fetch private images can access them. Without this admission controller, after an image has been pulled to a node, any pod from any user can use it just by knowing the image's name without any authorization checks. This feature must be enabled in the cluster.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;NamespaceLifecycle&lt;/code&gt; &lt;em&gt;enforces that a namespace that is undergoing termination cannot have new objects&lt;/em&gt; created in it, and ensures that requests in a non-existent namespace are rejected.&lt;/p&gt;

&lt;p&gt;And there are &lt;strong&gt;dynamic&lt;/strong&gt; ones. See details below.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔬 In a scope of request processing
&lt;/h3&gt;

&lt;p&gt;There are &lt;em&gt;two types&lt;/em&gt; of dynamic admission controllers in Kubernetes, and they work slightly differently. In short, one just &lt;em&gt;validates&lt;/em&gt; requests, while the other &lt;em&gt;modifies&lt;/em&gt; them if they aren't up to spec.&lt;/p&gt;

&lt;p&gt;⚙️ &lt;strong&gt;The first type&lt;/strong&gt; is the &lt;strong&gt;validating&lt;/strong&gt; admission controller &lt;code&gt;ValidatingAdmissionWebhook&lt;/code&gt;, which proxies the requests to the subscribed webhooks. The Kubernetes API registers the webhooks based on the resource type and the request method. Every webhook runs some logic to validate the incoming resource, and it replies with a verdict to the API. &lt;/p&gt;

&lt;p&gt;In case the validation webhook rejects the request, the Kubernetes API returns a failed HTTP response to the user. Otherwise, it continues with the next admission.&lt;/p&gt;

&lt;p&gt;⚙️ &lt;strong&gt;The second type&lt;/strong&gt; is a &lt;strong&gt;mutating&lt;/strong&gt; admission controller &lt;code&gt;MutatingAdmissionWebhook&lt;/code&gt;, which alters the resource that the user has submitted so that default values can be set, or the schema can be verified. The API can have mutation webhooks attached by cluster administrators so that they can execute them similarly to validation. &lt;/p&gt;
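&lt;p&gt;For a feel of the wire format, here is a minimal sketch of the reply such a mutating webhook sends back to the API server (field values are placeholders: &lt;code&gt;uid&lt;/code&gt; must echo the one from the incoming AdmissionReview request, and &lt;code&gt;patch&lt;/code&gt; must carry a base64-encoded JSON Patch array):&lt;/p&gt;

```json
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "REQUEST_UID_GOES_HERE",
    "allowed": true,
    "patchType": "JSONPatch",
    "patch": "BASE64_ENCODED_JSON_PATCH_ARRAY"
  }
}
```

A validating webhook's reply looks the same, just without the &lt;code&gt;patchType&lt;/code&gt; and &lt;code&gt;patch&lt;/code&gt; fields; to reject a request it sets &lt;code&gt;allowed&lt;/code&gt; to &lt;code&gt;false&lt;/code&gt; and may add a &lt;code&gt;status.message&lt;/code&gt; explaining why.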

&lt;h2&gt;
  
  
  Hooks! Hooks are everywhere!
&lt;/h2&gt;

&lt;p&gt;Any resource type, including those that are pre-built like pods, jobs, or services, may be the primary resource type for a controller. The issue is that most built-in resources, if not all of them, already come with associated built-in controllers. In order to prevent having &lt;em&gt;many&lt;/em&gt; controllers update the status of a shared object, &lt;em&gt;custom controllers&lt;/em&gt; are frequently built for special resources. &lt;/p&gt;

&lt;p&gt;If resources are merely Kubernetes API endpoints, writing a controller for a resource is just a fancy &lt;strong&gt;way to bind a request handler to an API endpoint&lt;/strong&gt;! &lt;/p&gt;

&lt;p&gt;Conditional resource modification can be implemented using a so-called webhook, which is essentially an &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#request" rel="noopener noreferrer"&gt;API endpoint&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;It is possible &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/" rel="noopener noreferrer"&gt;to configure dynamically&lt;/a&gt; what resources are subject to what admission webhooks via &lt;code&gt;ValidatingWebhookConfiguration&lt;/code&gt; or &lt;code&gt;MutatingWebhookConfiguration&lt;/code&gt; kinds. &lt;/p&gt;

&lt;p&gt;Both are available in &lt;code&gt;admissionregistration.k8s.io/v1&lt;/code&gt; API version.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl api-versions | grep admiss
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
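&lt;p&gt;A minimal sketch of a &lt;code&gt;ValidatingWebhookConfiguration&lt;/code&gt; (the names, namespace and path are illustrative; &lt;code&gt;caBundle&lt;/code&gt; must hold the base64-encoded CA certificate that signed the webhook server's TLS certificate):&lt;/p&gt;

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: demo-validator              # illustrative name
webhooks:
  - name: demo.example.com          # must be a domain-style name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail             # reject requests if the webhook is unreachable
    clientConfig:
      service:                      # in-cluster service serving the webhook
        name: webhook-server
        namespace: demo
        path: /validate
      caBundle: BASE64_ENCODED_CA_CERT
    rules:                          # which requests get proxied to the webhook
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
```

A &lt;code&gt;MutatingWebhookConfiguration&lt;/code&gt; has the same shape; only the kind differs.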



&lt;h2&gt;
  
  
  How would I activate admission controllers?
&lt;/h2&gt;

&lt;p&gt;The Kubernetes API server flag &lt;code&gt;--enable-admission-plugins&lt;/code&gt; accepts a comma-delimited list of admission control plugins to invoke before cluster objects are changed. For instance, the following command line activates the &lt;code&gt;LimitRanger&lt;/code&gt; and &lt;code&gt;NamespaceLifecycle&lt;/code&gt; admission control plugins:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;⚠️ Note: You may need to apply the parameters in different ways depending on how your Kubernetes cluster is installed and how the API server is launched. For instance, if Kubernetes is deployed using self-hosted Kubernetes, you may need to alter the manifest file for the API server &lt;a href="https://digitalis.io/blog/kubernetes/k3s-lightweight-kubernetes-made-ready-for-production-part-2/" rel="noopener noreferrer"&gt;and/or the &lt;code&gt;systemd&lt;/code&gt; Unit file&lt;/a&gt; if the API server is installed as a &lt;code&gt;systemd&lt;/code&gt; service.&lt;/p&gt;
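&lt;p&gt;For instance, on a kubeadm-provisioned control plane the flag lives in the API server's static pod manifest (the path below is the typical kubeadm default; adjust it for your setup):&lt;/p&gt;

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (typical kubeadm location)
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --enable-admission-plugins=NamespaceLifecycle,LimitRanger
        # keep the other existing flags unchanged; the kubelet restarts
        # the static pod automatically after the manifest changes
```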

&lt;p&gt;⚠️ Note: the &lt;code&gt;admissionregistration.k8s.io/v1beta1&lt;/code&gt; API version is deprecated and was removed in Kubernetes 1.22. &lt;/p&gt;

&lt;h2&gt;
  
  
  Public cloud providers' implementation
&lt;/h2&gt;

&lt;p&gt;In this case, everything is already set up for you. &lt;/p&gt;

&lt;p&gt;To learn more about using dynamic admission controllers with Amazon EKS, see the &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html" rel="noopener noreferrer"&gt;Amazon EKS documentation&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/architecture/operator-guides/aks/aks-triage-controllers" rel="noopener noreferrer"&gt;Azure AKS Policy&lt;/a&gt;, Microsoft's implementation of OPA Gatekeeper, is another interesting thing. Involving AC webhooks,  if there are problems in the admission control pipeline, it can block numerous requests to the API server.&lt;/p&gt;

&lt;p&gt;The VMware Tanzu team followed a similar path in their &lt;a href="https://tanzu.vmware.com/developer/guides/platform-security-admission-control/" rel="noopener noreferrer"&gt;Tanzu Kubernetes Grid (TKG)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Of course, OPA Gatekeeper itself is a separate and extensive topic, so more on that another time.&lt;/p&gt;

&lt;p&gt;In the ninth article of the series, the author will talk about how smart people were able to translate the theory described above into a useful solution.&lt;/p&gt;

&lt;p&gt;Be careful and stay tuned!&lt;/p&gt;

&lt;p&gt;Many thanks to Leonid Sandler, Douglas Makey Mendez Molero &lt;a class="mentioned-user" href="https://dev.to/douglasmakey"&gt;@douglasmakey&lt;/a&gt;, Luca Di Maio 🐦 @LucaDiMaio11, Kristijan Mitevski and W.T. Chang!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>security</category>
      <category>api</category>
    </item>
    <item>
      <title>Virtual Kubernetes Clusters: What Are They Needed For?</title>
      <dc:creator>Roman Belshevitz</dc:creator>
      <pubDate>Mon, 29 Aug 2022 18:58:50 +0000</pubDate>
      <link>https://forem.com/otomato_io/virtual-kubernetes-clusters-what-are-they-needed-for-4fdd</link>
      <guid>https://forem.com/otomato_io/virtual-kubernetes-clusters-what-are-they-needed-for-4fdd</guid>
      <description>&lt;h2&gt;
  
  
  Developer Wishlist Never Ends
&lt;/h2&gt;

&lt;p&gt;Imagine you can&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;have &lt;strong&gt;many&lt;/strong&gt; virtual clusters &lt;strong&gt;within a single&lt;/strong&gt; cluster, and &lt;/li&gt;
&lt;li&gt;they are &lt;strong&gt;much cheaper&lt;/strong&gt; than the traditional Kubernetes clusters, and &lt;/li&gt;
&lt;li&gt;they require &lt;strong&gt;lower&lt;/strong&gt; management and maintenance &lt;strong&gt;efforts&lt;/strong&gt;. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sounds intriguing, eh? This makes v/clusters ideal for running experiments, continuous integration, and setting up &lt;strong&gt;sandbox&lt;/strong&gt; 🧪 environments.&lt;/p&gt;

&lt;p&gt;So, Loft Labs created such a solution, written natively in Golang, and released it as an ~2k⭐ &lt;a href="https://github.com/loft-sh/vcluster" rel="noopener noreferrer"&gt;open source&lt;/a&gt; project.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's under the hood?
&lt;/h2&gt;

&lt;p&gt;Virtual clusters are fully functional Kubernetes clusters that run on top of other Kubernetes clusters. Rather than being completely separate "real" clusters, virtual clusters utilize the worker nodes and networking of the host cluster. They schedule all workloads into a single namespace of the host cluster and have their own control plane. Much like virtual machines, virtual clusters divide a single physical cluster into several distinct ones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1k34razhtr4v2t1dp3n8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1k34razhtr4v2t1dp3n8.png" alt=" " width="800" height="296"&gt;&lt;/a&gt;&lt;br&gt;
🖼️ Right click, don't even think too long.&lt;/p&gt;

&lt;p&gt;Only the essential Kubernetes components (the API server, controller manager, storage backend such as etcd, sqlite or mysql, and, optionally, a scheduler) make up the virtual cluster itself. In order to minimize virtual cluster overhead, &lt;code&gt;vcluster&lt;/code&gt; builds by default on &lt;code&gt;k3s&lt;/code&gt;, a fully functional, certified, lightweight Kubernetes distribution that compiles the Kubernetes components into a single binary and disables by default all unnecessary Kubernetes features, such as the pod scheduler or specific controllers.&lt;/p&gt;

&lt;p&gt;Other Kubernetes distributions, &lt;a href="https://www.vcluster.com/docs/operator/other-distributions" rel="noopener noreferrer"&gt;such as k0s and vanilla k8s&lt;/a&gt;, are supported in addition to k3s. Besides the control plane, the virtual cluster also includes a Kubernetes hypervisor that simulates networking and worker nodes. Between the virtual and host clusters, this component syncs a few key resources that are crucial for cluster functionality:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pods&lt;/strong&gt;: All pods started in the virtual cluster are rewritten and then launched in the virtual cluster's namespace in the host cluster. Environment variables, DNS, service account tokens, and other configurations are updated to point to the virtual cluster rather than the host cluster. From inside the pod, it appears that the pod was started in the virtual cluster rather than the host cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Services&lt;/strong&gt;: All services and endpoints are rewritten and created in the virtual cluster's namespace in the host cluster. The service cluster IPs are shared by the host and virtual clusters, which means there is no performance penalty when a service in the host cluster is accessed from within the virtual cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PersistentVolumeClaims&lt;/strong&gt;: In the event that persistent volume claims are generated in the virtual cluster, they will be modified and generated in the host cluster's namespace. The relevant persistent volume data will be synchronized back to the virtual cluster if they are bound in the host cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ConfigMaps &amp;amp; Secrets&lt;/strong&gt;: Only ConfigMaps and secrets mounted to pods within the virtual cluster will be synced to the host cluster; all other ConfigMaps and secrets will only be retained within the virtual cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Other Resources&lt;/strong&gt;: Deployments, StatefulSets, CRDs, service accounts, etc. do not sync with the host cluster; instead, they only reside in the virtual cluster.&lt;/li&gt;
&lt;/ul&gt;
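&lt;p&gt;To make the pod syncing above concrete, here is a minimal sketch of how a virtual pod's identity could be folded into a single flat, host-side pod name. The &lt;code&gt;-x-&lt;/code&gt; separator and the exact pattern are assumptions for illustration and may differ between &lt;code&gt;vcluster&lt;/code&gt; versions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Pod "web-0" in virtual namespace "demo-nginx" of vcluster "my-vcluster"
pod="web-0"
vns="demo-nginx"
vc="my-vcluster"

# The syncer creates it in the vcluster's single host namespace,
# encoding the virtual namespace and cluster name into the pod name:
host_pod="${pod}-x-${vns}-x-${vc}"
echo "$host_pod"   # web-0-x-demo-nginx-x-my-vcluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Whatever the exact scheme, the point is that many virtual namespaces collapse into one host namespace without name collisions.&lt;/p&gt;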
&lt;h2&gt;
  
  
  Who lost the magic mirror?
&lt;/h2&gt;

&lt;p&gt;By default, &lt;code&gt;vcluster&lt;/code&gt; creates a &lt;em&gt;fake&lt;/em&gt; node for each &lt;code&gt;spec.nodeName&lt;/code&gt; value it encounters inside the virtual cluster. These &lt;em&gt;fake&lt;/em&gt; nodes exist because &lt;code&gt;vcluster&lt;/code&gt; &lt;em&gt;does not&lt;/em&gt; by default &lt;em&gt;have RBAC permissions&lt;/em&gt; to access the real nodes in the host cluster: doing so would require a cluster role and cluster role binding. Additionally, each node gets a fake &lt;code&gt;kubelet&lt;/code&gt; endpoint that will either forward requests to the real node &lt;em&gt;or rewrite them&lt;/em&gt; to keep virtual cluster names intact.&lt;/p&gt;

&lt;p&gt;Vcluster supports multiple modes to customize node syncing behavior. For a detailed list of the resources that can be synced, &lt;a href="https://www.vcluster.com/docs/architecture/synced-resources" rel="noopener noreferrer"&gt;see the details&lt;/a&gt; in the docs.&lt;/p&gt;

&lt;p&gt;In addition to synchronizing virtual and host cluster resources, the hypervisor also proxies certain Kubernetes API calls, such as pod port forwarding and container command execution, to the host cluster. It essentially acts as the virtual cluster's reverse proxy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiay4g5cnqmdlgak8mxih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiay4g5cnqmdlgak8mxih.png" alt=" " width="766" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To ensure proper network operation for the virtual cluster, resources like &lt;code&gt;Service&lt;/code&gt; and &lt;code&gt;Ingress&lt;/code&gt; are synced by default &lt;em&gt;from&lt;/em&gt; the virtual cluster [down] &lt;em&gt;to&lt;/em&gt; the host cluster.&lt;/p&gt;
&lt;h2&gt;
  
  
  There are never too many levels of abstraction
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4inkwzd1ss3rnja32wie.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4inkwzd1ss3rnja32wie.png" alt=" " width="711" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Certain resources (such as CRDs or RBAC policies) reside &lt;em&gt;cluster-wide&lt;/em&gt;, and you can’t isolate them using &lt;em&gt;namespaces&lt;/em&gt;. For instance, it is not possible to install multiple versions of an operator simultaneously inside the same cluster. You can list which resources are cluster-scoped and which are namespaced:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl api-resources --namespaced=false|true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Although Kubernetes itself already offers namespaces to separate environments, namespaces still share the control plane and all cluster-scoped resources, which constrains the isolation they can provide.&lt;/p&gt;

&lt;p&gt;In many circumstances, virtual clusters are also more stable than namespaces. The virtual cluster keeps its own Kubernetes resource objects in its own data store, and these resources are invisible to the host cluster.&lt;/p&gt;

&lt;p&gt;This kind of isolation is good for resilience. Engineers who adopt namespace-based isolation still need access to cluster-scoped resources such as cluster roles, shared CRDs, or persistent volumes. If an engineer breaks one of these shared resources, every team that depends on it will likely experience failures.&lt;/p&gt;

&lt;p&gt;Finally, virtual cluster configuration &lt;em&gt;is independent of physical&lt;/em&gt; cluster configuration. This is excellent for multi-tenancy because it allows you to easily create a fresh environment or amazing demo applications. 😎&lt;/p&gt;

&lt;h2&gt;
  
  
  How it looks in your CLI
&lt;/h2&gt;

&lt;p&gt;Create a file called &lt;code&gt;vcluster.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;vcluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rancher/k3s:v1.23.5-k3s1&lt;/span&gt;   &lt;span class="c1"&gt;# Choose k3s version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then install the Helm chart, using &lt;code&gt;vcluster.yaml&lt;/code&gt; for the chart values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm upgrade --install my-vcluster vcluster \
  --values vcluster.yaml \
  --repo https://charts.loft.sh \
  --namespace host-namespace-1 \
  --repository-config=''
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Access:
&lt;/h3&gt;

&lt;p&gt;Download the &lt;code&gt;vcluster&lt;/code&gt; CLI&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-linux-amd64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o vcluster &amp;amp;&amp;amp; chmod +x vcluster;
sudo mv vcluster /usr/local/bin;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, connect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Connect and switch the current context to the vcluster
vcluster connect my-vcluster -n my-vcluster

# Switch back context
vcluster disconnect
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You have an option to create a separate &lt;code&gt;kubeconfig&lt;/code&gt; to use instead of changing the current context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster connect my-vcluster --update-current=false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or you may execute a command directly with &lt;code&gt;vcluster&lt;/code&gt; context without changing the &lt;em&gt;current&lt;/em&gt; context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vcluster connect my-vcluster -- kubectl get namespaces
vcluster connect my-vcluster -- bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Usage:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run any kubectl, helm, etc. command in your vcluster
kubectl get namespace
kubectl get pods -n kube-system
kubectl create namespace demo-nginx
kubectl create deployment nginx-deployment -n demo-nginx --image=nginx
kubectl get pods -n demo-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Cleanup:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm delete my-vcluster -n vcluster-my-vcluster --repository-config=''
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What if you're planning some serious thing?
&lt;/h2&gt;

&lt;p&gt;Well, the stock K8s distribution in &lt;code&gt;vcluster&lt;/code&gt; &lt;em&gt;supports high availability&lt;/em&gt;. What does high availability mean here? One part is making the &lt;code&gt;etcd&lt;/code&gt; database more robust. The other is boosting the syncer's performance. As mentioned above, &lt;code&gt;vcluster&lt;/code&gt; uses a so-called syncer which copies the pods that are created within the virtual cluster to the underlying host cluster. &lt;/p&gt;

&lt;p&gt;🪲&lt;strong&gt;TL;DR #1:&lt;/strong&gt; &lt;code&gt;etcd&lt;/code&gt; uses a leader-based consensus protocol for consistent data replication and log execution. Etcd cluster members elect a single leader, and all other members become followers. When the leader fails, the cluster automatically elects a new one, though not immediately: since failure detection is timeout-based, electing a new leader takes roughly an election timeout. &lt;/p&gt;

&lt;p&gt;🪲&lt;strong&gt;TL;DR #2:&lt;/strong&gt; Why &lt;em&gt;a minimum of three instances&lt;/em&gt; is recommended for an etcd cluster is &lt;a href="https://etcd.io/docs/v3.5/faq/" rel="noopener noreferrer"&gt;well described first-hand here&lt;/a&gt; in the etcd FAQ.&lt;/p&gt;

&lt;p&gt;Currently, vcluster's high availability setup does not support single-binary distributions like &lt;code&gt;k0s&lt;/code&gt; and &lt;code&gt;k3s&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Create a values.yaml with the following structure in order to operate &lt;code&gt;vcluster&lt;/code&gt; in high availability mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Enable HA mode&lt;/span&gt;
&lt;span class="na"&gt;enableHA&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;span class="c1"&gt;# Scale up syncer replicas&lt;/span&gt;
&lt;span class="na"&gt;syncer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;

&lt;span class="c1"&gt;# Scale up etcd&lt;/span&gt;
&lt;span class="na"&gt;etcd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;

&lt;span class="c1"&gt;# Scale up controller manager&lt;/span&gt;
&lt;span class="na"&gt;controller&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;

&lt;span class="c1"&gt;# Scale up api server&lt;/span&gt;
&lt;span class="na"&gt;api&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;

&lt;span class="c1"&gt;# Scale up DNS server&lt;/span&gt;
&lt;span class="na"&gt;coredns&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  To summarize
&lt;/h2&gt;

&lt;p&gt;A fully functioning virtual Kubernetes cluster can be built using &lt;code&gt;vcluster&lt;/code&gt;! Each &lt;code&gt;vcluster&lt;/code&gt; runs inside a namespace of the underlying K8s cluster. It provides better multi-tenancy and isolation than conventional namespaces, and it is less expensive than building independent, fully-fledged clusters.&lt;/p&gt;

&lt;p&gt;Virtual clusters can be a good alternative to running numerous instances of &lt;code&gt;k3s&lt;/code&gt; or &lt;code&gt;k0s&lt;/code&gt; side by side, but they &lt;em&gt;cannot exist on their own without a host&lt;/em&gt; cluster. &lt;/p&gt;

&lt;p&gt;Compared to fully independent Kubernetes clusters, they are faster, lighter, and simpler to access. So give virtual clusters a shot &lt;a href="https://komodor.com/learn/git-revert-rolling-back-in-gitops-and-kubernetes/" rel="noopener noreferrer"&gt;if you're tired&lt;/a&gt; of having to reset your local or CI/CD Kubernetes clusters all the time. However, that is a topic for a completely different story, much sadder than the one you just read.&lt;/p&gt;

&lt;p&gt;Be in good &amp;amp; non-ghost shape! 👻&lt;/p&gt;

&lt;p&gt;Many thanks to Viktor 🐦@vfarcic Farcic and Mauricio 🐦&lt;a class="mentioned-user" href="https://dev.to/salaboy"&gt;@salaboy&lt;/a&gt; Salatino for inspiration!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>productivity</category>
      <category>k3s</category>
    </item>
    <item>
      <title>How to Stop Rampant Kubernetes Cluster Growth</title>
      <dc:creator>Roman Belshevitz</dc:creator>
      <pubDate>Thu, 25 Aug 2022 18:20:00 +0000</pubDate>
      <link>https://forem.com/otomato_io/how-to-stop-rampant-kubernetes-cluster-growth-4eip</link>
      <guid>https://forem.com/otomato_io/how-to-stop-rampant-kubernetes-cluster-growth-4eip</guid>
      <description>&lt;h2&gt;
  
  
  Some lyrics as an introduction
&lt;/h2&gt;

&lt;p&gt;Edvard Munch's famous painting "Scream" was first presented to the public at the Berlin exhibition in December 1893. It was conceived as part of the &lt;a href="https://www.dailyartmagazine.com/edvard-munch-and-the-frieze-of-life/" rel="noopener noreferrer"&gt;"Frieze of Life"&lt;/a&gt; - a programmatic cycle of paintings about the spiritual life of a person. Munch wrote about it: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“The Frieze of Life” is conceived as a series of paintings connected with each other, which together should give a description of a whole life. A winding line of the coast passes through the picture, behind it is the sea, it is always in motion, and under the crowns of trees there is a diverse life with its sorrows and joys. Frieze is conceived as a poem about life, love and death.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The author of this brief, of course, will not talk about spiritual life, but about practical approaches that ward off thoughts of the terrible and otherworldly and spare the nerves of engineers.&lt;/p&gt;

&lt;h2&gt;
  
  
  The essence of the Ops' problem
&lt;/h2&gt;

&lt;p&gt;Kubernetes was originally designed to support the consolidation of workloads on a &lt;em&gt;single&lt;/em&gt; cluster. However, there are many problematic scenarios that require a &lt;em&gt;multi-cluster approach&lt;/em&gt; to optimize performance. These may include workloads across regions, fault propagation radius limits, compliance issues, harsh multi-user environments, security, and custom software solutions.&lt;/p&gt;

&lt;p&gt;Unfortunately, this multi-cluster approach poses management challenges, as the complexity of managing a Kubernetes cluster only increases as the size of the cluster increases. The end result is a phenomenon called &lt;em&gt;cluster sprawl&lt;/em&gt;, which occurs when the number of clusters and workloads grows and is not managed coherently.&lt;/p&gt;

&lt;p&gt;The solution to this problem lies in the early and rapid identification and implementation of the best management practices in order to avoid serious work in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Kubernetes governance?
&lt;/h2&gt;

&lt;p&gt;Governance refers to a well-defined collection of rules, policies, and procedures that ensures accountability, transparency, and responsibility.&lt;/p&gt;

&lt;p&gt;Governance is also about synchronizing clusters and providing centralized policy management. Kubernetes' governance is defined as a set of rules created with policies that need to be enforced across all clusters. This is a critical component for large enterprises running Kubernetes.&lt;/p&gt;

&lt;p&gt;Typically, this process means applying the same rules across multiple Kubernetes clusters, as well as across the applications running in those clusters. And while governing Kubernetes may seem like a minor concern, it pays off in the long run, especially when implemented in a large organization.&lt;/p&gt;

&lt;p&gt;Assume that an enterprise keeps increasing the number of clusters in use without applying governance. These clusters will each operate under different rules, which will create a huge amount of extra work for teams in the near future.&lt;/p&gt;

&lt;p&gt;Fortunately, there are only a few very important components to building a successful Kubernetes governance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating successful Kubernetes governance
&lt;/h2&gt;

&lt;p&gt;When considering a successful Kubernetes governance strategy, the first component is to ensure good multi-cluster management and monitoring. You must maintain control over how and where clusters are created and configured, as well as which software versions can be used.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frp9fax3ns5gh9szpzt9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frp9fax3ns5gh9szpzt9u.png" alt=" " width="500" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  🔧 Well-built observability
&lt;/h3&gt;

&lt;p&gt;Application development and operations teams should be able to centrally view and manage clusters to better optimize resources and troubleshoot. Solutions in this area are developed, for example, by &lt;a href="https://www.redhat.com/en/technologies/management/advanced-cluster-management" rel="noopener noreferrer"&gt;Red Hat&lt;/a&gt;, &lt;a href="https://platform9.com/blog/eks-plug-and-play-centralized-management-of-your-applications-across-aws-eks-clusters/" rel="noopener noreferrer"&gt;Platform9&lt;/a&gt;,  &lt;a href="https://polaris.docs.fairwinds.com/" rel="noopener noreferrer"&gt;Fairwinds&lt;/a&gt; and even &lt;a href="https://github.com/rancher/opni" rel="noopener noreferrer"&gt;Rancher Labs&lt;/a&gt;. Improved management practices and greater transparency can also save a company from the headaches of a range of security risks and performance issues down the road.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔧 RBAC strategies
&lt;/h3&gt;

&lt;p&gt;Next, enterprises must have an authentication and access control system in place. Having centralized authentication and authorization will help an organization streamline the login process and help keep track of user activity. This will allow application development and operations teams &lt;a href="https://www.techtarget.com/searchitoperations/tutorial/Be-selective-with-Kubernetes-RBAC-permissions" rel="noopener noreferrer"&gt;to ensure that the right people&lt;/a&gt; are doing important tasks in real time.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔧 Policy management
&lt;/h3&gt;

&lt;p&gt;Finally, to govern Kubernetes, enterprises must optimize policy management. Companies need to think about how Kubernetes will impact their development culture and work on finding the right balance between business agility and development. Ultimately, governance (with the appropriate level of flexibility) ensures that businesses can meet customer needs and deploy mission-critical services in a consistent and reliable manner.&lt;/p&gt;

&lt;p&gt;In Kubernetes, Admission Controllers enforce policies on objects during create, update, and delete operations. Admission control is fundamental to policy enforcement in Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/blog/2019/03/21/a-guide-to-kubernetes-admission-controllers/" rel="noopener noreferrer"&gt;Admission controllers&lt;/a&gt; allow you to enforce the adherence to certain practices such as having good labels, annotations, resource limits, or other settings.&lt;/p&gt;

&lt;p&gt;As a CNCF project, Open Policy Agent (OPA) is a great tool for developing and enforcing such policies at scale throughout an organization. Every request goes through OPA, as illustrated below, and is decided based on the policies established for the Kubernetes cluster. If the request complies with policy, it is carried out; if it violates the established policies, OPA rejects it.&lt;/p&gt;

&lt;p&gt;As a good practice, by &lt;a href="https://www.openpolicyagent.org/docs/latest/kubernetes-introduction/#how-does-it-work-with-plain-opa-and-kube-mgmt" rel="noopener noreferrer"&gt;deploying OPA&lt;/a&gt; as an admission controller, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Require specific labels on all resources.&lt;/li&gt;
&lt;li&gt;Require container images come from the corporate image registry.&lt;/li&gt;
&lt;li&gt;Require all pods specify resource requests and limits.&lt;/li&gt;
&lt;li&gt;Prevent conflicting Ingress objects from being created.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjlt13feinocj5p1hkmq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjlt13feinocj5p1hkmq.png" alt=" " width="789" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Goals to achieve
&lt;/h2&gt;

&lt;p&gt;But what should be the goals of governance? Where should it be enforced and tested? The four most effective management objectives are security policy, network management, access control, and image management. Let's look at each of these goals one by one:&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 Security policy
&lt;/h3&gt;

&lt;p&gt;In security policies for governing Kubernetes, it is important to restrict user access to pods in clusters. Cluster users should have well-defined access based on their role.&lt;/p&gt;

&lt;p&gt;To do this, enterprises must implement a security policy with rules and conditions related to access and privileges. In this policy, they must specify that containers have read-only access to the file system and that containers and their child processes cannot escalate privileges.&lt;/p&gt;
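&lt;p&gt;In plain Kubernetes terms, such a policy maps to the pod's &lt;code&gt;securityContext&lt;/code&gt;. A minimal sketch (the pod and image names are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
    - name: app
      image: nginx
      securityContext:
        readOnlyRootFilesystem: true      # read-only access to the file system
        allowPrivilegeEscalation: false   # no privilege changes for child processes
        runAsNonRoot: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;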

&lt;h3&gt;
  
  
  🎯 Network management
&lt;/h3&gt;

&lt;p&gt;Network policy plays a very important role in determining which services can communicate with each other. Here, companies must determine which pods and services can interact with each other and which should be isolated. This also relates to pod security in Kubernetes governance.&lt;/p&gt;

&lt;p&gt;The right approach is aimed at controlling traffic within Kubernetes clusters. This approach can be based on pods, namespaces, or IPs, depending on governance requirements.&lt;/p&gt;

&lt;p&gt;Each popular CNI plugin uses a different type of configuration for the network setup. For example, &lt;a href="https://projectcalico.docs.tigera.io/networking/determine-best-networking" rel="noopener noreferrer"&gt;Calico&lt;/a&gt; uses layer 3 networking paired with the BGP routing protocol to connect pods.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/otomato_io/cilium-ebpf-powered-cni-a-nos-solution-for-modern-clouds-1hl1"&gt;Cilium&lt;/a&gt; configures an overlay network with eBPF on layers 3 to 7. Along with Calico, Cilium supports setting up network policies to restrict traffic.&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 Administration and access control
&lt;/h3&gt;

&lt;p&gt;In access control, when configuring role-based access control (RBAC) policy, administrators need to restrict access to cluster resources. Using Kubernetes objects such as &lt;code&gt;Role&lt;/code&gt;, &lt;code&gt;ClusterRole&lt;/code&gt;, &lt;code&gt;RoleBinding&lt;/code&gt;, and &lt;code&gt;ClusterRoleBinding&lt;/code&gt;, they need to fine-tune access to cluster resources appropriately.&lt;/p&gt;

&lt;p&gt;Because permissions granted by a &lt;code&gt;ClusterRole&lt;/code&gt; apply across the entire cluster, you can use &lt;code&gt;ClusterRole&lt;/code&gt;s to control access to different kinds of resources than you can with &lt;code&gt;Role&lt;/code&gt;s. These include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cluster-scoped resources such as nodes&lt;/li&gt;
&lt;li&gt;Non-resource REST Endpoints &lt;a href="https://kubernetes.io/docs/reference/using-api/health-checks/" rel="noopener noreferrer"&gt;such as&lt;/a&gt; &lt;code&gt;/healthz&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Namespaced&lt;/em&gt; resources across all Namespaces (for example, all Pods across the entire cluster, regardless of Namespace).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After creating a &lt;code&gt;Role&lt;/code&gt; or &lt;code&gt;ClusterRole&lt;/code&gt;, &lt;a href="https://learnk8s.io/rbac-kubernetes" rel="noopener noreferrer"&gt;you have to assign it&lt;/a&gt; to a user or group of users by creating a &lt;code&gt;RoleBinding&lt;/code&gt; or &lt;code&gt;ClusterRoleBinding&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;testadminclusterbinding&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myaccount&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-admin&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  🎯 Image management
&lt;/h3&gt;

&lt;p&gt;Using public Docker images can increase the speed and flexibility of application development, but there are many vulnerable Docker images, and using them in a production cluster can be very risky.&lt;/p&gt;

&lt;p&gt;Image management is also part of Kubernetes governance. All images that will be used in the cluster must be pre-scanned for vulnerabilities. There are several approaches to finding vulnerabilities. How and where an organization checks for vulnerabilities depends on its preferred workflows. However, it is recommended that you test your images before deploying them to a cluster.&lt;/p&gt;

&lt;p&gt;Hacker activity has increased exponentially in recent years, and loopholes in systems continue to be discovered. Therefore, it is very important for companies to be vigilant when implementing practices to ensure that they only use official, clean, and verified Docker images on a cluster.&lt;/p&gt;

&lt;p&gt;Threat actors can mount sophisticated attacks by concealing malicious scripts or malware in a container image, turning previously trusted third-party artifacts into an attack vector. Static, pattern-based, or signature-based scanners are not effective against this kind of attack because it only manifests at runtime.&lt;/p&gt;

&lt;p&gt;By evaluating the attack kill chain and running images in a secure hosted sandbox environment, several security solutions can reduce this risk. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa3wpii8ine0jptfqydk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa3wpii8ine0jptfqydk.png" alt=" " width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tools like &lt;a href="https://github.com/aquasecurity/trivy" rel="noopener noreferrer"&gt;trivy&lt;/a&gt; by Aqua Security are frequently &lt;a href="https://github.com/aquasecurity/trivy-action" rel="noopener noreferrer"&gt;incorporated into CI/CD&lt;/a&gt; processes to examine images both before and after they are checked into a registry. Malicious behavior or unmet policy requirements can mark an image for deletion from the registry or prevent check-in entirely.&lt;/p&gt;
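&lt;p&gt;For example, a CI step using Trivy's GitHub Action might look like the following sketch (the image reference is a placeholder; check the action's README for the current inputs):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# GitHub Actions step: fail the build on serious vulnerabilities
- name: Scan image with Trivy
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: 'registry.example.com/myapp:latest'
    exit-code: '1'              # fail the job if findings match
    severity: 'CRITICAL,HIGH'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;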

&lt;h2&gt;
  
  
  Instead of conclusion
&lt;/h2&gt;

&lt;p&gt;Thus, the author has given in brief the directions needed to better govern Kubernetes and ensure the security of important enterprise systems and data, as well as to limit cluster growth and possible disorder. Stay strong and focused!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The author is thankful to Arthur Chiao, Oleg Chunikhin (CNCF), Tomas Fernandez (Rendered Text / Semaphore), Mike Jordan (Coredge), Kristijan Mitevski and Steven Zimmerman (Aqua Security) for their contribution to the community.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>productivity</category>
      <category>team</category>
    </item>
    <item>
      <title>How to Shut Down Kubernetes Pod Gracefully</title>
      <dc:creator>Roman Belshevitz</dc:creator>
      <pubDate>Wed, 17 Aug 2022 21:33:00 +0000</pubDate>
      <link>https://forem.com/otomato_io/how-to-shut-down-kubernetes-pod-gracefully-on6</link>
      <guid>https://forem.com/otomato_io/how-to-shut-down-kubernetes-pod-gracefully-on6</guid>
      <description>&lt;h2&gt;
  
  
  Essence of the question
&lt;/h2&gt;

&lt;p&gt;Since the running processes on your cluster are represented by pods, Kubernetes offers graceful termination when pods are no longer required, imposing a default grace period of 30 seconds after you submit a termination request. &lt;/p&gt;

&lt;p&gt;The steps listed below make up a typical Kubernetes Pod termination:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To end the Pod, you send a command or make an API call.&lt;/li&gt;
&lt;li&gt;Kubernetes updates the Pod's status to reflect the time after which the Pod is to be regarded as &lt;em&gt;dead&lt;/em&gt; (the time of the termination request plus the grace period).&lt;/li&gt;
&lt;li&gt;When a pod enters the &lt;em&gt;Terminating&lt;/em&gt; state, Kubernetes stops transmitting traffic to it.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;SIGTERM&lt;/code&gt; signal from Kubernetes instructs the Pod to stop operating.&lt;/li&gt;
&lt;/ul&gt;
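&lt;p&gt;The grace period from the steps above can be tuned per pod, and a &lt;code&gt;preStop&lt;/code&gt; hook can buy time for in-flight work before &lt;code&gt;SIGTERM&lt;/code&gt; arrives. A minimal sketch (the pod name, image, and sleep duration are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: graceful-demo
spec:
  terminationGracePeriodSeconds: 45   # override the 30-second default
  containers:
    - name: app
      image: nginx
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 5"]   # give in-flight requests time to drain
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;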

&lt;p&gt;Pods can be terminated for a variety of reasons over the lifecycle of an application. In Kubernetes, these reasons include user input via &lt;code&gt;kubectl delete&lt;/code&gt; or system upgrades, among others. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2eo9p3ov8qi8d8pg65j7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2eo9p3ov8qi8d8pg65j7.png" alt=" " width="800" height="385"&gt;&lt;/a&gt;&lt;br&gt;
🖼️ A larger picture is &lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2eo9p3ov8qi8d8pg65j7.png" rel="noopener noreferrer"&gt;here&lt;/a&gt;. You may also open it in a new browser tab to zoom in.&lt;/p&gt;

&lt;p&gt;A resource problem, on the other hand, could also lead to a Pod's termination. &lt;/p&gt;
&lt;h2&gt;
  
  
  Misunderstood action
&lt;/h2&gt;

&lt;p&gt;With some configuration, Kubernetes enables graceful termination of the containers running in the Pod. Before moving on to the setup, let's first understand how the delete/termination process proceeds.&lt;/p&gt;

&lt;p&gt;Once the user issues the &lt;code&gt;kubectl delete&lt;/code&gt; command, it is transmitted to the API server, and the Pod is removed from the endpoints object. As we saw when creating the Pod, the endpoint is crucial for receiving updates when providing any services.&lt;/p&gt;

&lt;p&gt;In this action, the endpoint will be immediately removed from the control plane while readiness probes are disregarded. This will start events on the DNS, ingress controller, and &lt;code&gt;kube-proxy&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;As a result, all of those components update their references and stop forwarding traffic to the IP address. Please be aware that while this procedure may be speedy, a component may occasionally be preoccupied with other tasks. A delay can therefore be anticipated, and the reference won't be updated right away.&lt;/p&gt;

&lt;p&gt;At the same time, the Pod's status in &lt;code&gt;etcd&lt;/code&gt; is changed to &lt;em&gt;Terminating&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The kubelet is alerted through its watch on the API server and delegates the cleanup to the same interfaces that were used during Pod creation. It proceeds by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unmounting all volumes from the container via the Container Storage Interface (CSI).&lt;/li&gt;
&lt;li&gt;Relinquishing the IP address and disconnecting the container from the network via the Container Network Interface (CNI).&lt;/li&gt;
&lt;li&gt;Destroying the container via the Container Runtime Interface (CRI).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: Kubernetes updates the endpoints after waiting for the &lt;code&gt;kubelet&lt;/code&gt; update to provide the IP data during the Pod creation. However, when the Pod terminates, it simultaneously updates the &lt;code&gt;kubelet&lt;/code&gt; and &lt;em&gt;removes&lt;/em&gt; the endpoint.&lt;/p&gt;
&lt;h2&gt;
  
  
  Premature termination?
&lt;/h2&gt;

&lt;p&gt;How is this a problem? The hitch is that it sometimes takes a while for components to update their endpoints. If the Pod is killed before the endpoint change has been propagated, we experience downtime. But why?&lt;/p&gt;

&lt;p&gt;As previously indicated, ingress and other higher-level components have not yet been updated, so traffic is still forwarded to the removed Pod. We might believe that Kubernetes should propagate the change throughout the cluster and prevent such a problem.&lt;/p&gt;

&lt;p&gt;But it does not.&lt;/p&gt;

&lt;p&gt;Kubernetes does not validate that the changes on the components are current because it distributes the endpoints using endpoint objects and sophisticated abstractions like &lt;a href="https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/" rel="noopener noreferrer"&gt;Endpoint Slices&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gn8v4wuh19f35oycyor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gn8v4wuh19f35oycyor.png" alt=" " width="696" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because of this possibility of downtime, we cannot guarantee 100% application uptime. The only way to accomplish it is to make sure the endpoint has been removed everywhere &lt;em&gt;before&lt;/em&gt; the Pod is destroyed. Is that really possible? Let's investigate.&lt;/p&gt;
&lt;h2&gt;
  
  
  API magic?
&lt;/h2&gt;

&lt;p&gt;For that, we must have a thorough understanding of what transpires in containers when the delete command is sent.&lt;/p&gt;

&lt;p&gt;The Pod receives the &lt;code&gt;SIGTERM&lt;/code&gt; signal after &lt;code&gt;kubectl delete&lt;/code&gt; is issued. By default, Kubernetes sends &lt;code&gt;SIGTERM&lt;/code&gt; and then waits 30 seconds before forcibly ending the process. Within that window, we can make the application wait before acting, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wait a few seconds before starting to exit.&lt;/li&gt;
&lt;li&gt;Keep processing traffic for another 10 to 20 seconds.&lt;/li&gt;
&lt;li&gt;Close all backend connections, including those to databases and WebSockets.&lt;/li&gt;
&lt;li&gt;Finally, end the process.&lt;/li&gt;
&lt;/ul&gt;
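&lt;p&gt;The sequence above can be sketched with a signal handler, runnable locally without a cluster. The worker function and timings below are illustrative, not part of any Kubernetes API:&lt;/p&gt;

```shell
# Minimal sketch of graceful SIGTERM handling (illustrative timings).
# The worker traps SIGTERM, drains briefly, closes up, and exits 0 --
# what Kubernetes expects a container to do within the grace period.
worker() {
  trap 'echo "draining in-flight work"; sleep 1; echo "connections closed"; exit 0' TERM
  while true; do sleep 1; done   # stand-in for serving traffic
}
worker &
WORKER_PID=$!
sleep 1
kill -TERM "$WORKER_PID"   # what the kubelet sends at termination time
wait "$WORKER_PID"
WORKER_EXIT=$?
echo "worker exit code: $WORKER_EXIT"
```

&lt;p&gt;A zero exit code before the grace period runs out means the kubelet never has to escalate to &lt;code&gt;SIGKILL&lt;/code&gt;.&lt;/p&gt;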

&lt;p&gt;If your application needs more than the 30 seconds to terminate, you can add or modify &lt;code&gt;terminationGracePeriodSeconds&lt;/code&gt; in your Pod definition.&lt;/p&gt;

&lt;p&gt;You can add a script that waits for a while before exiting. For this, Kubernetes exposes &lt;a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="noopener noreferrer"&gt;a preStop hook&lt;/a&gt; that runs in the Pod before &lt;code&gt;SIGTERM&lt;/code&gt; is sent. You can implement it as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
          &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;lifecycle&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;preStop&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;exec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sleep"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This option makes the &lt;code&gt;kubelet&lt;/code&gt; run the preStop hook (here, a 10-second sleep) before sending &lt;code&gt;SIGTERM&lt;/code&gt;, although it should be noted that this might not be enough, since your application might still be handling older requests. How do you cover those? By including &lt;code&gt;terminationGracePeriodSeconds&lt;/code&gt;, which makes Kubernetes wait longer before forcibly terminating the container. The final manifest looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
          &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;lifecycle&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;preStop&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;exec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sleep"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;terminationGracePeriodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;45&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe34w8i857rttigaidlxn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe34w8i857rttigaidlxn.png" alt=" " width="696" height="302"&gt;&lt;/a&gt;&lt;/p&gt;
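&lt;p&gt;A rule of thumb for sizing the grace period, with illustrative numbers (this is not an official formula): it must cover the preStop sleep plus the worst-case time to drain in-flight requests, with some margin — which is how a value such as the 45 seconds above can be arrived at:&lt;/p&gt;

```shell
# Illustrative sizing of terminationGracePeriodSeconds (assumed numbers):
PRESTOP_SLEEP=10   # duration of the preStop "sleep" hook
DRAIN_TIME=30      # worst-case time the app needs to finish in-flight requests
MARGIN=5           # safety margin
GRACE=$((PRESTOP_SLEEP + DRAIN_TIME + MARGIN))
echo "terminationGracePeriodSeconds: $GRACE"
```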

&lt;h2&gt;
  
  
  Command line hacks
&lt;/h2&gt;

&lt;p&gt;This setting gives the app time to handle all requests and shut down its connections cleanly, preventing a forced shutdown.&lt;/p&gt;

&lt;p&gt;When manually deleting a resource, you may also modify the default grace period by providing the &lt;code&gt;--grace-period=SECONDS&lt;/code&gt; argument to the &lt;code&gt;kubectl delete&lt;/code&gt; command. For instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl delete deployment test --grace-period=60
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What about rolling updates?&lt;/p&gt;

&lt;p&gt;Pods are also removed when we upgrade or deploy a new version. What happens if you upgrade your app from, say, v1.1 to v1.2 &lt;em&gt;while&lt;/em&gt; running v1.1 with 3 replicas? The Deployment controller:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates a Pod using the new container image.&lt;/li&gt;
&lt;li&gt;Deletes one of the existing Pods.&lt;/li&gt;
&lt;li&gt;Waits for the new Pod to become ready.&lt;/li&gt;
&lt;li&gt;Repeats until every Pod has been moved to the new version.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ok, this ensures the deployment of the new version. But what about old pods? Does Kubernetes wait until all the pods have been deleted? &lt;/p&gt;

&lt;p&gt;The answer is no.&lt;/p&gt;

&lt;p&gt;The old-version Pods are gracefully terminated and removed as the rollout moves forward. However, while old Pods are still being drained, there may occasionally be almost &lt;em&gt;twice as many&lt;/em&gt; Pods running.&lt;/p&gt;
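&lt;p&gt;How many extra Pods appear during a rollout is governed by the Deployment's update strategy. A minimal sketch of a manifest pinning the surge (field values are illustrative):&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra Pod above the desired count
      maxUnavailable: 0  # never fall below the desired replica count
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      terminationGracePeriodSeconds: 45
      containers:
        - name: nginx
          image: nginx:1.23   # pinned version
```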

&lt;h2&gt;
  
  
  Putting an end to ongoing processes
&lt;/h2&gt;

&lt;p&gt;Even though we have taken all the necessary precautions, some apps or WebSockets may require prolonged service, and we may be unable to stop while lengthy operations or requests are in progress. Rolling updates will be at risk throughout that period. How can we overcome this?&lt;/p&gt;

&lt;p&gt;There are two options.&lt;/p&gt;

&lt;p&gt;You can increase &lt;code&gt;terminationGracePeriodSeconds&lt;/code&gt; to a few hours, or you can start a new Deployment alongside the current one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 1
&lt;/h3&gt;

&lt;p&gt;If you choose this route, the Pod's endpoint becomes unreachable as soon as termination starts. Also note that you must track those Pods manually: monitoring tools gather their data from endpoints, and once the endpoint is withdrawn, they all lose sight of the Pod.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 2
&lt;/h3&gt;

&lt;p&gt;Your old Deployment will still be there when you establish the new one, so all the lengthy processes &lt;em&gt;will continue to run&lt;/em&gt; until they are finished. Once you can see that they have finished, you remove the old Deployment manually.&lt;/p&gt;

&lt;p&gt;If you want to eliminate them automatically, you can configure an autoscaler to scale your Deployment to zero replicas when it runs out of jobs (third-party tools like KEDA can &lt;a href="https://keda.sh/docs/1.4/concepts/scaling-deployments/" rel="noopener noreferrer"&gt;simplify this&lt;/a&gt;). This way you can also keep older Pods running for longer than the grace period.&lt;/p&gt;

&lt;p&gt;A less obvious but superior option is to start a fresh Deployment for each update. While the most recent Deployment serves new users, existing users can keep using the old version. As users disconnect from the old Pods, you gradually reduce the replica count and retire the old Deployments.&lt;/p&gt;

&lt;p&gt;The author hopes this article is helpful. Weigh your options and choose the one that best satisfies your needs!&lt;/p&gt;

&lt;p&gt;Pictures courtesy 🐦 @motoskia &amp;amp; 🐦 @foxutech&lt;/p&gt;

&lt;p&gt;Some great schematic diagrams can be found &lt;a href="https://learnk8s.io/graceful-shutdown" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Thanks, Daniele Polencic.&lt;/p&gt;

&lt;p&gt;More to read: How Kubernetes Reinvented Virtual Machines (in a good sense), a &lt;a href="https://iximiuz.com/en/posts/kubernetes-vs-virtual-machines/" rel="noopener noreferrer"&gt;great article&lt;/a&gt; by Ivan Velichko.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Datree, a Tool Which Really Shifts Your Cluster Security Even More Left</title>
      <dc:creator>Roman Belshevitz</dc:creator>
      <pubDate>Fri, 05 Aug 2022 11:45:00 +0000</pubDate>
      <link>https://forem.com/otomato_io/datree-a-tool-which-really-shifts-your-cluster-security-even-more-left-1g20</link>
      <guid>https://forem.com/otomato_io/datree-a-tool-which-really-shifts-your-cluster-security-even-more-left-1g20</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;The idea of writing this article was born by the author after getting acquainted with two entities: &lt;a href="https://dev.to/otomato_io/kubescape-a-kind-insurance-inspector-for-your-kubernetes-investments-3hb6"&gt;this&lt;/a&gt; and &lt;a href="https://github.com/datreeio/awesome-datree" rel="noopener noreferrer"&gt;this repository&lt;/a&gt;. 😎&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  So, no one wants to throw poop in the pot
&lt;/h2&gt;

&lt;p&gt;As part of the increasingly popular GitOps approach, most of the time you will update your Helm charts (Deployment, Service, or Pod) or plain YAML manifests for Kubernetes and then immediately apply the changes to the Production, Staging, or Test environment, depending on the situation.&lt;/p&gt;

&lt;p&gt;Worried about the new Kubernetes manifest modification you're deploying to the production environment? Will it actually work during a release or deployment?&lt;/p&gt;

&lt;p&gt;A similarly nervous situation was &lt;a href="https://twitter.com/chiefmartec/status/795224543967191041" rel="noopener noreferrer"&gt;illustrated&lt;/a&gt; about six years ago (the author just remembered it).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fewi3dieo66dnwr55rfft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fewi3dieo66dnwr55rfft.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No matter how skilled or experienced you are as an engineer, this will be a scary experience if you are unsure about your changes. &lt;/p&gt;

&lt;p&gt;So, is there a way to validate your YAML manifests and Helm charts before they are used in production? Fortunately, the answer is “yes”. A tool named Datree can be used to &lt;em&gt;validate&lt;/em&gt; your Kubernetes manifests &lt;em&gt;before&lt;/em&gt; applying any changes!&lt;/p&gt;

&lt;h2&gt;
  
  
  What makes Datree special
&lt;/h2&gt;

&lt;p&gt;While &lt;a href="https://dev.to/otomato_io/kubescape-a-kind-insurance-inspector-for-your-kubernetes-investments-3hb6"&gt;Kubescape&lt;/a&gt; is a utility that improves Kubernetes security by scanning clusters and detecting &lt;em&gt;already deployed&lt;/em&gt; YAML files that are not compliant with security standards or are subject to vulnerabilities &lt;em&gt;(which the engineer might not have known about)&lt;/em&gt;, Datree was made 💡&lt;em&gt;to prevent&lt;/em&gt; Kubernetes misconfigurations &lt;em&gt;from reaching production&lt;/em&gt; via automated policy checks in your pipeline. &lt;/p&gt;

&lt;p&gt;Datree, a 5.8k⭐ open-source CLI tool, empowers engineers to write more stable configurations, so they can actually sleep at night. It allows you &lt;a href="https://hub.datree.io/integrations" rel="noopener noreferrer"&gt;to integrate it into any CI flow&lt;/a&gt; and &lt;a href="https://github.com/datreeio/action-datree" rel="noopener noreferrer"&gt;trigger it whenever you want&lt;/a&gt;, for example each time the team makes a change or submits a pull request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F389p1jefu1l5uvvm4078.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F389p1jefu1l5uvvm4078.png" alt=" " width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 Is it safe in terms of data leaks?
&lt;/h3&gt;

&lt;p&gt;The creators' statements can be interpreted to mean that the evaluation of Datree's policies happens purely locally. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febkn7dz52o0amou6z8uo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febkn7dz52o0amou6z8uo.png" alt=" " width="800" height="354"&gt;&lt;/a&gt;&lt;br&gt;
🖼️ A larger picture is &lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ebkn7dz52o0amou6z8uo.png" rel="noopener noreferrer"&gt;here&lt;/a&gt;. You may also open it in a new browser tab to zoom in.&lt;/p&gt;

&lt;p&gt;Since the CLI runs the policy check on your system, your files and their contents are not transferred to the Datree backend. The tool sends only metadata to the backend, which is used to show your policy check history on your dashboard.&lt;/p&gt;

&lt;p&gt;No less important: Datree does not need to be connected to the cluster to run checks. It has &lt;a href="https://hub.datree.io/setup/offline-mode" rel="noopener noreferrer"&gt;an offline mode&lt;/a&gt; as well.&lt;/p&gt;
&lt;h3&gt;
  
  
  🎯 What to verify?
&lt;/h3&gt;

&lt;p&gt;The following three assertions are verified by Datree:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is the file a well-composed YAML file?&lt;/li&gt;
&lt;li&gt;Kubernetes schema: is this a valid Kubernetes file?&lt;/li&gt;
&lt;li&gt;Is the file compliant with your Kubernetes policy?&lt;/li&gt;
&lt;/ul&gt;
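&lt;p&gt;To make the three checks concrete, here is an illustrative manifest (not from the Datree docs) that passes the first check but fails the other two: it is well-composed YAML, yet &lt;code&gt;replicas&lt;/code&gt; violates the Kubernetes schema, and the unpinned image violates a default policy rule:&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: "three"        # schema violation: must be an integer
  selector:
    matchLabels: { app: demo }
  template:
    metadata:
      labels: { app: demo }
    spec:
      containers:
        - name: web
          image: nginx     # policy violation: no pinned image tag
```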
&lt;h3&gt;
  
  
  🎯 Policy as code
&lt;/h3&gt;

&lt;p&gt;Policy-as-code, similar to Infrastructure-as-Code, is the concept of &lt;a href="https://hub.datree.io/setup/policy-as-code" rel="noopener noreferrer"&gt;using declarative code&lt;/a&gt; to replace actions that require a user interface. Encoding policies in code lets you apply proven software development best practices, such as version control, collaboration, and automation.&lt;/p&gt;
&lt;h3&gt;
  
  
  🎯 Centralized policy
&lt;/h3&gt;

&lt;p&gt;This idea refers to managing distributed policy execution from a single point. This gives the policy owner simple control over the rules that Datree evaluates during each run without adding more work to the operation. You can control the central policy by logging into the &lt;a href="https://app.datree.io/" rel="noopener noreferrer"&gt;dashboard&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  🎯 Rule flexibility
&lt;/h3&gt;

&lt;p&gt;To fit your preferences, you can toggle any of the &lt;a href="https://hub.datree.io/built-in-rules" rel="noopener noreferrer"&gt;50+ built-in rules&lt;/a&gt; "ON" or "OFF" in the dashboard. When a rule is turned on or off, any policy checks run against that policy will automatically be updated (via &lt;a href="https://hub.datree.io/setup/account-token" rel="noopener noreferrer"&gt;account token&lt;/a&gt;). This eliminates the need for the policy owner to manually update every device (cluster host) linked to the policy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqnglb4m4qtn15lilj6fm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqnglb4m4qtn15lilj6fm.png" alt=" " width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can pick from dozens of tried-and-true rules that address various Kubernetes resources and use cases, which are tied to: containers, cron jobs, &lt;a href="https://kubernetes.io/docs/concepts/workloads/" rel="noopener noreferrer"&gt;workloads&lt;/a&gt; (running apps), networking, security, API deprecation, ArgoCD rollouts, CVEs described by the NSA and to other Kubernetes syntax nuances.&lt;/p&gt;
&lt;h3&gt;
  
  
  🎯 Custom rules
&lt;/h3&gt;

&lt;p&gt;In addition to the tool's built-in rules, you can &lt;a href="https://hub.datree.io/custom-rules/custom-rules-overview" rel="noopener noreferrer"&gt;write any rules you like&lt;/a&gt; and run them against your Kubernetes configurations to check for violations. Because it is based on JSON Schema, the rule engine supports both YAML and JSON declarative syntax.&lt;/p&gt;
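&lt;p&gt;Since the engine is JSON Schema based, a custom rule essentially boils down to a schema that every config must satisfy. Below is a plain JSON Schema sketch (in YAML form) of a rule requiring a memory limit; this fragment alone is only an illustration, and the policy-file wrapper it goes into is described in the custom rules documentation linked above:&lt;/p&gt;

```yaml
# Illustrative JSON Schema (YAML form): every container must declare
# spec.containers[*].resources.limits.memory. Not the exact Datree
# policy-file format -- see the custom rules docs for the wrapper.
properties:
  spec:
    properties:
      containers:
        items:
          required: ["resources"]
          properties:
            resources:
              required: ["limits"]
              properties:
                limits:
                  required: ["memory"]
```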
&lt;h2&gt;
  
  
  Installation and general usage
&lt;/h2&gt;

&lt;p&gt;All you need to do is run the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://get.datree.io | /bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once installed, you can easily use Datree to check the security of Kubernetes manifests.&lt;/p&gt;

&lt;p&gt;The syntax is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;datree test [k8s-manifest-file]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When a check is started, it goes through 3 main stages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;YAML validation;&lt;/li&gt;
&lt;li&gt;Kubernetes schema validation;&lt;/li&gt;
&lt;li&gt;checking Kubernetes policies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, when checking the demo manifest, the command looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;datree test ~/.datree/k8s-demo.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's see the output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ datree test ~/.datree/k8s-demo.yaml
&amp;gt;&amp;gt;  File: .datree/k8s-demo.yaml

[V] YAML validation
[V] Kubernetes schema validation

[X] Policy check

❌  Ensure each container image has a pinned (tag) version  [1 occurrence]
    - metadata.name: rss-site (kind: Deployment)
💡  Incorrect value for key `image` - specify an image version to avoid unpleasant "version surprises" in the future

❌  Ensure each container has a configured liveness probe  [1 occurrence]
    - metadata.name: rss-site (kind: Deployment)
💡  Missing property object `livenessProbe` - add a properly configured livenessProbe to catch possible deadlocks

❌  Ensure each container has a configured memory limit  [1 occurrence]
    - metadata.name: rss-site (kind: Deployment)
💡  Missing property object `limits.memory` - value should be within the accepted boundaries recommended by the organization

❌  Ensure workload has valid label values  [1 occurrence]
    - metadata.name: rss-site (kind: Deployment)
💡  Incorrect value for key(s) under `labels` - the vales syntax is not valid so the Kubernetes engine will not accept it


(Summary)

- Passing YAML validation: 1/1

- Passing Kubernetes (1.20.0) schema validation: 1/1

- Passing policy check: 0/1

+-----------------------------------+-----------------------+
| Enabled rules in policy "Default" | 21                    |
| Configs tested against policy     | 1                     |
| Total rules evaluated             | 21                    |
| Total rules skipped               | 0                     |
| Total rules failed                | 4                     |
| Total rules passed                | 17                    |
| See all rules in policy           | https://app.datree.io |
+-----------------------------------+-----------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the output, you can see detailed information about the violations present in the manifest. This gives engineers the necessary guidance to resolve them.&lt;/p&gt;

&lt;p&gt;Each Datree policy check is performed using the default policy, which includes 50+ built-in rules.&lt;/p&gt;

&lt;p&gt;To configure the policy, go back to the terminal output and register by clicking the link provided at its end.&lt;/p&gt;

&lt;h3&gt;
  
  
  kubectl plugin
&lt;/h3&gt;

&lt;p&gt;This &lt;code&gt;kubectl&lt;/code&gt; &lt;a href="https://github.com/datreeio/kubectl-datree" rel="noopener noreferrer"&gt;plugin&lt;/a&gt; extends the Datree CLI's capabilities to allow scanning resources within your cluster for misconfigurations.&lt;/p&gt;

&lt;p&gt;The Kubectl plugin can be installed using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl krew install datree
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What you should see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl krew install datree
Updated the local copy of plugin index.
Installing plugin: datree
Installed plugin: datree
\
 | Use this plugin:
 |  kubectl datree
 | Documentation:
 |  https://github.com/datreeio/kubectl-datree
 | Caveats:
 | \
 |  | Before using this plugin, the Datree CLI needs to be installed.
 |  | See https://hub.datree.io/ for quick and easy installation.
 | /
/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's try to check our cluster's namespace (having &lt;a href="https://bitnami.com/stack/drupal/helm" rel="noopener noreferrer"&gt;this release&lt;/a&gt; by Bitnami deployed in it).&lt;/p&gt;

&lt;p&gt;Right now, the default K8s schema version used when you run a policy check is &lt;code&gt;1.20.0&lt;/code&gt;. To check for deprecated APIs before deploying your K8s manifest, change the default &lt;a href="https://hub.datree.io/setup/schema-validation" rel="noopener noreferrer"&gt;K8s version in your dashboard&lt;/a&gt; to match your cluster's server version. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcppxxl3sqeeomvqvow2u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcppxxl3sqeeomvqvow2u.png" alt=" " width="633" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The author's cluster playground is k3s/k3d, so the version is&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ k3d version
k3s version v1.22.7-k3s1 (default)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another (and more common) approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl version --short
Server Version: v1.22.7+k3s1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🥁 Let's perform a scan routine with Datree now.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl datree test -s "1.22.7" -- service my-release-drupal

(Summary)

- Passing YAML validation: 1/1

- Passing Kubernetes (1.22.7) schema validation: 1/1

- Passing policy check: 1/1

+-----------------------------------+-----------------------+
| Enabled rules in policy "Default" | 21                    |
| Configs tested against policy     | 1                     |
| Total rules evaluated             | 21                    |
| Total rules skipped               | 0                     |
| Total rules failed                | 0                     |
| Total rules passed                | 21                    |
| See all rules in policy           | https://app.datree.io |
+-----------------------------------+-----------------------+

The following cluster resources in namespace 'default' were checked:

service/my-release-drupal

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You may see a very similar report on Datree's SaaS dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxnj1njfyinsuaiukrsf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxnj1njfyinsuaiukrsf.png" alt=" " width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If no rules failed, you get a green mark. Well done!&lt;/p&gt;
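&lt;p&gt;Since the CLI signals failures through its exit status, you can also gate scripts on the result. A minimal sketch (assuming the &lt;code&gt;datree&lt;/code&gt; CLI is on your PATH and &lt;code&gt;deployment.yaml&lt;/code&gt; is one of your manifests):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: abort a deploy when the policy check fails (non-zero exit code)
if datree test deployment.yaml; then
  echo "Policy check passed - safe to apply"
else
  echo "Policy check failed - aborting" &gt;&amp;2
  exit 1
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;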

&lt;h2&gt;
  
  
  helm plugin
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/datreeio/helm-datree" rel="noopener noreferrer"&gt;This plugin&lt;/a&gt; is used to check charts against Datree policy. The mentioned plugin can be installed using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm plugin install https://github.com/datreeio/helm-datree
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run a Datree policy check on a Helm chart, use the following syntax:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm datree test [CHART_DIRECTORY]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you need to pass arguments to your template, add &lt;code&gt;--&lt;/code&gt; in front of them, as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm datree test [CHART_DIRECTORY] -- --values ​​values.yaml --set name=prod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  CI integration
&lt;/h2&gt;

&lt;p&gt;To integrate Datree into your CI/CD pipeline, follow these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get your account token (found under Settings in the dashboard).&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;DATREE_TOKEN&lt;/code&gt; as a secret/environment variable.&lt;/li&gt;
&lt;li&gt;Add Datree &lt;a href="https://github.com/datreeio/action-datree" rel="noopener noreferrer"&gt;to your CI flow&lt;/a&gt; using that token, as shown below (e.g., for GitHub Actions).&lt;/li&gt;
&lt;/ul&gt;
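&lt;p&gt;Before wiring it into CI, you can reproduce the same check locally. A minimal sketch (assuming the &lt;code&gt;datree&lt;/code&gt; CLI is installed; the token value is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Placeholder token - use the one from your dashboard's Settings
export DATREE_TOKEN="your-token-here"
# Same check the CI action runs: all YAML files, Kubernetes manifests only
datree test ./**/*.yaml --only-k8s-files
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;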

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8tgs1wzf4pvb9hh3lml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8tgs1wzf4pvb9hh3lml.png" alt=" " width="800" height="112"&gt;&lt;/a&gt;&lt;br&gt;
🖼️ A larger version of the picture is &lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a8tgs1wzf4pvb9hh3lml.png" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Here is an example of invoking the action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;main&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;main&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;DATREE_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DATREE_TOKEN }}&lt;/span&gt; 

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;k8sPolicyCheck&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run Datree Policy Check&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;datreeio/action-datree@main&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;**/*.yaml'&lt;/span&gt;
          &lt;span class="na"&gt;cliArguments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--only-k8s-files'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Instead of a conclusion
&lt;/h2&gt;

&lt;p&gt;That wraps up this brief look at using Datree to run security checks on Helm charts and Kubernetes manifests. Hopefully you'll agree that Datree helps prevent the kinds of configuration problems that can bring a production cluster down.&lt;/p&gt;

&lt;p&gt;Wishing you clean YAMLs and green marks, folks! ✅&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dhx2hb4dwrcd66bjjqd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dhx2hb4dwrcd66bjjqd.png" alt=" " width="304" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My sincere gratitude goes to Noaa 🐦@BarkiNoaa Barki, Eyar 🐦@eyarzilb Zilberman, Anais 🐦&lt;a class="mentioned-user" href="https://dev.to/urlichsanais"&gt;@urlichsanais&lt;/a&gt; Urlich, and Scott 🐦@chiefmartec Brinker.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>gitops</category>
      <category>datree</category>
    </item>
  </channel>
</rss>
