<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: ylcnky</title>
    <description>The latest articles on Forem by ylcnky (@ylcnky).</description>
    <link>https://forem.com/ylcnky</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F795014%2Ffeed28c0-0f70-4319-a8cf-243b33458d4f.jpeg</url>
      <title>Forem: ylcnky</title>
      <link>https://forem.com/ylcnky</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ylcnky"/>
    <language>en</language>
    <item>
      <title>Kubernetes Patterns: The Sidecar</title>
      <dc:creator>ylcnky</dc:creator>
      <pubDate>Sun, 16 Jan 2022 20:44:12 +0000</pubDate>
      <link>https://forem.com/ylcnky/kubernetes-patters-the-sidecar-52de</link>
      <guid>https://forem.com/ylcnky/kubernetes-patters-the-sidecar-52de</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AIbs1rQH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://docs.microsoft.com/en-us/azure/architecture/patterns/_images/sidecar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AIbs1rQH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://docs.microsoft.com/en-us/azure/architecture/patterns/_images/sidecar.png" alt="image" width="705" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The basic idea behind the design of UNIX is not to have a single complex, monolithic tool do everything. Instead, UNIX is built from small &lt;strong&gt;pluggable&lt;/strong&gt; components that are of limited use on their own but, when combined, can perform powerful operations. Let's take the &lt;code&gt;ps&lt;/code&gt; command as an example: &lt;code&gt;ps&lt;/code&gt; on its own displays the currently running processes on your UNIX box. It has a decent number of flags that allow you to display many aspects of each process. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The user that started a process&lt;/li&gt;
&lt;li&gt;How much CPU each running process is using&lt;/li&gt;
&lt;li&gt;The command used to start the process, and a lot more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;ps&lt;/code&gt; command does an excellent job of displaying information about the running processes. However, there isn't any &lt;code&gt;ps&lt;/code&gt; flag that filters its output. This is not a missing feature; it is intentional.&lt;/p&gt;

&lt;p&gt;There is another tool that does an excellent job of filtering whatever text is fed into it: &lt;code&gt;grep&lt;/code&gt;. So, using the pipe character &lt;code&gt;|&lt;/code&gt;, you can filter the output of &lt;code&gt;ps&lt;/code&gt; to show only the SSH processes running on your system like this: &lt;code&gt;ps -ef | grep -i ssh&lt;/code&gt;. The &lt;code&gt;ps&lt;/code&gt; tool is concerned with displaying every possible aspect of running processes. The &lt;code&gt;grep&lt;/code&gt; command is concerned with filtering text, any text, in many different ways.&lt;/p&gt;

&lt;p&gt;Because of its power and simplicity, this UNIX principle has been applied in many domains beyond operating systems. In Kubernetes, for example, each container should do only one job and do it well. But what if the container's job requires extra procedures to aid or enhance it? Just as we piped the output of the &lt;code&gt;ps&lt;/code&gt; command to &lt;code&gt;grep&lt;/code&gt;, we can place another container beside the main one in the same Pod. That second container carries out the auxiliary logic the first container needs to function correctly, and it is commonly known as a &lt;strong&gt;Sidecar&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Does a Sidecar Container Do?
&lt;/h2&gt;

&lt;p&gt;A Pod is the basic atomic unit of deployment in Kubernetes. Typically, a Pod contains a single container. However, multiple containers can be placed in the same Pod, and all containers running in the same Pod share the Pod's volumes and network interface. In fact, the Pod itself is backed by an infrastructure container that executes the &lt;code&gt;pause&lt;/code&gt; command; its sole purpose is to hold the network interfaces and the Linux namespaces in which the other containers run. A Sidecar container is a second container added to the Pod definition. It must be placed in the same Pod because it needs to use the same resources as the main container. Let's walk through an example that demonstrates a use case of this pattern.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario: Log-Shipping Sidecar
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lcMcEtlz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d33wubrfki0l68.cloudfront.net/d55c404912a21223392e7d1a5a1741bda283f3df/c0397/images/docs/user-guide/logging/logging-with-sidecar-agent.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lcMcEtlz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d33wubrfki0l68.cloudfront.net/d55c404912a21223392e7d1a5a1741bda283f3df/c0397/images/docs/user-guide/logging/logging-with-sidecar-agent.png" alt="image" width="500" height="250"&gt;&lt;/a&gt;&lt;br&gt;
In this scenario, we have a web server container running the nginx image. The access and error logs produced by the web server are not critical enough to be placed on a Persistent Volume (PV). However, developers need access to the last 24 hours of logs so they can trace issues and bugs. Therefore, we need to ship the web server's access and error logs to a log-aggregation service. Following the separation-of-concerns principle, we implement the Sidecar pattern by deploying a second container that ships the error and access logs produced by nginx. Nginx does one thing: serving web pages. The second container specializes in its own task: shipping logs. Since the containers run in the same Pod, we can use a shared &lt;code&gt;emptyDir&lt;/code&gt; volume to read and write logs. The definition file for such a Pod may look as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webserver&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared-logs&lt;/span&gt;
      &lt;span class="na"&gt;emptyDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;

  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
      &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared-logs&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/log/nginx&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sidecar-container&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sh"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;while&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;do&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cat&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;/var/log/nginx/access.log&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;/var/log/nginx/error.log;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;30;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;done"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared-logs&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/log/nginx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above definition is a standard Kubernetes Pod definition, except that it deploys two containers to the same Pod. The sidecar container conventionally comes second in the definition so that when you issue the &lt;code&gt;kubectl exec&lt;/code&gt; command, you target the main container by default. The main container is an nginx container that's instructed to store its logs on a volume mounted at &lt;code&gt;/var/log/nginx&lt;/code&gt;. Mounting a volume at that location prevents nginx from outputting its log data to standard output and forces it to write to the &lt;code&gt;access.log&lt;/code&gt; and &lt;code&gt;error.log&lt;/code&gt; files.&lt;/p&gt;
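&lt;p&gt;For example (these commands assume the &lt;code&gt;webserver&lt;/code&gt; Pod above is running in your cluster; the &lt;code&gt;-c&lt;/code&gt; flag selects a specific container):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Without -c, kubectl exec targets the first container (nginx)
$ kubectl exec -it webserver -- ls /var/log/nginx

# Target the sidecar explicitly
$ kubectl exec -it webserver -c sidecar-container -- ls /var/log/nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;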

&lt;h3&gt;
  
  
  Side Note on Log Aggregation
&lt;/h3&gt;

&lt;p&gt;Notice that the default behaviour of the nginx image is to write its logs to standard output so they get picked up by Docker's logging driver. Docker stores those logs under &lt;code&gt;/var/lib/docker/containers/container-ID/container-ID-json.log&lt;/code&gt; on the host machine. With more than one container (from different Pods) running on the same host and using the same location for storing their logs, you can use a &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/"&gt;DaemonSet&lt;/a&gt; to deploy a log-collector container like Filebeat or Logstash to collect those logs and send them to a log aggregator like &lt;a href="https://www.elastic.co/"&gt;Elasticsearch&lt;/a&gt;. You will need to mount &lt;code&gt;/var/lib/docker/containers&lt;/code&gt; as a &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/"&gt;hostPath volume&lt;/a&gt; in the DaemonSet's Pod to give the log-collector container access to the logs.&lt;/p&gt;
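&lt;p&gt;A minimal sketch of such a DaemonSet could look like the following (the image, tag, and names are illustrative placeholders, not a production-ready setup; a real Filebeat deployment also needs its own configuration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: collector
          image: docker.elastic.co/beats/filebeat:7.16.2   # illustrative image/tag
          volumeMounts:
            - name: docker-logs
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: docker-logs
          hostPath:
            path: /var/lib/docker/containers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;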

&lt;p&gt;The sidecar container runs alongside the nginx container in the same Pod, which enables it to access the same volume as the web server. In the above example, we used the &lt;code&gt;cat&lt;/code&gt; command to simulate sending the log data to a log aggregator every 30 seconds.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Kubernetes Patterns: The Cron Job Pattern</title>
      <dc:creator>ylcnky</dc:creator>
      <pubDate>Sun, 16 Jan 2022 20:42:21 +0000</pubDate>
      <link>https://forem.com/ylcnky/kubernetes-patterns-the-cron-job-pattern-36ko</link>
      <guid>https://forem.com/ylcnky/kubernetes-patterns-the-cron-job-pattern-36ko</guid>
      <description>&lt;h2&gt;
  
  
  Scheduled Job Challenges
&lt;/h2&gt;

&lt;p&gt;Cron jobs have been part of the UNIX system since its early versions. When GNU and Linux came into existence, crons were already part of the system. A cron job is simply a command, program, or shell script that is scheduled to run periodically. For example, a program that performs log rotation must be run from time to time.&lt;/p&gt;

&lt;p&gt;However, as the application grows in scale and high availability is needed, we need our cron jobs to be highly available as well. This raises the following challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If we have multiple hosts for high availability, which node handles the cron?&lt;/li&gt;
&lt;li&gt;What happens if multiple identical cron jobs run simultaneously?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One possible solution to these challenges is to create a higher-level "controller" that manages cron jobs. The controller is installed on each node, and a leader node gets elected. The leader node is the only one that can execute cron jobs; if it goes down, another node gets elected. However, you would need to install such a controller from a third-party vendor or write your own. Fortunately, you can execute periodic tasks by using the Kubernetes CronJob controller, which adds a time dimension to the traditional Job controller. In this article, we demonstrate the CronJob type, its use cases, and the types of problems it solves.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Hol81XOF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.webservertalk.com/wp-content/uploads/cron-jobs-656x410.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Hol81XOF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.webservertalk.com/wp-content/uploads/cron-jobs-656x410.png" alt="image" width="656" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  A Cron Job Example
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;batch/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CronJob&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sender&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*/15&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*"&lt;/span&gt;
 &lt;span class="na"&gt;jobTemplate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bash&lt;/span&gt;
           &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sender&lt;/span&gt;
           &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bash"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'Sending&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;information&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;API/database'"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
         &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;OnFailure&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The purpose of the above definition file is to create a CronJob resource that sends data to an API or a database every fifteen minutes. We used the &lt;code&gt;echo&lt;/code&gt; command from the &lt;code&gt;bash&lt;/code&gt; Docker image to simulate the sending action and keep the example simple. Let's look at the critical properties in this definition.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;.spec.schedule&lt;/code&gt;&lt;/strong&gt;: the schedule parameter defines how frequently the job should run. It uses the same cron format as Linux. If you are not familiar with the cron format, it's straightforward: there are five slots representing the &lt;strong&gt;minute, hour, day of the month, month&lt;/strong&gt;, and &lt;strong&gt;day of the week&lt;/strong&gt;. To match all values in a slot, we place a star &lt;strong&gt;(*)&lt;/strong&gt; in it.&lt;/p&gt;

&lt;p&gt;You can also use the &lt;code&gt;*/&lt;/code&gt; notation to denote &lt;strong&gt;every x units&lt;/strong&gt;. In our example, &lt;strong&gt;&lt;code&gt;*/15&lt;/code&gt;&lt;/strong&gt; means every fifteen minutes; the remaining slots contain *, so the job runs on all hours, all days of the month, all months, and all days of the week. For more information about the cron format, you can refer to &lt;a href="https://en.wikipedia.org/wiki/Cron"&gt;this documentation&lt;/a&gt;. Like the Job resource, the CronJob uses a Pod template to define the containers that the Pod hosts and the specs of those containers. &lt;strong&gt;&lt;code&gt;.spec.jobTemplate.spec.template.spec.restartPolicy&lt;/code&gt;&lt;/strong&gt; defines whether to restart the job's container; for a CronJob you can set this value to &lt;code&gt;Never&lt;/code&gt; or &lt;code&gt;OnFailure&lt;/code&gt;.&lt;/p&gt;
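&lt;p&gt;A few example schedule strings in this format (the comments describe what each would do):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;schedule: "*/15 * * * *"   # every 15 minutes
schedule: "0 3 * * *"      # every day at 03:00
schedule: "30 2 1 * *"     # 02:30 on the first day of every month
schedule: "0 9 * * 1"      # 09:00 every Monday
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;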

&lt;h2&gt;
  
  
  Potential Cases for Cron Jobs
&lt;/h2&gt;

&lt;h4&gt;
  
  
  My Cron Job Didn't Start On Time:
&lt;/h4&gt;

&lt;p&gt;In some cases, the CronJob may not get triggered at the specified time. In such an event, there are two scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We need to execute the job that didn't start, even if it is delayed.&lt;/li&gt;
&lt;li&gt;We need to execute the job that didn't start only if a specific time limit has not been crossed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In our first example, the job sends information to an API that expects this information every fifteen minutes. If the data arrives late, it's useless, and the API automatically discards it. For such cases, the CronJob resource offers the &lt;strong&gt;&lt;code&gt;.spec.startingDeadlineSeconds&lt;/code&gt;&lt;/strong&gt; parameter. If the job misses its scheduled time but has not exceeded that number of seconds, it gets executed; otherwise, it waits for the next scheduled time. Notice that if this parameter is not set, the CronJob counts all the missed jobs since the last successful execution and reschedules them, up to a maximum of 100 missed jobs. If the number of missed jobs exceeds 100, the cron job is not rescheduled.&lt;/p&gt;
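&lt;p&gt;Applied to the sender example, the parameter sits at the top level of the CronJob spec (the 200-second deadline is an arbitrary illustrative value):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;spec:
  schedule: "*/15 * * * *"
  startingDeadlineSeconds: 200   # run a late job only if it is less than 200s late
  jobTemplate:                   # (as in the example above)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;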

&lt;h4&gt;
  
  
  My CronJob is Taking so Long that It Would Span to the Next Execution Time:
&lt;/h4&gt;

&lt;p&gt;If the CronJob takes too long to finish, you may end up in a situation where another instance of the job kicks in at its scheduled time. To handle this, the CronJob resource offers the &lt;strong&gt;&lt;code&gt;.spec.concurrencyPolicy&lt;/code&gt;&lt;/strong&gt; parameter, which gives you the following options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;concurrencyPolicy: Allow&lt;/code&gt;: allows concurrent instances of the same CronJob to run. This is the default behavior.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;concurrencyPolicy: Replace&lt;/code&gt;: if the current job hasn't finished yet, kill it and start the newly scheduled one.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;concurrencyPolicy: Forbid&lt;/code&gt;: when killing a running job is undesirable, let it complete before starting a new one; the new run is skipped.&lt;/li&gt;
&lt;/ul&gt;
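
&lt;p&gt;For example, to guarantee that runs of the sender job never overlap, you could set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;spec:
  schedule: "*/15 * * * *"
  concurrencyPolicy: Forbid   # skip a new run while the previous one is still going
  jobTemplate:                # (as in the example above)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;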

&lt;h4&gt;
  
  
  I Need to Execute the CronJob Only Once:
&lt;/h4&gt;

&lt;p&gt;In Linux, we have the &lt;code&gt;at&lt;/code&gt; command, which allows you to schedule a program to run once at a later time. Similar functionality can be achieved with the CronJob resource on Kubernetes using the &lt;strong&gt;&lt;code&gt;.spec.suspend&lt;/code&gt;&lt;/strong&gt; parameter. When this parameter is set to &lt;code&gt;true&lt;/code&gt;, it suspends all subsequent CronJob executions. However, be aware that you should also use &lt;strong&gt;&lt;code&gt;startingDeadlineSeconds&lt;/code&gt;&lt;/strong&gt; with it. The reason is that if you change the suspend value back to &lt;code&gt;false&lt;/code&gt;, Kubernetes examines all the jobs that were missed while the suspend parameter was on; if the count is less than 100, they get executed. With the &lt;code&gt;startingDeadlineSeconds&lt;/code&gt; setting, you can avoid this behavior, as it prevents missed jobs from being executed once they pass the defined number of seconds.&lt;/p&gt;
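
&lt;p&gt;For instance (assuming the sender CronJob from the earlier example exists in your cluster), you could suspend and later resume it with &lt;code&gt;kubectl patch&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Stop all future runs
$ kubectl patch cronjob sender -p '{"spec":{"suspend":true}}'

# Resume them later
$ kubectl patch cronjob sender -p '{"spec":{"suspend":false}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;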

&lt;h4&gt;
  
  
  Does the CronJob Keep a History of the Jobs that Succeeded and Failed?
&lt;/h4&gt;

&lt;p&gt;Most of the time, you need to know what happened when the cron job last ran. If a database update didn't occur, an API server wasn't updated, or any other action that was supposed to happen as a result of the CronJob running didn't, you would need to know why. By default, a CronJob remembers the last three successful jobs and the last failed one. However, those values can be changed to your preference by setting the following parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;.spec.successfulJobsHistoryLimit&lt;/code&gt;&lt;/strong&gt;: if not set, it defaults to 3. It specifies the number of successful jobs to keep in history.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;.spec.failedJobsHistoryLimit&lt;/code&gt;&lt;/strong&gt;: if not set, it defaults to 1. It specifies the number of failed jobs to keep in history.&lt;/li&gt;
&lt;/ul&gt;
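
&lt;p&gt;Both fields sit at the top level of the CronJob spec; for example, to keep a longer history (the limits shown are arbitrary illustrative values):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;spec:
  schedule: "*/15 * * * *"
  successfulJobsHistoryLimit: 10   # keep the last 10 successful runs
  failedJobsHistoryLimit: 5        # keep the last 5 failed runs
  jobTemplate:                     # (as in the example above)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;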

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Kubernetes Capacity Planning</title>
      <dc:creator>ylcnky</dc:creator>
      <pubDate>Sun, 16 Jan 2022 20:29:17 +0000</pubDate>
      <link>https://forem.com/ylcnky/test-post2-4chd</link>
      <guid>https://forem.com/ylcnky/test-post2-4chd</guid>
      <description>&lt;h1&gt;
  
  
  Kubernetes Patterns: Capacity Planning
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d0hMoHTj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/2400/1%2A9bJw8mtWSQ6jxS-G-vliLw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d0hMoHTj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/2400/1%2A9bJw8mtWSQ6jxS-G-vliLw.png" alt="image" width="880" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An excellent cloud-native application design should declare any specific resources that it needs to operate correctly. Kubernetes uses those requirements to make the most efficient scheduling decisions, ensuring maximum performance and availability of the application. Additionally, knowing the application's requirements up front allows you to make cost-effective decisions regarding the hardware specifications of the cluster nodes.&lt;/p&gt;

&lt;p&gt;In this post, we will explore the best practices for declaring storage, CPU, and memory resource needs. We will also discuss how Kubernetes behaves if you don't specify some of these dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Storage Dependency
&lt;/h2&gt;

&lt;p&gt;Let's explore the most common runtime requirement of an application: &lt;strong&gt;Persistent Storage&lt;/strong&gt;. By default, any modifications made to the filesystem of a running container are lost when the container is restarted. Kubernetes provides two solutions to ensure that changes persist: &lt;code&gt;emptyDir&lt;/code&gt; volumes and &lt;code&gt;PersistentVolume&lt;/code&gt;s (PV).&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Drex9ey2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.itwonderlab.com/wp-content/uploads/2019/06/ansible-kubernetes-vagrant-tutorial-PostgreSQL.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Drex9ey2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.itwonderlab.com/wp-content/uploads/2019/06/ansible-kubernetes-vagrant-tutorial-PostgreSQL.png" alt="image" width="880" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using a PV, you can store data that does not get deleted even if the whole Pod is terminated or restarted. There are several methods by which you can provision backend storage for the cluster, depending on the environment where the cluster is hosted (on-premises or with a cloud provider). In the following exercise, we use the host's disk as the PV backend storage. Provisioning storage using PVs involves two steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creating the PV: this is the disk on which Pods &lt;strong&gt;claim&lt;/strong&gt; space. This step differs depending on the hosting environment.&lt;/li&gt;
&lt;li&gt;Creating a Persistent Volume Claim (PVC): this is where you actually provision the storage for the Pod by claiming space on the PV.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;First, let's create a PV using the host's local disk. Create the following &lt;code&gt;PV.yaml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolume&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hostpath-vol&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
 &lt;span class="na"&gt;capacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1Gi&lt;/span&gt;
 &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
 &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/tmp/data"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This definition creates a PV that uses the host's disk as the backend storage. The volume is mounted on the &lt;code&gt;/tmp/data&lt;/code&gt; directory on the host. We need to create this directory before applying the configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir /tmp/data
$ kubectl apply -f PV.yaml
persistentvolume/hostpath-vol created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we can create a PVC and make it available to our Pod to store data through a mount point. The following definition file creates both a PVC and a Pod that uses it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pvc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
 &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
 &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100Mi&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pvc-example&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;alpine&lt;/span&gt;
   &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pvc-example&lt;/span&gt;
   &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sh'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-c'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;10000'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
   &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/data"&lt;/span&gt;
       &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-vol&lt;/span&gt;

 &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-vol&lt;/span&gt;
     &lt;span class="na"&gt;persistentVolumeClaim&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;claimName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pvc&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Applying this definition file creates the PVC followed by the Pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pvc_pod.yaml
&lt;span class="go"&gt;persistentvolumeclaim/my-pvc created
pod/pvc-example created
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Any data that gets created or modified on &lt;code&gt;/data&lt;/code&gt; inside the container is persisted to the host's disk. You can check that by logging into the container, creating a file under &lt;code&gt;/data&lt;/code&gt;, restarting the Pod, and then verifying that the file still exists in the new Pod. You can also notice that files created in &lt;code&gt;/tmp/data&lt;/code&gt; on the host are immediately available to the Pod and its containers.&lt;/p&gt;

&lt;h2&gt;
  
  
  The &lt;code&gt;hostPort&lt;/code&gt; Dependency
&lt;/h2&gt;

&lt;p&gt;If you are using the &lt;code&gt;hostPort&lt;/code&gt; option, you explicitly expose the container's internal port on its hosting node, making it accessible from outside the node. A Pod that uses &lt;code&gt;hostPort&lt;/code&gt; cannot have more than one replica on the same host because of port conflicts. If no node can provide the required port, the Pod using the &lt;code&gt;hostPort&lt;/code&gt; option will never get scheduled. Additionally, this creates a one-to-one relationship between the Pod and its hosting node. So, in a cluster with four nodes, you can have at most four Pods that use the &lt;code&gt;hostPort&lt;/code&gt; option.&lt;/p&gt;
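To make the dependency concrete, a minimal Pod definition using &lt;code&gt;hostPort&lt;/code&gt; might look like the following sketch (the names here are illustrative, not from the article):

```yaml
# Illustrative only: a Pod that binds container port 80 to port 80 on the node.
# At most one replica of this Pod can run per node, since the hostPort must be free.
apiVersion: v1
kind: Pod
metadata:
  name: hostport-example   # hypothetical name
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 80         # exposes the port on the node's IP address
```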

&lt;h2&gt;
  
  
  Configuration Dependency
&lt;/h2&gt;

&lt;p&gt;Almost all applications are designed so that they can be customized through variables. For example, MySQL needs at least the initial root credentials. Kubernetes provides &lt;code&gt;configMaps&lt;/code&gt; for injecting variables into the containers inside Pods, and Secrets for supplying confidential variables like account credentials. Let's have a quick example of how to use &lt;code&gt;configMaps&lt;/code&gt; to provision variables to a Pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;kind: ConfigMap
apiVersion: v1
metadata:
  name: myconfigmap
data:
  # Configuration values can be set as key-value properties
  dbhost: db.example.com
  dbname: mydb
---
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: nginx
    envFrom:
    - configMapRef:
        name: myconfigmap
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now let's apply this configuration and ensure that we can use the environment variables in our container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;$ kubectl apply -f pod.yml
configmap/myconfigmap created
pod/mypod created
$ kubectl exec -it mypod -- bash
root@mypod:/# echo $dbhost
db.example.com
root@mypod:/# echo $dbname
mydb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;However, this creates a dependency of its own: if the &lt;code&gt;configMap&lt;/code&gt; is not available, the container might not work as expected. In our example, if the application needs a constant database connection to work, then failing to obtain the database name and host may mean it does not work at all. The same holds for Secrets, which must be available before any client containers can get spawned.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resource Dependencies
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ngEU11uf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://yqintl.alicdn.com/d6baae6e86c128cb760d227f59f3d1f72ec1964e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ngEU11uf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://yqintl.alicdn.com/d6baae6e86c128cb760d227f59f3d1f72ec1964e.png" alt="image" width="801" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So far we have discussed the different runtime dependencies that affect which node the Pod gets scheduled on, and the various prerequisites that must be available for the Pod to function correctly. However, you must also take into consideration the capacity requirements of the container.&lt;/p&gt;

&lt;h3&gt;
  
  
  Controllable and Uncontrollable Resources
&lt;/h3&gt;

&lt;p&gt;When designing an application, we need to be aware of the type of resources that this application may consume. Generally, resources can be classified into two main categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Shareable&lt;/strong&gt;: those are the resources that can be shared among different consumers and, thus, limited when required. Examples of this are CPU and network bandwidth.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Non-shareable&lt;/strong&gt;: resources that cannot be shared by nature, for example, memory. If a container tries to use more memory than its allocation, it will get killed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Declaring Pod Resource Requirements
&lt;/h3&gt;

&lt;p&gt;The distinction between both resource types is crucial for a good design. Kubernetes allows you to declare the amount of CPU and memory the Pod requires to function. There are two parameters that you can use for this declaration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;requests&lt;/strong&gt;: this is the minimum amount of resources that the Pod needs. For example, you may already know that the hosted application will fail to start if it does not have access to at least 512 MB of memory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;limits&lt;/strong&gt;: the limits define the maximum amount of resources that the Pod is allowed to consume.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's have a quick example for a scenario where an application needs at least 512 MiB of memory and a quarter of a CPU core to run. The definition file for such a Pod may look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: myapp
    resources:
      requests:
        cpu: 250m
        memory: 512Mi
      limits:
        cpu: 500m
        memory: 750Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When the scheduler tries to deploy this Pod, it searches for a node that has at least 512 MiB of free memory and 250 millicores of free CPU. If a suitable node is found, the Pod gets scheduled on it. Otherwise, the Pod stays Pending. Notice that only the requests field is considered by the scheduler when determining where to deploy the Pod.&lt;/p&gt;
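As a rough illustration of the scheduling rule above (a sketch, not the actual scheduler code), node selection based purely on the requests field can be pictured like this:

```python
# Illustrative sketch: the scheduler only compares the Pod's *requests*
# against each node's free capacity; limits play no role in placement.

def fits(node_free, requests):
    """A Pod fits a node if free CPU and memory both cover the requests."""
    return (node_free["cpu_m"] >= requests["cpu_m"]
            and node_free["memory_mi"] >= requests["memory_mi"])

def pick_node(nodes, requests):
    """Return the first node whose free capacity covers the Pod's requests."""
    for name, free in nodes.items():
        if fits(free, requests):
            return name
    return None  # no suitable node: the Pod stays Pending

nodes = {
    "node-1": {"cpu_m": 100, "memory_mi": 256},    # too small
    "node-2": {"cpu_m": 1000, "memory_mi": 2048},  # fits
}
requests = {"cpu_m": 250, "memory_mi": 512}        # 250m CPU, 512Mi memory

print(pick_node(nodes, requests))  # node-2
```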

&lt;h3&gt;
  
  
How Are the Resource Requests and Limits Calculated?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sOKXD6m6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://jaxenter.com/wp-content/uploads/2018/03/container-resource-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sOKXD6m6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://jaxenter.com/wp-content/uploads/2018/03/container-resource-1.png" alt="image" width="711" height="346"&gt;&lt;/a&gt;&lt;br&gt;
Memory is calculated in bytes, but you are allowed to use units like &lt;strong&gt;Mi&lt;/strong&gt; and &lt;strong&gt;Gi&lt;/strong&gt; to specify the requested amount. Notice that you should not specify a memory request that is higher than the amount of memory on any of your nodes; if you do, the Pod will never get scheduled. Additionally, since memory is a non-shareable resource as we discussed, a container that tries to use more memory than its limit will get killed. Pods that are created through a higher-level controller like a &lt;code&gt;ReplicaSet&lt;/code&gt; or a &lt;code&gt;Deployment&lt;/code&gt; have their containers restarted automatically when they crash or get terminated. Hence, it is always recommended that you create Pods through a controller.&lt;/p&gt;

&lt;p&gt;CPU is calculated in millicores: &lt;strong&gt;1 core = 1000 millicores&lt;/strong&gt;. So if you expect your container to need at least half a core to operate, you set the request to 500m. However, since CPU is a shareable resource, a container that tries to use more CPU than its limit does not get terminated. Rather, the Kubelet throttles the container, which may negatively affect its performance. It is advised here that you use liveness and readiness probes to ensure that your application's latency does not affect your business requirements.&lt;/p&gt;
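The unit arithmetic can be shown with a small helper (a sketch; real Kubernetes quantity parsing also accepts decimal units like M and G, which this ignores):

```python
# Sketch of converting Kubernetes resource quantities to base units.

def parse_cpu(q):
    """Convert a CPU quantity to cores: '500m' -> 0.5, '2' -> 2.0."""
    if q.endswith("m"):
        return int(q[:-1]) / 1000   # 1 core = 1000 millicores
    return float(q)

def parse_memory(q):
    """Convert a binary-suffixed memory quantity to bytes: '512Mi' -> 536870912."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(q[:-2]) * factor
    return int(q)  # plain bytes

print(parse_cpu("500m"))      # 0.5
print(parse_memory("512Mi"))  # 536870912
```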

&lt;h3&gt;
  
  
What Happens When You Do (or Do Not) Specify Requests and Limits?
&lt;/h3&gt;

&lt;p&gt;Most Pod definition examples omit the requests and limits parameters; you are not strictly required to include them when designing your cluster. However, adding or omitting requests and limits affects the quality of service that the Pod receives as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lowest Priority Pods&lt;/strong&gt;: when you do not specify requests and limits, the Kubelet deals with your Pod in a &lt;strong&gt;best-effort&lt;/strong&gt; manner. The Pod, in this case, has the lowest priority. If the node runs out of non-shareable resources, the best-effort Pods are the first to get killed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medium Priority Pods&lt;/strong&gt;: if you define both parameters and set the requests to be less than the limits, then Kubernetes manages your Pod in a &lt;em&gt;Burstable&lt;/em&gt; manner. When the node runs out of non-shareable resources, the Burstable Pods get killed only when no best-effort Pods are left running.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Highest Priority Pods&lt;/strong&gt;: your Pod is deemed top priority when you set the requests and the limits to equal values. It's as if you are saying, &lt;em&gt;I need this Pod to consume no less and no more than X memory and Y CPU&lt;/em&gt;. In this case, and in the event of the node running out of non-shareable resources, Kubernetes does not terminate those Pods until the best-effort and Burstable Pods have been terminated. Those are the highest priority Pods.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can summarize how the Kubelet deals with Pod priority as follows:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Priority&lt;/th&gt;
&lt;th&gt;Request&lt;/th&gt;
&lt;th&gt;Limit&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Best-effort&lt;/td&gt;
&lt;td&gt;Lowest&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Burstable&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;Y (higher than X)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Guaranteed&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
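The table above can be expressed as a small function (a simplified sketch: the real kubelet derives the QoS class per container and per resource, which this single comparison ignores):

```python
# Sketch of how the QoS class in the table above follows from requests/limits.

def qos_class(requests, limits):
    """Return the Pod's QoS class given its resource requests and limits."""
    if not requests and not limits:
        return "BestEffort"    # lowest priority: first to be evicted
    if requests and limits and requests == limits:
        return "Guaranteed"    # highest priority: evicted last
    return "Burstable"         # medium priority

print(qos_class({}, {}))                                    # BestEffort
print(qos_class({"memory": "512Mi"}, {"memory": "750Mi"}))  # Burstable
print(qos_class({"memory": "512Mi"}, {"memory": "512Mi"}))  # Guaranteed
```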

&lt;h2&gt;
  
  
  Pod Priority and Preemption
&lt;/h2&gt;

&lt;p&gt;Sometimes you may need more fine-grained control over which of your Pods get evicted first in the event of resource starvation. You can guarantee that a given Pod gets evicted last if you set the request and limit to equal values. However, consider a scenario where you have two Pods, one hosting your core application and another hosting its database. You need those Pods to have the highest priority among the other Pods that coexist with them. But you have an additional requirement: you want the application Pods to get evicted before the database ones do. Fortunately, Kubernetes has a feature that addresses this need: &lt;strong&gt;Pod Priority&lt;/strong&gt; and &lt;strong&gt;preemption&lt;/strong&gt;. So, back to our example scenario, we need two high-priority Pods, yet one of them is more important than the other. We start by creating a &lt;code&gt;PriorityClass&lt;/code&gt;, then a Pod that uses this &lt;code&gt;PriorityClass&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - image: redis
    name: mycontainer
  priorityClassName: high-priority
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The definition file creates two objects: the PriorityClass and a Pod.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Pods Get Scheduled Given Their PriorityClass Value?
&lt;/h2&gt;

&lt;p&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OGxT_1Hh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/411/1%2Aq37yQwjtards3pSeNUP-fQ.png" alt="image" width="411" height="339"&gt;&lt;/p&gt;

&lt;p&gt;When we have multiple Pods with different PriorityClass values, the scheduler starts by sorting pending Pods according to their priority. The highest priority Pods (those having the highest PriorityClass values) get scheduled first, as long as no other constraints prevent their scheduling.&lt;/p&gt;

&lt;p&gt;Now, what happens if there is no node with available resources to schedule a high-priority Pod? The scheduler will evict (preempt) lower-priority Pods from a node to make room for the higher-priority ones, and it will continue evicting lower-priority Pods until there is enough room to accommodate them. This feature helps you design the cluster so that the highest-priority Pods (for example, the core application and database) are never evicted unless no other option is possible. At the same time, they also get scheduled first.&lt;/p&gt;
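The eviction order can be sketched as follows (illustrative only; the real scheduler weighs many other constraints, such as affinity rules, when choosing preemption victims):

```python
# Illustrative sketch of priority-based preemption: evict the lowest-priority
# Pods first until enough memory is freed for an incoming high-priority Pod.

def preempt(running, needed_mi):
    """Return the names of Pods to evict, lowest PriorityClass value first."""
    evicted, freed = [], 0
    for pod in sorted(running, key=lambda p: p["priority"]):
        if freed >= needed_mi:
            break
        evicted.append(pod["name"])
        freed += pod["memory_mi"]
    return evicted

running = [
    {"name": "db",    "priority": 1000, "memory_mi": 512},
    {"name": "app",   "priority": 900,  "memory_mi": 512},
    {"name": "batch", "priority": 10,   "memory_mi": 256},
]
print(preempt(running, 200))  # ['batch']: the lowest-priority Pod goes first
```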

&lt;h2&gt;
  
  
  Things to Consider in your Design When Using QoS and Pod Priority
&lt;/h2&gt;

&lt;p&gt;You may be asking what happens when you use requests and limits (QoS) combined with the PriorityClass parameter. Do they overlap or override each other? The following are essential things to note about how they influence scheduling decisions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Kubelet uses QoS to control and manage the node's limited resources among the Pods. QoS eviction happens only when the node starts to run out of non-shareable resources. The Kubelet considers QoS before considering preemption priorities.&lt;/li&gt;
&lt;li&gt;The scheduler considers the PriorityClass of the Pod before the QoS. It does not attempt to evict Pods unless higher-priority Pods need to be scheduled and the node does not have enough room for them.&lt;/li&gt;
&lt;li&gt;When the scheduler decides to preempt lower-priority Pods, it attempts a clean shutdown and respects the grace period. However, it does not honor the &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/"&gt;PodDisruptionBudget&lt;/a&gt;, which may lead to disrupting the quorum of a cluster made up of several low-priority Pods.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Kubernetes Patterns: The Adapter Pattern</title>
      <dc:creator>ylcnky</dc:creator>
      <pubDate>Sun, 16 Jan 2022 20:15:45 +0000</pubDate>
      <link>https://forem.com/ylcnky/kubernetes-patterns-the-adapter-pattern-ihd</link>
      <guid>https://forem.com/ylcnky/kubernetes-patterns-the-adapter-pattern-ihd</guid>
      <description>&lt;h1&gt;
  
  
  Kubernetes Patterns: The Adapter Pattern
&lt;/h1&gt;

&lt;p&gt;All containerized applications are able to communicate with each other through a well-defined protocol, typically HTTP. Each application has a set of endpoints that expect an HTTP verb to do a specific action. It is the responsibility of the client to determine how to communicate with the server application. However, you could have a service that expects a specific response format from any application. The most common example of this service type is &lt;a href="https://en.wikipedia.org/wiki/Prometheus_(software)"&gt;Prometheus&lt;/a&gt;. Prometheus is a very well-known monitoring application that checks not only whether an application is working, but also whether it is working as expected.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9Sgpth6t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/2400/1%2APouqtAJaGIcwCcgLkO54tA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9Sgpth6t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/2400/1%2APouqtAJaGIcwCcgLkO54tA.png" alt="image" width="880" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prometheus works by querying an endpoint exposed by the target application. The endpoint must return the diagnostic data in a format that Prometheus expects. A possible solution is to configure each application to output its health data in a Prometheus-friendly way. However, you may later need to switch your monitoring solution to another tool that expects another format. Changing the application code each time you need a new health-status format is largely inefficient. Following the Adapter Pattern, we can have a sidecar container in the same Pod as the app's container. The only purpose of the sidecar (the adapter container) is to "translate" the output from the application's endpoint to a format that Prometheus (or the client tool) accepts and understands.&lt;/p&gt;
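The core of such an adapter can be illustrated in a few lines of Python (a hedged sketch: the real &lt;code&gt;nginx-prometheus-exporter&lt;/code&gt; exposes many more metrics and serves them over HTTP, which this omits):

```python
# Sketch of the adapter idea: translate nginx's stub_status text output
# into the Prometheus exposition format.
import re

STUB_STATUS = """Active connections: 1
server accepts handled requests
 3 3 3
Reading: 0 Writing: 1 Waiting: 0
"""

def to_prometheus(text):
    # Pull the numbers out of the plain-text stub_status format...
    active = int(re.search(r"Active connections:\s+(\d+)", text).group(1))
    accepts, handled, requests = map(
        int, re.search(r"(\d+)\s+(\d+)\s+(\d+)", text.split("\n")[2]).groups())
    # ...and re-emit them as Prometheus metric lines.
    return "\n".join([
        "# TYPE nginx_connections_active gauge",
        f"nginx_connections_active {active}",
        "# TYPE nginx_connections_accepted counter",
        f"nginx_connections_accepted {accepts}",
        "# TYPE nginx_http_requests_total counter",
        f"nginx_http_requests_total {requests}",
    ])

print(to_prometheus(STUB_STATUS))
```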
&lt;h2&gt;
  
  
  Scenario: Using an Adapter Container with Nginx
&lt;/h2&gt;

&lt;p&gt;Nginx has an endpoint that is used for querying the web server's status. In this scenario, we add an adapter container to transform this endpoint's output into the format required by Prometheus. First, we need to enable this endpoint on Nginx. To do this, we need to make a change to the &lt;code&gt;default.conf&lt;/code&gt; file. The following configMap contains the required &lt;code&gt;default.conf&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-conf&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;default.conf&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;server {&lt;/span&gt;
      &lt;span class="s"&gt;listen       80;&lt;/span&gt;
      &lt;span class="s"&gt;server_name  localhost;&lt;/span&gt;
      &lt;span class="s"&gt;location / {&lt;/span&gt;
          &lt;span class="s"&gt;root   /usr/share/nginx/html;&lt;/span&gt;
          &lt;span class="s"&gt;index  index.html index.htm;&lt;/span&gt;
      &lt;span class="s"&gt;}&lt;/span&gt;
      &lt;span class="s"&gt;error_page   500 502 503 504  /50x.html;&lt;/span&gt;
      &lt;span class="s"&gt;location = /50x.html {&lt;/span&gt;
          &lt;span class="s"&gt;root   /usr/share/nginx/html;&lt;/span&gt;
      &lt;span class="s"&gt;}&lt;/span&gt;
      &lt;span class="s"&gt;location /nginx_status {&lt;/span&gt;
        &lt;span class="s"&gt;stub_status;&lt;/span&gt;
        &lt;span class="s"&gt;allow 127.0.0.1;  #only allow requests from localhost&lt;/span&gt;
        &lt;span class="s"&gt;deny all;   #deny all other hosts&lt;/span&gt;
      &lt;span class="s"&gt;}&lt;/span&gt;
    &lt;span class="s"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the default &lt;code&gt;default.conf&lt;/code&gt; file that ships with the nginx Docker image. We define an endpoint &lt;code&gt;/nginx_status&lt;/code&gt; that makes use of the &lt;code&gt;stub_status&lt;/code&gt; module to display nginx's diagnostic information. Next, let's create the Nginx Pod and the adapter container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webserver&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-conf&lt;/span&gt;
    &lt;span class="na"&gt;configMap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-conf&lt;/span&gt;
      &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default.conf&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default.conf&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webserver&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/nginx/conf.d&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-conf&lt;/span&gt;
      &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;adapter&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx/nginx-prometheus-exporter:0.4.2&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-nginx.scrape-uri"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost/nginx_status"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9113&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Pod definition contains two containers: the nginx container, which acts as the application container, and the adapter container. The adapter container uses the &lt;code&gt;nginx/nginx-prometheus-exporter&lt;/code&gt; image, which does the magic of transforming the metrics that Nginx exposes on &lt;code&gt;/nginx_status&lt;/code&gt; into the Prometheus format. If you are interested in seeing the difference between both formats, do the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; webserver bash
&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;root@webserver:/# apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt &lt;span class="nb"&gt;install &lt;/span&gt;curl &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;span class="go"&gt;Defaulting container name to webserver.
Use 'kubectl describe pod/webserver -n default' to see all of the containers in this pod.
&lt;/span&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;root@webserver:/# curl localhost/nginx_status
&lt;span class="go"&gt;Active connections: 1
server accepts handled requests
 3 3 3
 Reading: 0 Writing: 1 Waiting: 0
&lt;/span&gt;&lt;span class="gp"&gt; $&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;root@webserver:/# curl localhost:9313/metrics
&lt;span class="go"&gt; curl: (7) Failed to connect to localhost port 9313: Connection refused
&lt;/span&gt;&lt;span class="gp"&gt;root@webserver:/#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;curl localhost:9113/metrics
&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;HELP nginx_connections_accepted Accepted client connections
&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;TYPE nginx_connections_accepted counter
&lt;span class="go"&gt;nginx_connections_accepted 4
&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;HELP nginx_connections_active Active client connections
&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;TYPE nginx_connections_active gauge
&lt;span class="go"&gt;nginx_connections_active 1
&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;HELP nginx_connections_handled Handled client connections
&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;TYPE nginx_connections_handled counter
&lt;span class="go"&gt;nginx_connections_handled 4
&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;HELP nginx_connections_reading Connections where NGINX is reading the request header
&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;TYPE nginx_connections_reading gauge
&lt;span class="go"&gt;nginx_connections_reading 0
&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;HELP nginx_connections_waiting Idle client connections
&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;TYPE nginx_connections_waiting gauge
&lt;span class="go"&gt;nginx_connections_waiting 0
&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;HELP nginx_connections_writing Connections where NGINX is writing the response back to the client
&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;TYPE nginx_connections_writing gauge
&lt;span class="go"&gt;nginx_connections_writing 1
&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;HELP nginx_http_requests_total Total http requests
&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;TYPE nginx_http_requests_total counter
&lt;span class="go"&gt;nginx_http_requests_total 4
&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;HELP nginx_up Status of the last metric scrape
&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;TYPE nginx_up gauge
&lt;span class="go"&gt;nginx_up 1
&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;HELP nginxexporter_build_info Exporter build information
&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;TYPE nginxexporter_build_info gauge
&lt;span class="go"&gt;nginxexporter_build_info{gitCommit="f017367",version="0.4.2"} 1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So we logged into the webserver Pod, installed curl to be able to make HTTP requests, and examined the &lt;code&gt;/nginx_status&lt;/code&gt; endpoint and the exporter's endpoint (located at :9113/metrics). Notice that in both requests, we used localhost as the server address. That's because both containers are running in the same Pod and share the same loopback address.&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
