<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Levent Ogut</title>
    <description>The latest articles on Forem by Levent Ogut (@leventogut).</description>
    <link>https://forem.com/leventogut</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F577182%2F2f1aeb5e-c1f8-4db8-9d25-791f5c09bc82.jpeg</url>
      <title>Forem: Levent Ogut</title>
      <link>https://forem.com/leventogut</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/leventogut"/>
    <language>en</language>
    <item>
      <title>Self-Service Kubernetes Namespaces Are A Game-Changer by Daniel Thiry</title>
      <dc:creator>Levent Ogut</dc:creator>
      <pubDate>Tue, 02 Mar 2021 08:13:01 +0000</pubDate>
      <link>https://forem.com/loft/self-service-kubernetes-namespaces-are-a-game-changer-by-daniel-thiry-43a7</link>
      <guid>https://forem.com/loft/self-service-kubernetes-namespaces-are-a-game-changer-by-daniel-thiry-43a7</guid>
      <description>&lt;p&gt;Many companies have adopted Kubernetes recently. However, most of them still do not realize its full potential because the actual Kubernetes usage in these organizations is very limited. Since Kubernetes has evolved dramatically, it is now &lt;a href="https://loft.sh/blog/is-kubernetes-still-just-an-ops-topic/" rel="noopener noreferrer"&gt;not only a technology for operations anymore&lt;/a&gt; but also non-ops engineers can work with it. For this, &lt;a href="https://loft.sh/blog/why-adopting-kubernetes-is-not-the-solution/" rel="noopener noreferrer"&gt;Kubernetes adoption should not end here&lt;/a&gt;, it rather just starts.&lt;/p&gt;

&lt;p&gt;So, it now often makes sense to also include engineers in the Kubernetes adoption process and, as the latest &lt;a href="https://insights.stackoverflow.com/survey/2020#technology-most-loved-dreaded-and-wanted-platforms-wanted5" rel="noopener noreferrer"&gt;Stack Overflow Developer Survey&lt;/a&gt; shows, engineers appreciate it: they want to work with Kubernetes if they are not using it yet, and they like working with it once they have started.&lt;/p&gt;

&lt;p&gt;An easy way to have more developers start working with Kubernetes is to provide them with self-service namespaces. In this article, I will describe what self-service namespaces are, why they are a game-changer for Kubernetes adoption, and how to get them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are self-service Kubernetes namespaces?
&lt;/h2&gt;

&lt;p&gt;Self-service namespaces are Kubernetes namespaces that users can create on-demand without needing to be an admin of the cluster they run on. As such, self-service namespaces run on a shared Kubernetes cluster and are created in a simple, standardized way by their users, e.g. via the UI of a &lt;a href="https://loft.sh/features/self-service-kubernetes-namespaces" rel="noopener noreferrer"&gt;self-service namespace platform&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Self-service namespaces thus give engineers easy, always-available access to Kubernetes, which is a huge advantage compared to the possible alternatives: While local Kubernetes solutions such as &lt;a href="https://github.com/kubernetes/minikube" rel="noopener noreferrer"&gt;minikube&lt;/a&gt; always have to be set up and configured by the engineers themselves and so are never readily available, giving each developer their own cluster in the cloud is very expensive. Individual clusters are also often infeasible due to restricted cloud access rights and unnecessary because simple namespaces are enough for most standard use cases. &lt;/p&gt;

&lt;p&gt;Providing namespaces in a self-service fashion instead of letting admins create them manually is therefore a decisive feature, as only this eliminates the &lt;a href="https://tanzu.s3.us-east-2.amazonaws.com/campaigns/pdfs/VMware_State_Of_Kubernetes_2020_eBook.pdf" rel="noopener noreferrer"&gt;most important dev productivity impediment of “waiting for central IT to provide access to infrastructure”&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Overall, self-service namespaces are therefore the easiest way of providing engineers with readily available Kubernetes access.&lt;/p&gt;
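&lt;p&gt;To make this concrete, the following is a minimal sketch of what a platform might create behind the scenes when a user requests a namespace; all names and labels here are hypothetical:&lt;/p&gt;

```yaml
# Hypothetical objects a self-service platform could create per request.
apiVersion: v1
kind: Namespace
metadata:
  name: dev-jane          # hypothetical per-user namespace
  labels:
    owner: jane           # hypothetical ownership label for bookkeeping
---
# Grant the requesting user admin rights inside this namespace only,
# using the built-in aggregated "admin" ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jane-admin
  namespace: dev-jane
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
```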

&lt;h2&gt;
  
  
  Benefits of self-service namespaces
&lt;/h2&gt;

&lt;p&gt;Providing a self-service namespace solution has advantages for both sides: the users (engineers) themselves and the admins.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits for namespace users:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Velocity:&lt;/strong&gt; Self-service namespaces are always available and can be created fast and easily whenever they are needed by the users. This makes them a very useful solution for a variety of engineering tasks, ranging from &lt;a href="https://loft.sh/use-cases/cloud-native-development" rel="noopener noreferrer"&gt;cloud-native development&lt;/a&gt;, to &lt;a href="https://loft.sh/use-cases/ci-cd-pipelines" rel="noopener noreferrer"&gt;CI/CD pipelines&lt;/a&gt; and &lt;a href="https://loft.sh/use-cases/ai-machine-learning-experiments" rel="noopener noreferrer"&gt;AI/ML experiments&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Independence:&lt;/strong&gt; The self-service aspect enables engineers to work independently from admins as they do not have to wait for the admins to create a work environment before they can start.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Easier Experimentation:&lt;/strong&gt; This independence also makes it possible to experiment more, as namespaces can now be thrown away and recreated by the users themselves. Users thus do not have to fear breaking anything and can treat namespaces as “cattle” rather than as “pets”. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The independence of users can be further enhanced and the fear of breaking can be reduced by using self-service &lt;a href="https://dev.to/blog/introduction-into-virtual-clusters-in-kubernetes/"&gt;virtual Clusters (vClusters)&lt;/a&gt;, which are very similar to namespaces but provide harder isolation and give engineers even more freedom to configure Kubernetes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Benefits for cluster admins:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Better Stability:&lt;/strong&gt; Since all namespaces are created in the same standardized way by the users, there is little room for human error in the namespace creation process, which improves the stability of the underlying Kubernetes cluster. Additionally, the users are encapsulated in namespaces, which prevents them from interfering with each other.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Less Effort and Pressure:&lt;/strong&gt; The independence gained by the users reduces the pressure on the cluster admins. They do not have to be always available to provide work environments for the engineers and so are no longer a bottleneck for the &lt;a href="https://dev.to/blog/kubernetes-development-workflow-3-critical-steps/"&gt;engineering workflows with Kubernetes&lt;/a&gt;. Admins only have to set up the self-service platform in the first place and then ensure that it is available and that the underlying cluster is running.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Focus on Stability and Security:&lt;/strong&gt; As the admins are not needed in the creation process of every namespace anymore, they can now focus more on the stability and security of the underlying cluster.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Providing &lt;a href="https://loft.sh/features/virtual-kubernetes-clusters" rel="noopener noreferrer"&gt;self-service virtual Clusters&lt;/a&gt; can again improve the system, as vClusters provide an even stronger form of multi-tenancy and user isolation. They also allow the users to configure even more themselves within their vCluster, so that the underlying host cluster can be very rudimentary, which means less attack surface and less room for human error, further improving stability and security. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How to get self-service Kubernetes namespaces
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Underlying Cluster
&lt;/h3&gt;

&lt;p&gt;The first part you need for a self-service namespace system is an underlying Kubernetes cluster that the namespaces are supposed to run on. If the self-service namespaces will be used for development and testing processes, it makes sense to create a new cluster that is separate from the cluster you run production workloads on.&lt;/p&gt;

&lt;p&gt;Since one of the benefits of a self-service namespace solution is that it can be used and shared by many users, the cluster needs to be a cloud-based cluster and cannot run locally (even though you may test your setup with a local cluster first and then start again with a “real” version in the cloud).&lt;/p&gt;

&lt;p&gt;Here, it does not matter if it is a cluster running in a public cloud or private cloud and if it is self-managed or managed by the cloud provider. However, it often makes sense to use a cluster that is similar to your production cluster (e.g. use AWS if your production cluster is AWS) because this makes development, testing, and other processes you want to use the self-service namespaces for more realistic.&lt;/p&gt;

&lt;h3&gt;
  
  
  User Management
&lt;/h3&gt;

&lt;p&gt;A second central component for a self-service namespace solution is permission and user management. This allows the admins to keep control of who is allowed to create namespaces and to overview who is using what.&lt;/p&gt;

&lt;p&gt;Especially in larger teams, having a Single-Sign-On solution is helpful because admins do not have to manually add the users and the users can start immediately. If you build a self-service namespace system yourself, solutions such as &lt;a href="https://github.com/dexidp/dex" rel="noopener noreferrer"&gt;dex&lt;/a&gt; may be helpful for this task.&lt;/p&gt;
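&lt;p&gt;As a hedged sketch of the permission side, plain Kubernetes RBAC can grant an SSO group the right to create namespaces; the role and group names below are hypothetical:&lt;/p&gt;

```yaml
# Hypothetical ClusterRole that allows creating and listing namespaces.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-creator
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["create", "get", "list"]
---
# Bind it to a group, e.g. one asserted by an SSO provider such as dex.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: developers-create-namespaces
subjects:
- kind: Group
  name: developers        # hypothetical group name from SSO claims
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-creator
  apiGroup: rbac.authorization.k8s.io
```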

&lt;h3&gt;
  
  
  User Limits
&lt;/h3&gt;

&lt;p&gt;While you want to enable the users to create namespaces on-demand, you also want to prevent excessive usage in terms of CPU, memory, and potentially other factors such as the number of containers, services, or ingresses. Such a limitation is very helpful to control cost, but you need to be careful not to limit the users in their work. Therefore, it should be up to the users how they allocate their allowed resources.&lt;/p&gt;

&lt;p&gt;Implementing efficient user limits is much easier with manually provisioned and statically assigned namespaces than with dynamic namespaces that are created by the users on-demand. This is because Kubernetes Resource Quotas apply on a per-namespace basis and not on a per-user basis.&lt;/p&gt;
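&lt;p&gt;For illustration, a standard ResourceQuota only ever limits a single namespace; the values and the namespace name below are hypothetical:&lt;/p&gt;

```yaml
# A ResourceQuota applies only to the namespace it lives in,
# not to the user who created that namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev-jane     # hypothetical namespace; limits stop at its border
spec:
  hard:
    limits.cpu: "4"
    limits.memory: 8Gi
    pods: "20"
    services: "5"
    count/ingresses.networking.k8s.io: "2"
```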

&lt;p&gt;However, you want to limit users and not namespaces, so you need to solve this problem to get sensible user limits. For this, you need aggregated resource quotas that span all of a user’s namespaces, which can be implemented with the open-source solution &lt;a href="https://github.com/kiosk-sh/kiosk" rel="noopener noreferrer"&gt;kiosk&lt;/a&gt;.&lt;/p&gt;
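&lt;p&gt;Based on kiosk’s AccountQuota custom resource, an aggregated quota across all of an account’s namespaces might look roughly like this (treat this as a sketch and check the kiosk documentation for the exact schema):&lt;/p&gt;

```yaml
# Sketch: an AccountQuota limits the sum of resources across all
# namespaces that belong to one kiosk Account (i.e. one user/team).
apiVersion: config.kiosk.sh/v1alpha1
kind: AccountQuota
metadata:
  name: quota-jane
spec:
  account: jane-account   # hypothetical kiosk Account name
  quota:
    hard:
      limits.cpu: "4"
      limits.memory: 8Gi
      pods: "20"
```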

&lt;h3&gt;
  
  
  Make vs. buy
&lt;/h3&gt;

&lt;p&gt;Now that you know the most essential components for a self-service namespace system, you need to decide if you want to build this system yourself or just buy an existing off-the-shelf solution.&lt;/p&gt;

&lt;p&gt;Several large organizations have already built &lt;a href="https://dev.to/blog/building-an-internal-kubernetes-platform/"&gt;an internal Kubernetes platform&lt;/a&gt; for namespaces. A very good example is Spotify, as there was even a public talk about their platform at &lt;a href="https://www.youtube.com/watch?v=vLrxOhZ6Wrg" rel="noopener noreferrer"&gt;KubeCon North America 2019&lt;/a&gt;, so you can learn from their experience. However, even when using open-source components such as &lt;a href="https://github.com/dexidp/dex" rel="noopener noreferrer"&gt;dex&lt;/a&gt; or &lt;a href="https://github.com/kiosk-sh/kiosk" rel="noopener noreferrer"&gt;kiosk&lt;/a&gt;, building your own self-service namespace platform takes a lot of effort, which is probably why mainly larger organizations or companies with very special needs go this way.&lt;/p&gt;

&lt;p&gt;In contrast, buying an existing off-the-shelf solution is feasible for organizations of any size and has the advantage that you can get started very fast without a large upfront investment. Additionally, you get a specialized service that goes beyond the minimal feature set you would probably build on your own. One example of such a ready-to-use solution is &lt;a href="https://loft.sh/" rel="noopener noreferrer"&gt;loft&lt;/a&gt;. Loft builds on kiosk internally and, besides self-service namespaces on top of any connected cluster, provides some useful additional features: It works with multiple clusters, has a GUI and a CLI as well as a &lt;a href="https://loft.sh/features/kubernetes-cost-management" rel="noopener noreferrer"&gt;sleep mode&lt;/a&gt; to save cost, and it provides a virtual cluster technology that can be used to create self-service Kubernetes work environments that are even better isolated than namespaces.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If you enable your engineers to create namespaces independently and on-demand, this will change how Kubernetes is used in your organization. Especially if you have already adopted Kubernetes and now want to spread its usage among more people in your organization, a self-service namespace system is a very good solution. It answers the fundamental question of how to provide easy and independent Kubernetes access to engineers, while it is still admin-friendly: admins can easily manage it and so have more time to care for the underlying cluster’s stability.&lt;/p&gt;

&lt;p&gt;To get a self-service namespace system, you need to decide if you want to make or buy it. Making it is the right solution for companies with very special needs, but even then, you can build upon already existing open-source components that will make your life much easier. For most companies, buying is still a more practical approach because you get a full solution from a specialized vendor without a huge upfront investment.&lt;/p&gt;

&lt;p&gt;No matter how you decide, having a self-service namespace platform will help you to take the next step towards more effective use of Kubernetes at your organization.&lt;/p&gt;



&lt;p&gt;Photo by &lt;a href="https://www.pexels.com/@thiagopatrevita?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels" rel="noopener noreferrer"&gt;Thiago Patrevita&lt;/a&gt; from &lt;a href="https://www.pexels.com/photo/four-man-cooking-1121482/?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>programming</category>
    </item>
    <item>
      <title>Kubernetes Development Environments - A Comparison by Daniel Thiry</title>
      <dc:creator>Levent Ogut</dc:creator>
      <pubDate>Wed, 24 Feb 2021 10:15:03 +0000</pubDate>
      <link>https://forem.com/leventogut/kubernetes-development-environments-a-comparison-by-daniel-thiry-4oip</link>
      <guid>https://forem.com/leventogut/kubernetes-development-environments-a-comparison-by-daniel-thiry-4oip</guid>
      <description>&lt;p&gt;Kubernetes has &lt;a href="https://loft.sh/blog/is-kubernetes-still-just-an-ops-topic/"&gt;left the state when it was mostly an ops technology behind&lt;/a&gt; and now is also very relevant for many developers. As I wrote in my blog post about &lt;a href="https://loft.sh/blog/kubernetes-development-workflow-3-critical-steps/"&gt;the Kubernetes workflow&lt;/a&gt;, the first step for every developer who starts to directly work with Kubernetes is to set up/get access to a Kubernetes development environment.&lt;/p&gt;

&lt;p&gt;A Kubernetes work environment is not only the first step but also a basic requirement for working with Kubernetes at all. Still, access to such an environment is often a problem: A &lt;a href="https://tanzu.s3.us-east-2.amazonaws.com/campaigns/pdfs/VMware_State_Of_Kubernetes_2020_eBook.pdf"&gt;VMware study&lt;/a&gt; even found that “access to infrastructure is the biggest impediment to developer productivity”. Therefore, Kubernetes development environments should have a high priority for every team that plans to use the technology.&lt;/p&gt;

&lt;p&gt;In this article, I will describe and compare four different Kubernetes development environments and explain when to use which dev environment.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Local Kubernetes Clusters&lt;/li&gt;
&lt;li&gt;Individual Cloud-Based Clusters&lt;/li&gt;
&lt;li&gt;Self-Service Namespaces&lt;/li&gt;
&lt;li&gt;Self-Service Virtual Clusters&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  6 Evaluation Criteria For Dev Environments
&lt;/h2&gt;

&lt;p&gt;To make the different Kubernetes dev environments comparable, it makes sense to first define the evaluation criteria used. I will rate every environment using the following criteria:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developer Experience:&lt;/strong&gt; How easy is it for developers to get started with and to use the environment? This includes factors such as the speed of setup, the ease of use, and the required knowledge by the developers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Admin Experience:&lt;/strong&gt; How easy is it for admins to manage the environments and the overall system? Here, I will consider the complexity of the system, the effort to manage it, and the effort to add additional users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flexibility/Realism:&lt;/strong&gt; How realistic is the dev environment compared to the production environment and how flexible is it for different use cases? A good development environment should be very similar to the production environment to avoid “it works on my machine”-problems and it should also be freely configurable and useable for many different use cases (e.g. coding, testing,…).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability:&lt;/strong&gt; How scalable is the environment itself and how scalable is the approach if many users are using the system? Especially for complex applications, a lot of computing resources are needed, so the dev environment should be able to provide them. Additionally, the general approach to providing this kind of environment to developers should be feasible also for large teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Isolation/Stability:&lt;/strong&gt; How are users isolated from each other and how vulnerable is the system? Developers should be able to work in parallel without interfering with each other, and the system they use should be stable and secure to minimize disruptive outages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost:&lt;/strong&gt; How expensive is this approach? This category should be quite self-explanatory but still is an important factor when choosing the right development environment for your team.&lt;/p&gt;

&lt;p&gt;Now that the evaluation criteria are clear, we can start with the comparison of the Kubernetes development environments:&lt;/p&gt;

&lt;p&gt;&lt;a href="/blog/images/content/kubernetes-dev-environments-comparison-table.png" class="article-body-image-wrapper"&gt;&lt;img src="/blog/images/content/kubernetes-dev-environments-comparison-table.png" alt="" title="Comparison of the different Kubernetes development environments"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Local Kubernetes Clusters
&lt;/h2&gt;

&lt;p&gt;Local Kubernetes clusters are clusters that are running on the individual computer of the developer. There are many tools that provide such an environment, such as &lt;a href="https://github.com/kubernetes/minikube"&gt;Minikube&lt;/a&gt;, &lt;a href="https://github.com/ubuntu/microk8s"&gt;microk8s&lt;/a&gt;, &lt;a href="https://github.com/rancher/k3s"&gt;k3s&lt;/a&gt;, or &lt;a href="https://github.com/kubernetes-sigs/kind"&gt;kind&lt;/a&gt;. While they are not all the same, their use as a development environment is quite comparable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Developer Experience: -
&lt;/h3&gt;

&lt;p&gt;Local development environments need to be set up by the developers themselves as they run on their own computers. This can be quite challenging, especially as every local setup is slightly different (different hardware, operating systems, configurations, etc.), which makes it harder to provide a simple setup guide. After the setup is completed, the developers are also responsible for managing their environments themselves, which they are often not used to if they have no previous Kubernetes experience.&lt;/p&gt;

&lt;p&gt;Therefore, the general developer experience is relatively bad (at least for developers without Kubernetes knowledge).&lt;/p&gt;

&lt;h3&gt;
  
  
  Admin Experience: o
&lt;/h3&gt;

&lt;p&gt;Admins are not involved in the setup and management of local Kubernetes clusters, which means they have no effort here. However, they also do not know whether the developers are able to work with their clusters and are generally excluded from the process. Still, the admins will probably have to support the developers in case of problems and questions.&lt;/p&gt;

&lt;p&gt;Overall, the admin experience is mediocre because the admins do not face their typical challenges but rather have to educate and support the developers individually.&lt;/p&gt;

&lt;h3&gt;
  
  
  Flexibility/Realism: o
&lt;/h3&gt;

&lt;p&gt;On the one hand, local clusters are always somewhat different from “real” clusters in a cloud environment. They are often pared-down Kubernetes versions that lack some features which cannot (and often need not) be replicated locally. This can be seen, for example, in the name “k3s”, an allusion to the original Kubernetes’ “k8s”. On the other hand, the engineers are able to do whatever they want with their local cluster, so they can also configure it flexibly.&lt;/p&gt;

&lt;p&gt;In sum, local clusters score high in terms of flexible configuration but low on realism, as they do not have all Kubernetes features and so cannot be used for every use case.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scalability: - -
&lt;/h3&gt;

&lt;p&gt;Since local clusters can only access the computing resources available on the engineer’s computer, they reach their limit for complex applications relatively fast. Also, the approach of letting engineers create their local clusters themselves does not scale well, as the same process has to be repeated for every engineer with few options for automation.&lt;/p&gt;

&lt;p&gt;Scalability is thus a clear weakness of local Kubernetes clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Isolation/Stability: ++
&lt;/h3&gt;

&lt;p&gt;Every developer has a separate environment that is completely disconnected from any other environment. In theory, local clusters can even be used without an internet connection. In this respect, their isolation is perfect. This disconnection also ensures that only an individual environment can fail and never all environments at the same time, which minimizes the vulnerability of this approach to providing developers with a Kubernetes environment.&lt;/p&gt;

&lt;p&gt;Isolation and security are definitely a strength of local clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost: ++
&lt;/h3&gt;

&lt;p&gt;Local Kubernetes clusters do not require (sometimes costly) cloud computing resources but only use the locally available ones. The different local Kubernetes solutions are all open-source and free to use.&lt;/p&gt;

&lt;p&gt;Using the local Kubernetes cluster for development does not have any direct cost, so it is the cheapest solution possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Individual Cloud-Based Clusters
&lt;/h2&gt;

&lt;p&gt;Individual clusters running in the cloud are the second type of Kubernetes dev environment. They can either be created by the admins, who then give individual access to the developers, or the developers can create them themselves if they have their own account with the cloud provider.&lt;/p&gt;

&lt;h3&gt;
  
  
  Developer Experience: o
&lt;/h3&gt;

&lt;p&gt;The developer experience varies and depends on the way the individual clusters are created: If developers have direct access to the cloud, e.g. with an elaborate Identity and Access Management (IAM) setup, they can create their work environment on-demand, and the setup is quite easy (especially in public clouds) as it is always the same. Still, they must do this themselves and might need some help with the management of the cluster.&lt;/p&gt;

&lt;p&gt;If admins create the clusters and distribute the access to the developers, the dev experience can become quite bad. While the management of the cluster is now taken care of, the admins become a bottleneck. Here, you will face the previously mentioned problem of waiting for central IT to provide the dev environments.&lt;/p&gt;

&lt;p&gt;Overall, in the best case, the dev experience is sufficient if developers have direct cloud access.&lt;/p&gt;

&lt;h3&gt;
  
  
  Admin Experience: - -
&lt;/h3&gt;

&lt;p&gt;No matter how the developers get their clusters, the admin experience is always quite bad. If every developer has their own cloud account, the admins will have a hard time getting an overview of the whole system (What is still used? Who is using what?). In this case, they also have to support the developers in managing the clusters. Since the number of clusters grows proportionally with the number of engineers, the effort also grows with the team size.&lt;/p&gt;

&lt;p&gt;In the case of central creation and distribution of the clusters by the admins, the administrators will also have a lot of work. They will have to answer all developer requests for clusters and configuration changes and have to be always available because they are critical for the developers’ performance. In general, many clusters mean more management effort for admins.&lt;/p&gt;

&lt;p&gt;The individual cloud-based cluster approach is a bad solution from the admin’s perspective and necessarily leads to a lot of work on their side that can even become impossible for them to handle.&lt;/p&gt;

&lt;h3&gt;
  
  
  Flexibility/Realism: ++
&lt;/h3&gt;

&lt;p&gt;Since the production systems usually also run in Kubernetes in the cloud, having such an environment for development is perfectly realistic. The individual environments can also be freely configured, so they exactly match the needs of the developers or are identical to the production system’s settings.&lt;/p&gt;

&lt;p&gt;Individual cloud-based clusters are the best solution to get a highly realistic development environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scalability: o
&lt;/h3&gt;

&lt;p&gt;In terms of scalability, it helps that the clusters are running in a cloud environment, which allows you to scale them up almost infinitely. Still, the scalability criterion also includes the scalability of the general approach for larger teams, and here individual clusters can reach a limit as the admin effort grows with the team size.&lt;/p&gt;

&lt;p&gt;Scalability in terms of computing resources is not a problem for individual clusters in the cloud but rolling out such a system in larger organizations will often be infeasible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Isolation/Stability: +
&lt;/h3&gt;

&lt;p&gt;Having isolation of developers on a cluster level is very secure. If you are using a public cloud, the isolation of developers is almost the same as the isolation of different companies, which of course is a high priority for the cloud providers.&lt;/p&gt;

&lt;p&gt;100% stability and isolation will probably never be reached in the cloud, but they are as good as possible with individual clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost: - -
&lt;/h3&gt;

&lt;p&gt;Running many clusters is very expensive. This is due to several factors: First, you will have a lot of redundancy because every cluster has its own control plane. Second, having oversized or unused clusters is almost inevitable with this approach, as either the developers are responsible for right-sizing and shutting down clusters, or the admins have to do it centrally without the oversight and knowledge of what is still used. &lt;/p&gt;

&lt;p&gt;Additionally, dev environments are also only used if developers are working, so many clusters will probably run idle at night, during holidays, and weekends. Finally, public cloud providers charge a cluster management fee that needs to be paid for every cluster, i.e. for every developer in this case.&lt;/p&gt;

&lt;p&gt;Individual clusters for every engineer in the cloud are a very expensive approach to provide Kubernetes development environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Self-Service Namespaces
&lt;/h2&gt;

&lt;p&gt;Instead of giving every developer a whole cluster, it is also possible to just give them Kubernetes namespaces. Again, these can either be created centrally by the admins or the developers can be provided with a tool to create &lt;a href="https://loft.sh/blog/self-service-kubernetes-namespaces-are-a-game-changer/"&gt;self-service namespaces&lt;/a&gt; on-demand. Providing them centrally comes with many of the disadvantages I already mentioned for individual clusters, so I will focus on the self-service namespace approach here.&lt;/p&gt;

&lt;h3&gt;
  
  
  Developer Experience: +
&lt;/h3&gt;

&lt;p&gt;As engineers can create the namespaces themselves, they are independent of the admins and never have to wait to get a Kubernetes development environment. At the same time, the namespaces are running on a cluster that is managed by admins, so the developers do not have to care for the management of the environment. Namespaces as constructs within clusters will often be enough for simpler development work, so developers will be able to do most standard tasks and are only limited in some situations, e.g. when they need CRDs or want to install Helm charts that use RBAC.&lt;/p&gt;
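&lt;p&gt;The limitation mentioned above comes from Kubernetes’ split between namespaced and cluster-scoped resources; the following hypothetical example illustrates what a namespace user can and cannot create:&lt;/p&gt;

```yaml
# Allowed: a Role is namespaced, so a user with admin rights in
# "dev-jane" (hypothetical namespace) can create it.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-reader
  namespace: dev-jane
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list"]
# Not possible with namespace-only permissions: cluster-scoped objects
# such as CustomResourceDefinitions or ClusterRoles. Helm charts that
# install CRDs or cluster-wide RBAC therefore fail for such users.
```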

&lt;p&gt;Therefore, the developer experience with self-service namespaces is very good for “standard” development tasks and developers without special Kubernetes configuration requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Admin Experience: +
&lt;/h3&gt;

&lt;p&gt;Admins need to &lt;a href="https://loft.sh/blog/building-an-internal-kubernetes-platform/"&gt;set up an internal, self-service Kubernetes platform&lt;/a&gt; once, which may take some time if they build it from scratch, as companies such as &lt;a href="https://www.youtube.com/watch?v=vLrxOhZ6Wrg"&gt;Spotify&lt;/a&gt; did. Alternatively, it is also possible to buy solutions that add this self-service namespace feature to any cluster, such as &lt;a href="https://loft.sh/features/self-service-kubernetes-namespaces"&gt;Loft&lt;/a&gt;. In any case, the admins can focus on other tasks such as the security and stability of the underlying cluster once the system is properly set up. Additionally, it is relatively easy to get an overview of the whole system as everything runs in just one cluster.&lt;/p&gt;

&lt;p&gt;Self-service namespaces are an admin-friendly solution that requires some initial setup effort.&lt;/p&gt;

&lt;h3&gt;
  
  
  Flexibility/Realism: -
&lt;/h3&gt;

&lt;p&gt;Since namespaces are running on a shared Kubernetes cluster, it is not possible to configure everything individually by the developers. For example, all engineers have to use the same Kubernetes version and cannot modify cluster-wide resources. Still, namespaces are running in a cloud environment that resembles the production environment, which at least makes namespaces a relatively realistic work environment.&lt;/p&gt;

&lt;p&gt;Overall, namespaces may restrict the flexibility of developers in some situations but still provide a reasonably realistic dev environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scalability: ++
&lt;/h3&gt;

&lt;p&gt;The scalability of a self-service namespace system is very good in both aspects: It is possible to scale up the resources of the namespaces because they are running in the cloud (it is also possible to limit developers to prevent excessive usage, of course). At the same time, it is also no problem to add additional users to the system, especially if it provides a &lt;a href="https://loft.sh/features/kubernetes-auth-sso"&gt;Single-Sign-On option&lt;/a&gt;.&lt;/p&gt;
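&lt;p&gt;The usage limits mentioned above can be enforced with Kubernetes-native quotas. The following is a minimal sketch of a per-namespace ResourceQuota; the namespace name and all values are illustrative assumptions, not recommendations:&lt;/p&gt;

```yaml
# Caps what a single developer namespace may consume; all values are illustrative.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev-alice   # hypothetical developer namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```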

&lt;p&gt;Namespaces are an efficient way of providing many developers with a Kubernetes environment that can be flexibly scaled up or down.&lt;/p&gt;

&lt;h3&gt;
  
  
  Isolation/Stability: -
&lt;/h3&gt;

&lt;p&gt;Namespaces are a native solution for &lt;a href="https://loft.sh/blog/kubernetes-multi-tenancy-best-practices-guide/"&gt;Kubernetes multi-tenancy&lt;/a&gt; but the isolation is not perfect and rather a form of soft multi-tenancy. However, since the tenants (developers) are trusted, this is not necessarily a problem for development environments. &lt;br&gt;
Additionally, namespaces share the same underlying cluster, which means that all namespaces fail if the cluster is down, so the stability of the cluster is essential.&lt;/p&gt;

&lt;p&gt;Namespaces are a Kubernetes-native isolation solution, but the isolation is certainly not perfect. However, if the underlying cluster is running solidly, namespaces are still a good solution for trusted engineers within an organization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost: o
&lt;/h3&gt;

&lt;p&gt;To get the self-service experience, you might need to buy self-service namespace software. Additionally, namespaces running in a cloud environment are not free as they also require cloud computing resources. However, the underlying cluster and its resources can be shared by many developers, which drives utilization up and prevents unnecessary redundancies. It is also easier to get a central overview of what is running idle, so idle namespaces can be shut down. This process can even be automated by a &lt;a href="https://loft.sh/docs/self-service/sleep-mode"&gt;sleep mode&lt;/a&gt;. &lt;/p&gt;
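&lt;p&gt;To make the cost argument concrete, here is a small back-of-the-envelope calculation comparing one always-on cluster per developer with a single shared cluster plus a sleep mode. All prices and usage figures are made-up placeholders; only the structure of the comparison matters:&lt;/p&gt;

```python
# Back-of-the-envelope comparison of individual clusters vs. one shared
# cluster with a sleep mode. All prices and usage figures are illustrative.

DEVELOPERS = 20
HOURS_PER_MONTH = 730
ACTIVE_HOURS = 160          # hours/month a dev environment is actually in use
NODE_COST_PER_HOUR = 0.10   # compute cost backing one dev environment
CONTROL_PLANE_FEE = 73.0    # flat monthly fee per managed cluster (illustrative)

# Option A: one always-on cluster per developer.
individual = DEVELOPERS * (CONTROL_PLANE_FEE + HOURS_PER_MONTH * NODE_COST_PER_HOUR)

# Option B: one shared cluster; namespaces "sleep" (scale to zero) when idle,
# so compute is only paid for the active hours.
shared = CONTROL_PLANE_FEE + DEVELOPERS * ACTIVE_HOURS * NODE_COST_PER_HOUR

print(f"individual clusters: ${individual:.2f}/month")
print(f"shared + sleep mode: ${shared:.2f}/month")
```

With these placeholder numbers, the shared setup is cheaper by almost an order of magnitude, mostly because idle hours and the per-cluster control-plane fees stop multiplying with the team size.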

&lt;p&gt;Overall, namespaces are a very cost-efficient approach to provide developers with Kubernetes access.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Self-Service Virtual Clusters
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://loft.sh/blog/introduction-into-virtual-clusters-in-kubernetes/"&gt;Virtual clusters (vClusters)&lt;/a&gt; are a solution that lets you create Kubernetes clusters within a Kubernetes cluster. Like namespaces, virtual clusters run on a single physical cluster and can be created on-demand by developers if they have access to a &lt;a href="https://loft.sh/features/virtual-kubernetes-clusters"&gt;vCluster platform&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Developer Experience: ++
&lt;/h3&gt;

&lt;p&gt;The developer experience with virtual clusters is similar to namespaces. Developers can easily create them on-demand and are thus independent of central IT, but still do not have to manage the underlying cluster themselves. At the same time, vClusters feel like “real” clusters for developers, so they will usually not be limited by them at all.&lt;/p&gt;

&lt;p&gt;Therefore, the dev experience with vClusters is as good as with namespaces and even gives the developers more freedom to do and configure what they want.&lt;/p&gt;

&lt;h3&gt;
  
  
  Admin Experience: ++
&lt;/h3&gt;

&lt;p&gt;Considering the admin experience, it is again very similar for self-service namespaces and vClusters. After the initial setup, the management effort for admins is very limited, so they can focus on other tasks again. However, compared to namespaces, vClusters isolate users better and so make it less likely that developers crash the underlying cluster. Additionally, most of the Kubernetes configuration and installation can happen in the vCluster, so the underlying cluster can be very simple and only has to provide the basic features, which makes the admins’ job even easier.&lt;/p&gt;

&lt;p&gt;A self-service vCluster platform thus also provides a very smooth admin experience once it has been set up properly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Flexibility/Realism: +
&lt;/h3&gt;

&lt;p&gt;Virtual Clusters run in the cloud, which makes them quite realistic dev environments, especially because the developers can configure them individually to fit their needs. However, vClusters are not exactly the same as real clusters, so the realism is not as perfect as with individual clusters.&lt;/p&gt;

&lt;p&gt;Overall, vClusters can be flexibly configured to meet the requirements of different use cases. Since they are a virtual construct, there are still some minor differences to physical clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scalability: ++
&lt;/h3&gt;

&lt;p&gt;The scalability of vClusters is as good as that of namespaces. vClusters can draw on virtually unlimited computing resources in the cloud. The self-service provisioning on a platform that runs on a single cluster also makes it possible to use vClusters with many engineers.&lt;/p&gt;

&lt;p&gt;A self-service vCluster solution will fulfill all needs in terms of scalability for development environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Isolation/Stability: o
&lt;/h3&gt;

&lt;p&gt;The isolation of virtual clusters is better than the isolation on a namespace-level, but vClusters are still a form of Kubernetes multi-tenancy and as such, the vClusters share a common physical cluster. A benefit of virtual clusters is that the underlying cluster can be very basic, which makes it easier to get it stable.&lt;/p&gt;

&lt;p&gt;Overall, the isolation of vClusters is decent and the stability of the whole system can be quite good. However, a lot of the stability is determined by the stability of the underlying cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost: o
&lt;/h3&gt;

&lt;p&gt;A virtual cluster platform is not free because it requires cloud computing resources and software for the platform. In this category, vClusters are again very similar to namespaces: The cluster sharing improves utilization and makes it easier to get an overview and to shut down unused virtual clusters, which can again even be automated by a sleep mode.&lt;/p&gt;

&lt;p&gt;A virtual cluster platform is as cost-efficient as a namespace platform, but no cloud-based solution will be completely free.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to use which dev environment
&lt;/h2&gt;

&lt;p&gt;After having described the four different types of Kubernetes development environments, the question remains which environment is right for your situation.&lt;/p&gt;

&lt;p&gt;From my experience, many companies and engineers start with local dev environments. The fact that they are free and run on local computers reduces the initial hurdle as no complicated budget approvals are needed. Local environments are also a good solution for hobby developers and small applications but also for Kubernetes experts who know how to handle and set up these environments.&lt;/p&gt;

&lt;p&gt;As organizations progress on &lt;a href="https://dev.to/blog/the-journey-of-adopting-cloud-native-development/"&gt;their cloud-native journey&lt;/a&gt;, they want to roll out Kubernetes to more developers who do not have any experience with it. These organizations often start with the “obvious” solution: just give every developer their own cluster. After some time, they then often realize that this approach is very expensive and becomes more complex as the number of developers grows. Individual cloud-based clusters are therefore often just a temporary solution, unless the number of developers is low enough that the cost does not matter too much.&lt;/p&gt;

&lt;p&gt;To avoid the high cost and the management effort for larger teams, many organizations want to provide developers with either namespaces or virtual clusters (virtual clusters are relatively new, so namespaces are still more common). However, as these companies have realized that the scalability of the approach matters a lot, they want to do this in an automated fashion and therefore either start developing their own internal Kubernetes platforms &lt;a href="https://www.youtube.com/watch?v=vLrxOhZ6Wrg"&gt;as Spotify did&lt;/a&gt; or buy existing solutions, such as &lt;a href="https://loft.sh/"&gt;Loft&lt;/a&gt;. Whether namespaces are sufficient or virtual clusters are the better solution then depends on the complexity of the application and on the expertise and requirements of the developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As more companies want their developers to work with Kubernetes, more developers need access to a Kubernetes work environment. There are several options for this, each with its own strengths and weaknesses.&lt;/p&gt;

&lt;p&gt;While local development clusters are a good and cheap starting point, they are often not the right solution for inexperienced developers or larger organizations.&lt;/p&gt;

&lt;p&gt;These organizations then turn to the “obvious” solution of individual cloud-based clusters, which are unbeatable in terms of flexibility and realism but are also hard to manage for admins and can become very expensive.&lt;/p&gt;

&lt;p&gt;Finally, shared clusters, which are the basis for either self-service namespaces or virtual clusters, are a solution that combines cost-efficiency with a good developer and admin experience. Although these solutions are not free and require some initial setup effort, they are a long-term solution even for larger companies.&lt;/p&gt;



&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@rawfilm?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;RawFilm&lt;/a&gt; on &lt;a href="https://unsplash.com/?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>programming</category>
      <category>devops</category>
    </item>
    <item>
      <title>Virtual Kubernetes Clusters In Production by Daniel Thiry</title>
      <dc:creator>Levent Ogut</dc:creator>
      <pubDate>Mon, 22 Feb 2021 08:00:30 +0000</pubDate>
      <link>https://forem.com/loft/virtual-kubernetes-clusters-in-production-by-daniel-thiry-2j0</link>
      <guid>https://forem.com/loft/virtual-kubernetes-clusters-in-production-by-daniel-thiry-2j0</guid>
      <description>&lt;p&gt;The idea of &lt;a href="https://dev.to/blog/introduction-into-virtual-clusters-in-kubernetes/"&gt;virtual Kubernetes clusters (vClusters)&lt;/a&gt; is to spin up a fully-functional cluster within another Kubernetes clusters to provide an efficient abstraction and direct Kubernetes access on top of a shared underlying cluster.&lt;/p&gt;

&lt;p&gt;I have already described the &lt;a href="https://dev.to/blog/virtual-clusters-for-kubernetes-benefits-use-cases/"&gt;benefits and use cases of such virtual clusters for development&lt;/a&gt;, and specifically for &lt;a href="https://dev.to/blog/kubernetes-virtual-clusters-as-development-environments/"&gt;cloud-native development&lt;/a&gt;, &lt;a href="https://dev.to/blog/kubernetes-virtual-clusters-for-ci-cd-testing/"&gt;CI/CD&lt;/a&gt;, and &lt;a href="https://dev.to/blog/kubernetes-virtual-clusters-for-ai-ml-experiments/"&gt;ML/AI experimentation&lt;/a&gt;. However, since vClusters are similarly flexible as regular Kubernetes clusters and namespaces, they can also be used in many situations apart from development.&lt;/p&gt;

&lt;p&gt;In this article, I want to describe some of these production use cases that I believe virtual clusters are most valuable for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Replacing clusters by vClusters to save cost
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.to/blog/kubernetes-cost-savings/"&gt;Running fewer Kubernetes clusters is generally cheaper&lt;/a&gt; because fewer clusters lead to less redundancy and higher utilization. Moreover, fewer clusters are easier for the cluster admins to manage.&lt;/p&gt;

&lt;p&gt;However, people are still not running everything on just one cluster, and for good reasons. Often, the reason is that everything would have to run in namespaces, which do not provide sufficient isolation in many situations. Compared to namespaces, vClusters improve the isolation of tenants and so facilitate the implementation of a harder form of &lt;a href="https://dev.to/blog/kubernetes-multi-tenancy-best-practices-guide/"&gt;Kubernetes multi-tenancy&lt;/a&gt;. This allows you to pool parts of an application in one cluster that are currently separated due to isolation concerns. (Of course, this does not mean that you should run everything on just one cluster.)&lt;/p&gt;

&lt;p&gt;Another reason to separate applications on different clusters is that they require individual cluster configurations. While this is not possible with namespaces, virtual clusters can be configured differently and flexibly; e.g. it is even possible to use different Kubernetes versions for different virtual clusters. This again lets you pool applications, reducing the number of clusters and thus saving cost.&lt;/p&gt;

&lt;p&gt;A virtual cluster solution might therefore also be interesting for hosting and cloud providers who want to offer their customers flexible Kubernetes environments: with vClusters, they can sell such an experience at a lower price, as not every customer needs an actual physical cluster but just full Kubernetes access.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I wrote a dedicated article about further options to &lt;a href="https://dev.to/blog/reduce-kubernetes-cost/"&gt;reduce your Kubernetes cost&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Replacing namespaces by vClusters to improve stability
&lt;/h2&gt;

&lt;p&gt;When it is not strictly necessary to separate (parts of) applications on clusters, namespaces are often used as a cheaper and easier approach to run the software. In these situations, the namespaces could be replaced by virtual clusters because they help to improve the security of the system due to their better isolation and provide a more flexible configuration of the different parts of the application.&lt;/p&gt;

&lt;p&gt;Virtual clusters thus not only allow you to share clusters more often but are also useful for optimizing your production system. Therefore, vClusters are often “the better namespace”.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reaching higher scalability
&lt;/h2&gt;

&lt;p&gt;If you run very large clusters with a lot of network traffic and load, you can reach a technical limit of scalability. Naturally, this mostly happens in production use cases with many users or customers.&lt;/p&gt;

&lt;p&gt;In such a situation, you usually need additional clusters to handle the huge amounts of traffic. However, just because, for example, the Kubernetes API server or the etcd of your cluster reaches its limit, running additional clusters is not necessarily optimal. Instead, it could be more efficient to run multiple API servers or etcds to handle the load.&lt;/p&gt;

&lt;p&gt;Since each virtual cluster has its own API server and etcd, it is possible to implement efficient &lt;a href="https://loft.sh/use-cases/kubernetes-cluster-sharding" rel="noopener noreferrer"&gt;Kubernetes cluster sharding&lt;/a&gt; with vClusters. Requests to a virtual cluster are generally handled by the virtual cluster itself, which reduces the load on the underlying cluster because only some requests are passed through to it. You can thus “spread” the load over different virtual clusters, avoiding an early bottleneck in the physical cluster.&lt;/p&gt;

&lt;p&gt;Therefore, it is possible to push the technical limits of how much traffic your cluster can handle, which allows you to save costs and optimize your system by only adding clusters if it is really sensible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Providing managed products easier and cheaper
&lt;/h2&gt;

&lt;p&gt;While the first production use cases for virtual clusters were rather general, there are also more specific scenarios they are very useful for. One of these is the &lt;a href="https://loft.sh/use-cases/cloud-native-managed-products" rel="noopener noreferrer"&gt;provisioning of a managed product to your customers&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There are generally two main technical approaches to provide your customers with a Software-as-a-Service (SaaS) experience: &lt;a href="https://blog.scaleway.com/saas-multi-tenant-vs-multi-instance-architectures/" rel="noopener noreferrer"&gt;Multi-tenancy and multi-instances&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you provide your SaaS-product with a multi-tenancy approach, you run one instance of a system and a database that is shared by different customers. Here, vClusters can again help you by providing better tenant isolation and by allowing more scalability to improve the efficiency of your system.&lt;/p&gt;

&lt;p&gt;With the multi-instance approach, every customer gets their own instance of your system and database. Virtual clusters allow you to do this quite easily as you can replicate your system in virtual clusters. So, every customer gets a dedicated instance of your software, but all instances are still running on a single cluster, which reduces the management effort for your system. Since no new cluster has to be started for each customer, the setup of a new instance is also faster, and the whole system runs more efficiently as the underlying computing resources can be shared and thus do not sit idle.&lt;/p&gt;

&lt;p&gt;Overall, vClusters are therefore useful to provide SaaS products to your customers no matter which architectural approach you want to use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running interactive demos fast and easy
&lt;/h2&gt;

&lt;p&gt;A last production use case for virtual Kubernetes clusters is &lt;a href="https://loft.sh/use-cases/live-demos" rel="noopener noreferrer"&gt;spinning up interactive demos for your product&lt;/a&gt;. Even though demos are not really production systems, their use and functionality are closer to classical production scenarios than to development, which is why I want to mention them here.&lt;/p&gt;

&lt;p&gt;If you sell an on-premise product, your customers (especially in B2B) usually want to see a demo before they decide to buy it or even before they set it up themselves on their infrastructure. Such demo versions of your product can either be used by salespeople or by the customers themselves.&lt;/p&gt;

&lt;p&gt;In any case, it usually makes sense if the demo always starts in the same clean state and is available anytime, i.e. does not have to be provisioned by admins first.&lt;br&gt;
These requirements make demos a very good use case for virtual clusters: vClusters can be started from scratch within just a few seconds, instead of the several minutes that “real” clusters need to start in cloud environments. They are also always available as long as the underlying cluster is running.&lt;/p&gt;

&lt;p&gt;Additionally, as only one underlying cluster is needed for many vClusters, several salespeople (or customers) can start a demo independently on-demand, so there is not just one demo application that always has to be reset after each demonstration. When the demo is finished, the whole demo environment, i.e. the vCluster, can simply be deleted, a process that can even be automated with an &lt;a href="https://loft.sh/docs/self-service/sleep-mode" rel="noopener noreferrer"&gt;automatic delete setting&lt;/a&gt;. Alternatively, it is also possible to keep the environment if it will be used again to continue at the same stage.&lt;/p&gt;

&lt;p&gt;In sum, running demos in virtual clusters is thus cheap due to the shared infrastructure and the automatic cleanup and easy due to the independent environments that are available on-demand. This makes the use of demos during your sales activities more attractive, which will ultimately help to drive sales and revenues.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Virtual clusters have many benefits that make them useful not only for development scenarios but also for various production settings. They can be used to improve your system technically by providing more isolation and thus stability or by increasing the limits to the scalability of your clusters. Additionally, virtual clusters are efficient in reducing your cost if you can replace whole clusters with vClusters. They can also support you in solving the practical problems of how to efficiently provide managed products and product demos to your customers.&lt;/p&gt;

&lt;p&gt;Overall, and given the abundance of Kubernetes applications, I believe there are many more use cases for virtual clusters in production that have not even been explored yet.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://www.pexels.com/@wolfgang-1002140?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels" rel="noopener noreferrer"&gt;Wolfgang&lt;/a&gt; from &lt;a href="https://www.pexels.com/photo/photo-of-people-watching-a-concert-2747449/?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Virtual Clusters For Kubernetes - Benefits &amp; Use Cases</title>
      <dc:creator>Levent Ogut</dc:creator>
      <pubDate>Fri, 19 Feb 2021 08:09:58 +0000</pubDate>
      <link>https://forem.com/loft/virtual-clusters-for-kubernetes-benefits-use-cases-16hb</link>
      <guid>https://forem.com/loft/virtual-clusters-for-kubernetes-benefits-use-cases-16hb</guid>
      <description>&lt;p&gt;Virtual Kubernetes Clusters (vClusters) have the potential to bring Kubernetes adoption to the next level. They are running in a physical Kubernetes cluster and can be used in the same way as normal clusters, but still are just a virtual construct. (Learn more about &lt;a href="https://loft.sh/blog/introduction-into-virtual-clusters-in-kubernetes/" rel="noopener noreferrer"&gt;how virtual Clusters work here&lt;/a&gt;). Similar to Virtual Machines that revolutionized the use of physical servers, virtual Kubernetes clusters have some benefits compared to physical clusters, which make them particularly useful for some scenarios.&lt;/p&gt;

&lt;p&gt;In this article, I will describe the benefits of virtual Kubernetes clusters and provide some use cases in which vClusters are advantageous to other solutions such as many individual clusters or namespace-based multi-tenancy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Virtual Kubernetes Clusters
&lt;/h2&gt;

&lt;p&gt;The benefits of virtual clusters for Kubernetes are mainly based on two characteristics: Sharing of a physical cluster and isolation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cluster Sharing / Multi-Tenancy
&lt;/h3&gt;

&lt;p&gt;Since vClusters are a virtual abstraction within Kubernetes, it is possible to run many vClusters on just a single physical cluster, which has the following advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reduced Management Effort.&lt;/strong&gt; Since there is only one physical cluster to maintain, the administrative effort is significantly reduced by virtual clusters. This becomes especially clear when comparing it to another alternative that would lead to a similar outcome from a user perspective: Instead of running virtual clusters in one physical cluster, it is possible to run many physical clusters that all have to be maintained, which can become infeasible pretty fast even in only mid-sized teams. Additionally, the physical cluster can be configured in a pretty "basic" way without extensive additional installations as most of this will happen on the level of the vCluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduced Cost.&lt;/strong&gt; Besides the reduced management effort, which also results in cost reductions, virtual clusters utilize computing resources more efficiently because the resources are shared by the tenants. Again, similar efficiency improvements were gained by introducing virtual machines to a physical server infrastructure. This cost efficiency is further increased by the disposable nature of virtual clusters, so they can be “thrown away” (shut off) when they are not needed. Alternatively, they can be “put to sleep” (scaled down), a process that can even be automated, e.g. with a sleep mode that is provided by solutions such as &lt;a href="https://loft.sh/" rel="noopener noreferrer"&gt;Loft&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Isolation / Hard Multi-Tenancy
&lt;/h3&gt;

&lt;p&gt;Since virtual Kubernetes clusters provide a harder form of multi-tenancy, i.e. users are strictly isolated from each other, they have some additional benefits, especially in comparison to a namespace-based multi-tenancy approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stable System.&lt;/strong&gt; Using virtual Kubernetes clusters does not compromise the stability of the system. Even if a virtual cluster fails, the underlying physical cluster is usually not affected. (Only in some extreme cases can the failure of a virtual cluster lead to a failure of the physical cluster, similar to how a VM can, in rare cases, break the underlying physical machine.) This holds regardless of the source of the error, which can come from within the cluster, e.g. an engineer accidentally breaking something, or from outside, e.g. a malicious attack on the system. This makes the whole system more resilient and helps to implement a reasonable microservice architecture with a true separation of concerns for the individual services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full Flexibility.&lt;/strong&gt; While every engineer is working or every microservice is running on the same physical cluster, their virtual clusters are completely independent. This allows the vClusters to also be configured in very different ways. For example, the virtual clusters can have different Kubernetes versions or different API server configurations. This again enables engineers to work freely and to use whatever is best for their use case without having to consider other requirements or the underlying physical Kubernetes cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, virtual Kubernetes clusters are the only approach that combines efficiency with a stable, flexible system, thanks to their hard multi-tenancy with Kubernetes-native resources. The alternative of using many individual clusters resolves the isolation issue but creates a huge cost burden, while namespace-based soft multi-tenancy keeps cost reasonable but can only provide limited stability and flexibility. Virtual clusters are thus so far the only option for companies that want the best of both worlds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;p&gt;Since virtual clusters mostly behave like regular Kubernetes clusters, their scope of application is similarly broad. Here, I will concentrate on three main use cases that are not covered well by alternative solutions with namespaces or a multitude of physical clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  CI/CD &amp;amp; Testing
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Requirements:&lt;/strong&gt; For CI/CD and testing scenarios in a Kubernetes environment, the engineers need to have access to Kubernetes whenever they need it. To keep cost low, the environment should only be running if it is actually used. Engineers should also be able to modify the Kubernetes configuration to get more realistic tests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; Creating a Kubernetes cluster takes some time, even in public clouds. Since computing resources are costly and many CI/CD pipelines are billed by the minute, it is often not feasible to wait 30 minutes until a cluster is started. This would also slow down the feedback loop for the engineers’ actions and interrupt their workflow. Therefore, a cluster is often simply shared by engineers for testing, which again can lead to waiting times if a colleague is using it at the same time. It is also highly inefficient, as such a cluster is always running and costing money even if no tests or pipelines are running.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Virtual Cluster Solution:&lt;/strong&gt; Virtual clusters have the advantage that they can be started in a few seconds, so engineers can create them on-demand. Since they are fully fledged clusters that are only used by one engineer or application at a time, they can be freely configured and adapted to the individual situation. After the completion of a test or CI/CD process, they can also be thrown away without any issues, which keeps the cost for computing resources as low as possible. Virtual Kubernetes clusters are therefore a perfect fit for CI/CD and testing scenarios.&lt;/p&gt;
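&lt;p&gt;The create-test-throw-away flow described above can be sketched with plain kubectl against a namespace; a vCluster platform would follow the same pattern, just creating a virtual cluster instead of a namespace. The script, manifest path, and test entrypoint below are hypothetical:&lt;/p&gt;

```shell
#!/usr/bin/env sh
# Ephemeral test environment per CI run (sketch; names and paths are hypothetical).
set -e

NS="ci-test-${CI_PIPELINE_ID:-local}"

kubectl create namespace "$NS"
# Make sure the environment is cleaned up even if the tests fail.
trap 'kubectl delete namespace "$NS" --wait=false' EXIT

kubectl apply -n "$NS" -f k8s/manifests/        # hypothetical manifest directory
kubectl wait -n "$NS" --for=condition=available deployment --all --timeout=120s

# Run the test suite against the freshly created environment.
./run-integration-tests.sh "$NS"                # hypothetical test entrypoint
```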

&lt;h3&gt;
  
  
  Cloud-Native Development
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Requirements:&lt;/strong&gt; More and more companies want to &lt;a href="https://loft.sh/blog/the-journey-of-adopting-cloud-native-development/" rel="noopener noreferrer"&gt;give developers direct access to Kubernetes already during development&lt;/a&gt;. For this, the engineers need Kubernetes access they can work and experiment with, while the cost and management effort for these environments stay low.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; Individual clusters for every engineer are often not feasible, as this is an expensive solution that requires some Kubernetes knowledge and leads to high maintenance effort. For this reason, namespace-based multi-tenancy is sometimes used instead. However, the lack of strong isolation leaves the engineers with some risk of breaking the whole system. Furthermore, they usually do not have admin access to the cluster, so they are limited in terms of configuration options.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Virtual Cluster Solution:&lt;/strong&gt; Again, virtual Kubernetes clusters can be started by the engineers on-demand and without much prior Kubernetes knowledge. The engineers are then isolated very well from each other and work in secure dev sandboxes without the fear of affecting others. At the same time, this approach is very cost-efficient because only a low management effort is needed with a single physical cluster, the resources can be shared, and the virtual clusters can be easily deleted. Instead of deleting the virtual clusters, it is also possible to put them to sleep with a sleep mode such as the one &lt;a href="https://loft.sh/docs/sleep-mode/basics" rel="noopener noreferrer"&gt;Loft&lt;/a&gt; provides. This allows developers to resume their work where they left off, while no computing costs are incurred in the meantime.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI+ML experiments
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Requirements:&lt;/strong&gt; For Artificial Intelligence and Machine Learning applications, a lot of computing resources are often needed. To run experiments, engineers thus need access to a cloud environment with a lot of power. They also need to be able to easily replicate these environments to rerun the experiment with different parameters or to run them in parallel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; Due to the high resource requirements, cost is a major issue for AI and ML experiments. It is thus not reasonable to have an experimentation environment always running, as this would create high costs even when it is not used. From a workflow perspective, a shared environment leads to waiting times for engineers if another experiment is still running. The same goes for individual clusters, which take long to start and so also interrupt the engineers’ workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Virtual Cluster Solution:&lt;/strong&gt; Virtual clusters are a great alternative for artificial intelligence and machine learning engineers who want to run experiments. They can be created within seconds when they are needed and are easily replicable, so it is even possible to run experiments in parallel, which can accelerate many workflows. Even though vClusters are almost instantly available, they are not running idle: they can be easily shut off after the experiment is over, so the costs for these expensive experiments are minimized.&lt;/p&gt;

&lt;h2&gt;
  
  
  Which Virtual Cluster Solutions Exist
&lt;/h2&gt;

&lt;p&gt;Virtual Kubernetes clusters are still a very new topic, but they are already available today. The &lt;a href="https://github.com/kubernetes-sigs/multi-tenancy" rel="noopener noreferrer"&gt;multi-tenancy working group&lt;/a&gt; has presented an experimental solution in this area. &lt;a href="https://github.com/ibuildthecloud/k3v" rel="noopener noreferrer"&gt;k3v by Darren Shepherd&lt;/a&gt; is another proof of concept implementation in the open-source community.&lt;/p&gt;

&lt;p&gt;A more advanced &lt;a href="https://loft.sh/" rel="noopener noreferrer"&gt;platform solution for virtual Kubernetes clusters is Loft&lt;/a&gt;. Loft is a commercial solution that provides additional features on top of virtual clusters, such as a sleep mode and user management, in a complete platform that can be used off the shelf.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The concept of virtual Kubernetes clusters is still very new, but it could be a big advancement for Kubernetes, and the similarities to the groundbreaking move from physical servers to virtual machines are striking. The combination of a shared cluster with strong isolation merges efficiency with resilience and flexibility. This also makes Kubernetes more attractive for use cases where it is currently still hard to implement, such as CI/CD, testing, cloud-native development, and AI/ML experimentation, and could thus spur Kubernetes adoption even further.&lt;/p&gt;



&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@kelvin1987?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Kelvin Ang&lt;/a&gt; on &lt;a href="https://unsplash.com/?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Kubernetes Liveness Probes - Examples &amp; Common Pitfalls</title>
      <dc:creator>Levent Ogut</dc:creator>
      <pubDate>Mon, 15 Feb 2021 11:14:52 +0000</pubDate>
      <link>https://forem.com/loft/kubernetes-liveness-probes-examples-common-pitfalls-4mll</link>
      <guid>https://forem.com/loft/kubernetes-liveness-probes-examples-common-pitfalls-4mll</guid>
<description>&lt;p&gt;Kubernetes has disrupted traditional deployment methods and has become very popular. Although it is a great platform to deploy to, it also brings complexity and challenges. Kubernetes manages nodes and workloads seamlessly, and one of the great features of this containerized deployment platform is self-healing. For self-healing on the container level, we need health checks, called probes in Kubernetes, unless we depend solely on exit codes.&lt;/p&gt;

&lt;p&gt;Liveness probes check whether the pod is healthy; if it is deemed unhealthy, a restart of the container is triggered. This action differs from that of the &lt;a href="https://loft.sh/blog/kubernetes-readiness-probes-examples-common-pitfalls/" rel="noopener noreferrer"&gt;Readiness Probes I discussed in my previous post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let's look at the components of the probes and dive into how to configure and troubleshoot Liveness Probes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Probes
&lt;/h2&gt;

&lt;p&gt;Probes are health checks that are executed by kubelet.&lt;/p&gt;

&lt;p&gt;All probes have five parameters that are crucial to configure.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;initialDelaySeconds&lt;/strong&gt;: Time to wait after the container starts. (default: 0)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;periodSeconds&lt;/strong&gt;: Probe execution frequency (default: 10)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;timeoutSeconds&lt;/strong&gt;: Time to wait for the reply (default: 1)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;successThreshold&lt;/strong&gt;: Number of successful probe executions to mark the container healthy (default: 1)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;failureThreshold&lt;/strong&gt;: Number of failed probe executions to mark the container unhealthy (default: 3)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You need to analyze your application's behavior to set these probe parameters.&lt;/p&gt;
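&lt;p&gt;For example, here is a sketch of parameters for a hypothetical application that needs roughly 20 seconds to boot and usually answers within 2 seconds; the numbers are illustrative, not recommendations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        livenessProbe:
          initialDelaySeconds: 20  # skip checks while the app boots (~20s, hypothetical)
          periodSeconds: 10        # probe every 10 seconds
          timeoutSeconds: 3        # allow slower replies than the 1s default
          successThreshold: 1      # one success marks the container healthy
          failureThreshold: 3      # three consecutive failures trigger a restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;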

&lt;p&gt;There are three types of probes:&lt;/p&gt;

&lt;h3&gt;
  
  
  Exec Probe
&lt;/h3&gt;

&lt;p&gt;Exec probe executes a command inside the container (without a shell). The command's exit status determines health: zero is healthy; anything else is unhealthy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
          &lt;span class="na"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;successThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;exec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cat&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/etc/nginx/nginx.conf&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  TCP Probe
&lt;/h3&gt;

&lt;p&gt;TCP probe checks whether a TCP connection can be opened on the specified port. An open port is deemed a success; a closed port or a connection reset is deemed a failure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
          &lt;span class="na"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;successThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;tcpSocket&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  HTTP Probe
&lt;/h3&gt;

&lt;p&gt;HTTP probe makes an HTTP call, and the status code determines the healthy state: any code from 200 up to, but not including, 400 is deemed a success. Any other status code is deemed unhealthy.&lt;/p&gt;

&lt;p&gt;HTTP probes have these additional parameters to configure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;host&lt;/strong&gt;: IP address to connect to (default: pod IP)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;scheme&lt;/strong&gt;: HTTP scheme (default: HTTP)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;path&lt;/strong&gt;: HTTP path to call&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;httpHeaders&lt;/strong&gt;: Any custom headers you want to send.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;port&lt;/strong&gt;: Connection port.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Tip: If a Host header is required, set it via httpHeaders.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;An example of an HTTP probe:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
          &lt;span class="na"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;successThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;scheme&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTP&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
            &lt;span class="na"&gt;httpHeaders&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapplication1.com&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Liveness Probes in Kubernetes
&lt;/h2&gt;

&lt;p&gt;The kubelet executes liveness probes to see if the pod needs a restart. For example, let's say we have a microservice written in Go, and this microservice has a bug in some part of the code that causes it to freeze at runtime. To detect this state, we can configure a liveness probe that determines whether the microservice is frozen. That way, the microservice container is restarted and returns to a pristine condition.&lt;/p&gt;

&lt;p&gt;If your application exits gracefully when encountering such an issue, you won't strictly need liveness probes, but there can still be bugs you don't know about. In that case, the pod is restarted according to the configured (or default) restart policy.&lt;/p&gt;
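&lt;p&gt;The restart policy is set at the pod level and applies to all containers in the pod. A minimal sketch, with a hypothetical container name and image (Always is the default for Deployments):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    spec:
      restartPolicy: Always   # Always (default), OnFailure, or Never
      containers:
      - name: my-app          # hypothetical container name
        image: my-app:1.0.0   # hypothetical image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;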

&lt;h2&gt;
  
  
  Common Pitfalls for Liveness Probes
&lt;/h2&gt;

&lt;p&gt;Probes determine health only by the probe answers; they are not aware of the system dynamics of our microservice/application. If, for any reason, probe replies are delayed for longer than &lt;strong&gt;periodSeconds&lt;/strong&gt; times &lt;strong&gt;failureThreshold&lt;/strong&gt;, the microservice/application will be deemed unhealthy, and a restart of the pod will be triggered. Hence, it is important to configure the parameters according to the application's behavior.&lt;/p&gt;
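&lt;p&gt;As a rough illustration with the default values, the worst-case detection time is &lt;strong&gt;periodSeconds&lt;/strong&gt; times &lt;strong&gt;failureThreshold&lt;/strong&gt;, i.e. about 10 x 3 = 30 seconds. The values below are simply the defaults, shown to make the arithmetic concrete:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        livenessProbe:
          periodSeconds: 10    # probe runs every 10 seconds (default)
          failureThreshold: 3  # 3 consecutive failures mark the container unhealthy (default)
          # worst case: roughly 10s x 3 = 30s until a restart is triggered
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;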

&lt;h3&gt;
  
  
  Cascading Failures
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://loft.sh/blog/kubernetes-readiness-probes-examples-common-pitfalls/" rel="noopener noreferrer"&gt;Similar to readiness probes&lt;/a&gt;, liveness probes also can create a cascading failure if you misconfigure it. If the health endpoint has external dependencies or any other condition that can prevent an answer to be delivered, it can create a cascading failure; therefore, it is of paramount importance to configure the probe considering this behavior.&lt;/p&gt;

&lt;h3&gt;
  
  
  Crash Loop
&lt;/h3&gt;

&lt;p&gt;Let's assume that our application needs to read a large amount of data into a cache once in a while. Unresponsiveness at that time might cause a false positive: the probe fails even though the application is fine, the liveness probe restarts the container, and the container most probably enters a continuous cycle of restarts. In such a scenario, a readiness probe is more suitable: the pod is only removed from the service while it executes the maintenance task, and once it is ready to take traffic again, it starts responding to the probes.&lt;/p&gt;
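&lt;p&gt;A readiness probe for such a maintenance window could be sketched like this; the /ready path is a hypothetical endpoint that is assumed to fail while the cache is being reloaded:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        readinessProbe:
          periodSeconds: 5
          failureThreshold: 3
          httpGet:
            path: /ready    # hypothetical endpoint; fails during the cache reload
            port: 80
          # on failure the pod is only removed from the Service endpoints,
          # not restarted; it receives traffic again once the probe succeeds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;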

&lt;p&gt;The liveness endpoints on our microservice (the ones the probes hit) should check only the absolute minimum requirements that show the application is running. This way, liveness checks succeed, the pod is not restarted, and service traffic flows as it should.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example: Sample Nginx Deployment
&lt;/h2&gt;

&lt;p&gt;We will deploy Nginx as a sample app. Below are the deployment and service configurations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8s-probes&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
        &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
          &lt;span class="na"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;successThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;scheme&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTP&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
            &lt;span class="na"&gt;httpHeaders&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapplication1.com&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Write this configuration to a file called k8s-probes-deployment.yaml, and apply it with the &lt;code&gt;kubectl apply -f k8s-probes-deployment.yaml&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-http-port&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;sessionAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;None&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, write this configuration to a file called k8s-probes-svc.yaml and apply it with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; k8s-probes-svc.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Troubleshooting Liveness Probes
&lt;/h2&gt;

&lt;p&gt;There is no specific endpoint for the liveness probe; we should use the &lt;code&gt;kubectl describe pods &amp;lt;POD_NAME&amp;gt;&lt;/code&gt; command to see events and the current status.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we can see our pod is in a running state, and it is ready to receive traffic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                         READY   STATUS    RESTARTS   AGE
k8s-probes-7d979f58c-vd2rv   1/1     Running   0          6s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's check the applied configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; kubectl describe pods k8s-probes-7d979f58c-vd2rv | grep Liveness
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we can see the parameters we have configured.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Liveness:       http-get http://:80/ delay=5s timeout=1s period=5s #success=1 #failure=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's look at the events:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  45s   default-scheduler  Successfully assigned default/k8s-probes-7d979f58c-vd2rv to k8s-probes
  Normal  Pulling    44s   kubelet            Pulling image "nginx"
  Normal  Pulled     43s   kubelet            Successfully pulled image "nginx" in 1.117208685s
  Normal  Created    43s   kubelet            Created container nginx
  Normal  Started    43s   kubelet            Started container nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, there is no indication of failure or success; for success conditions, no event is recorded.&lt;/p&gt;

&lt;p&gt;Now let's change livenessProbe.httpGet.path to "/do-not-exists" and take a look at the pod status.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After changing the path, liveness probes will fail, and the container will be restarted.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                          READY   STATUS    RESTARTS   AGE
k8s-probes-595bcfdf57-428jt   1/1     Running   4          74s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see that the container has been restarted four times.&lt;/p&gt;

&lt;p&gt;Let's look at the events.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  53s                default-scheduler  Successfully assigned default/k8s-probes-595bcfdf57-428jt to k8s-probes
  Normal   Pulled     50s                kubelet            Successfully pulled image "nginx" in 1.078926208s
  Normal   Pulled     42s                kubelet            Successfully pulled image "nginx" in 978.826238ms
  Normal   Pulled     32s                kubelet            Successfully pulled image "nginx" in 971.627126ms
  Normal   Pulling    23s (x4 over 51s)  kubelet            Pulling image "nginx"
  Normal   Pulled     22s                kubelet            Successfully pulled image "nginx" in 985.155098ms
  Normal   Created    22s (x4 over 50s)  kubelet            Created container nginx
  Normal   Started    22s (x4 over 50s)  kubelet            Started container nginx
  Warning  Unhealthy  13s (x4 over 43s)  kubelet            Liveness probe failed: HTTP probe failed with statuscode: 404
  Normal   Killing    13s (x4 over 43s)  kubelet            Container nginx failed liveness probe, will be restarted
  Warning  BackOff    13s                kubelet            Back-off restarting failed container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see above, "Liveness probe failed: HTTP probe failed with statuscode: 404" indicates that the probe failed with HTTP code 404; the status code also aids in troubleshooting. Just after that, the kubelet informs us that it will restart the container.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Kubernetes liveness probes are lifesavers when our application is in an undetermined state: they return the application to a pristine condition by restarting the container. However, it is very important to configure them correctly. Of course, there is no single correct configuration; it all depends on your application and on how you want Kubernetes to act in each particular failure scenario. Set the values accordingly and test them through live scenarios.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://loft.sh/blog/kubernetes-startup-probes-examples-common-pitfalls/" rel="noopener noreferrer"&gt;Kubernetes Start Up Probes - Examples &amp;amp; Common Pitfalls&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://loft.sh/blog/kubernetes-readiness-probes-examples-common-pitfalls/" rel="noopener noreferrer"&gt;Kubernetes Readiness Probes - Examples &amp;amp; Common Pitfalls&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#probe-v1-core" rel="noopener noreferrer"&gt;Kubernetes Core Probe Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="noopener noreferrer"&gt;Configure Liveness, Readiness and Startup Probes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="noopener noreferrer"&gt;Kubernetes Container probes Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="noopener noreferrer"&gt;Container Lifecycle Hooks Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@giggiulena?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Mario Caruso&lt;/a&gt; on &lt;a href="https://unsplash.com/@giggiulena?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>containers</category>
      <category>docker</category>
    </item>
    <item>
      <title>Kubernetes Startup Probes - Examples &amp; Common Pitfalls</title>
      <dc:creator>Levent Ogut</dc:creator>
      <pubDate>Wed, 10 Feb 2021 21:30:11 +0000</pubDate>
      <link>https://forem.com/loft/kubernetes-startup-probes-examples-common-pitfalls-13n8</link>
      <guid>https://forem.com/loft/kubernetes-startup-probes-examples-common-pitfalls-13n8</guid>
<description>&lt;p&gt;Kubernetes brought an excellent deployment platform to work on. Even monolithic applications can be run in a container. However, for some of these monolithic applications, and for some microservices too, a slow start is a problem. Working around it by configuring readiness and liveness probes with a high initialDelaySeconds is not ideal. Startup probes were developed for this specific problem.&lt;/p&gt;
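&lt;p&gt;A startup probe can be combined with a liveness probe: the liveness probe is held back until the startup probe succeeds, so a slow-starting container gets up to failureThreshold times periodSeconds to come up. A sketch with illustrative values and a hypothetical /healthz path (here roughly 30 x 10s = 300 seconds of startup budget):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        startupProbe:
          httpGet:
            path: /healthz      # hypothetical health endpoint
            port: 80
          failureThreshold: 30  # allow up to ~30 x 10s = 300s to start
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /healthz
            port: 80
          periodSeconds: 10     # takes over once the startup probe succeeds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;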

&lt;h2&gt;
  
  
  Probes
&lt;/h2&gt;

&lt;p&gt;Probes are executed by the kubelet to determine the pods' health.&lt;/p&gt;

&lt;p&gt;All three types of probes have common settings to configure.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;initialDelaySeconds&lt;/strong&gt;: How many seconds to wait after the container has started (default: 0)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;periodSeconds&lt;/strong&gt;: Wait time between probe executions (default: 10)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;timeoutSeconds&lt;/strong&gt;: Timeout of the probe (default: 1)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;successThreshold&lt;/strong&gt;: Threshold needed to mark the container healthy. (default: 1)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;failureThreshold&lt;/strong&gt;: Threshold needed to mark the container unhealthy. (default: 3)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Configuring these parameters correctly is vital.&lt;/p&gt;

&lt;h3&gt;
  
  
  Exec Probe
&lt;/h3&gt;

&lt;p&gt;Exec probe runs a command inside the container as a health check; the command's exit code determines the success.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        &lt;span class="na"&gt;startupProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
          &lt;span class="na"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;successThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;exec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cat&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/etc/nginx/nginx.conf&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  TCP Probe
&lt;/h3&gt;

&lt;p&gt;TCP probe checks whether the specified port is open; an open port indicates success.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        &lt;span class="na"&gt;startupProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
          &lt;span class="na"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;successThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;tcpSocket&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  HTTP Probe
&lt;/h3&gt;

&lt;p&gt;The HTTP probe sends an HTTP GET request with the defined parameters; any response code from 200 up to (but not including) 400 indicates success.&lt;/p&gt;

&lt;p&gt;The HTTP probe has additional options to configure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;host&lt;/strong&gt;: Host/IP to connect to (default: pod IP)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;scheme&lt;/strong&gt;: Scheme to use when making the request (default: HTTP)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;path&lt;/strong&gt;: Path to access on the HTTP server&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;httpHeaders&lt;/strong&gt;: An array of headers defined as header/value tuples.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;port&lt;/strong&gt;: Port to connect to&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Tip: If you need to set the Host header, use httpHeaders rather than the host field.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        &lt;span class="na"&gt;startupProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
          &lt;span class="na"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;successThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;scheme&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTP&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
            &lt;span class="na"&gt;httpHeaders&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapplication1.com&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Startup Probes in Kubernetes
&lt;/h2&gt;

&lt;p&gt;Startup probes are a relatively new feature, supported as a beta in Kubernetes v1.18.&lt;br&gt;
These probes are very useful for slow-starting applications; using one is much better than increasing initialDelaySeconds on readiness or liveness probes.&lt;br&gt;
A startup probe gives our application time to become ready; combined with readiness and liveness probes, it can dramatically increase our applications' availability.&lt;/p&gt;
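&lt;p&gt;For illustration, a startup probe is typically paired with a liveness probe so that liveness checks only take over once the application is up. The fragment below is an illustrative sketch, not part of the deployment used later in this article; assuming the application listens on port 80, &lt;code&gt;failureThreshold: 30&lt;/code&gt; with &lt;code&gt;periodSeconds: 10&lt;/code&gt; gives the container up to 300 seconds to start before the liveness probe begins to apply.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        # Illustrative sketch: port 80 is an assumption
        startupProbe:
          tcpSocket:
            port: 80
          periodSeconds: 10
          failureThreshold: 30   # up to 30 x 10s = 300s to start
        livenessProbe:
          tcpSocket:
            port: 80
          periodSeconds: 10
          failureThreshold: 3    # runs only after the startup probe succeeds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;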
&lt;h2&gt;
  
  
  Example: Sample Nginx Deployment
&lt;/h2&gt;

&lt;p&gt;Let's deploy Nginx as a sample app and see startup probes in action.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8s-probes&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
        &lt;span class="na"&gt;startupProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
          &lt;span class="na"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;successThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
          &lt;span class="na"&gt;exec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cat&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/etc/nginx/nginx.conf&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a file, paste the manifest above into it (here named k8s-probes-deployment.yaml), and apply it with the &lt;code&gt;kubectl apply -f k8s-probes-deployment.yaml&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-http-port&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;sessionAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;None&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a file, paste the manifest above into it (here named k8s-probes-svc.yaml), and apply it with the &lt;code&gt;kubectl apply -f k8s-probes-svc.yaml&lt;/code&gt; command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Pitfalls for Startup Probes
&lt;/h2&gt;

&lt;p&gt;Although it is great to have such a probe, especially for a legacy application or any application that might take a while to become ready, it is important that its parameters are configured correctly. Otherwise, a misconfigured probe can break our application's availability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Restart Loop
&lt;/h3&gt;

&lt;p&gt;Startup probes, if misconfigured, can cause a loop of restarts. Let's assume we have a Java application that takes a while to become ready. If we don't allow enough time for the startup probe to get a successful response, the kubelet restarts the container prematurely, and the cycle repeats as a loop of restarts.&lt;/p&gt;
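&lt;p&gt;A useful rule of thumb: make sure &lt;code&gt;initialDelaySeconds + periodSeconds × failureThreshold&lt;/code&gt; comfortably exceeds the worst-case startup time. As an illustrative sketch for a hypothetical Java application listening on port 8080 that can take up to two minutes to start, the following configuration allows up to 160 seconds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        # Illustrative sketch: port 8080 and the timings are assumptions
        startupProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
          failureThreshold: 15   # 10s + 15 x 10s = 160s before restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;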

&lt;h2&gt;
  
  
  Troubleshooting Startup Probes
&lt;/h2&gt;

&lt;p&gt;After applying the deployment file, we should see the pod is up and running; let's have a look.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see that the pod is up and running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                          READY   STATUS    RESTARTS   AGE
k8s-probes-6cbf7ccbf8-97hz5   1/1     Running   0          7s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Have a look at the events using the &lt;code&gt;kubectl describe pods&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe pods k8s-probes-6cbf7ccbf8-97hz5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's scroll down to events.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/k8s-probes-6cbf7ccbf8-97hz5 to k8s-probes
  Normal  Pulling    2s    kubelet            Pulling image "nginx"
  Normal  Pulled     1s    kubelet            Successfully pulled image "nginx" in 925.688112ms
  Normal  Created    1s    kubelet            Created container nginx
  Normal  Started    1s    kubelet            Started container nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the probe was successful, and no error or warning event is recorded.&lt;/p&gt;

&lt;p&gt;Let's check the applied configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe pods k8s-probes-6cbf7ccbf8-qcpt7 | grep Startup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see that the startup probe is configured with the parameters we have set.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Startup:        exec [cat /etc/nginx/nginx.conf] delay=1s timeout=1s period=2s #success=1 #failure=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now change the probed file path to &lt;code&gt;/etc/nginx/nginx.conf-dont-exists&lt;/code&gt; in the deployment file and apply it again with &lt;code&gt;kubectl apply -f k8s-probes-deployment.yaml&lt;/code&gt;.&lt;/p&gt;
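&lt;p&gt;The modified probe section of the deployment would then look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        startupProbe:
          initialDelaySeconds: 1
          periodSeconds: 2
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 1
          exec:
            command:
            - cat
            - /etc/nginx/nginx.conf-dont-exists
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;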

&lt;p&gt;Let's check the events of the pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe pods k8s-probes-5fcc896b6f-97wpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's scroll down to events.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  13m                     default-scheduler  Successfully assigned default/k8s-probes-5fcc896b6f-97wpg to k8s-probes
  Normal   Pulled     13m                     kubelet            Successfully pulled image "nginx" in 944.990287ms
  Normal   Pulled     12m                     kubelet            Successfully pulled image "nginx" in 972.83673ms
  Normal   Pulled     12m                     kubelet            Successfully pulled image "nginx" in 958.559546ms
  Normal   Pulled     11m                     kubelet            Successfully pulled image "nginx" in 1.056812046s
  Normal   Created    11m (x4 over 13m)       kubelet            Created container nginx
  Normal   Started    11m (x4 over 13m)       kubelet            Started container nginx
  Warning  Unhealthy  11m (x4 over 13m)       kubelet            Startup probe failed: cat: /etc/nginx/nginx.conf-dont-exists: No such file or directory
  Normal   Pulling    9m44s (x5 over 13m)     kubelet            Pulling image "nginx"
  Normal   Killing    8m21s (x5 over 12m)     kubelet            Container nginx failed startup probe, will be restarted
  Warning  BackOff    3m41s (x13 over 7m16s)  kubelet            Back-off restarting the failed container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the startup probe executed the command; however, because the file does not exist, the command returned a non-zero exit code, so the probe failed and the kubelet kept restarting the container.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Startup probes are very helpful for determining whether our application has started correctly.&lt;br&gt;
We explored the configuration options on a sample Nginx application, using the presence of the configuration file as the probe; this models an application whose configuration is generated dynamically, e.g., rendered from etcd or a similar key-value store.&lt;br&gt;
Although startup probes are very useful, we have also seen their side effects. Make sure you allow enough time for your application to start up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://loft.sh/blog/kubernetes-liveness-probes-examples-and-common-pitfalls" rel="noopener noreferrer"&gt;Kubernetes Liveness Probes - Examples &amp;amp; Common Pitfalls&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://loft.sh/blog/kubernetes-readiness-probes-examples-and-common-pitfalls" rel="noopener noreferrer"&gt;Kubernetes Readiness Probes - Examples &amp;amp; Common Pitfalls&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#probe-v1-core" rel="noopener noreferrer"&gt;Kubernetes Core Probe Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="noopener noreferrer"&gt;Configure Liveness, Readiness and Startup Probes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="noopener noreferrer"&gt;Kubernetes Container probes Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="noopener noreferrer"&gt;Container Lifecycle Hooks Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@bradencollum?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Braden Collum&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/start?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
