<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kubestack</title>
    <description>The latest articles on Forem by Kubestack (@kubestack).</description>
    <link>https://forem.com/kubestack</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2939%2Fb1a5dae6-cdda-4a09-8779-5e6c0b00125c.png</url>
      <title>Forem: Kubestack</title>
      <link>https://forem.com/kubestack</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kubestack"/>
    <language>en</language>
    <item>
      <title>Goodbye Cloud, Hello CLI: Sunsetting Kubestack Cloud</title>
      <dc:creator>Philipp Strube</dc:creator>
      <pubDate>Tue, 09 May 2023 19:53:11 +0000</pubDate>
      <link>https://forem.com/kubestack/goodbye-cloud-hello-cli-sunsetting-kubestack-cloud-12l4</link>
      <guid>https://forem.com/kubestack/goodbye-cloud-hello-cli-sunsetting-kubestack-cloud-12l4</guid>
      <description>&lt;p&gt;I've recently released a major update for Kubestack, the &lt;a href="https://www.kubestack.com/"&gt;Terraform framework for Kubernetes platform engineering teams&lt;/a&gt;. This update moves all functionality previously provided by Kubestack Cloud into the &lt;code&gt;kbst&lt;/code&gt; CLI.&lt;/p&gt;

&lt;p&gt;I decided to make this change because Kubestack Cloud only improved the developer experience on day one. Once the platform was exported to Terraform, the UI was no longer helpful on day two and all following days.&lt;/p&gt;

&lt;p&gt;But my goal is to improve the developer experience and day-to-day lives of platform engineering teams at all times. This latest &lt;a href="https://github.com/kbst/kbst/releases/tag/v0.2.1"&gt;&lt;code&gt;kbst&lt;/code&gt; release&lt;/a&gt; is a major step towards achieving this goal.&lt;/p&gt;

&lt;p&gt;If this is the first time you're hearing about Kubestack Cloud: it was a browser-based UI that let users design a Kubernetes platform by following a step-by-step wizard and then export and download the designed platform's Terraform code.&lt;/p&gt;

&lt;p&gt;However, the disconnect between the UI and the code in the repository on a developer's local machine diminished the value of Kubestack Cloud on day two and beyond. To address this issue, I moved all of this functionality into the &lt;code&gt;kbst&lt;/code&gt; CLI, where access to the local code is easier.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;kbst&lt;/code&gt; CLI, which previously only scaffolded new repositories, now has CRUD (create, read, update, delete) functionality for clusters, node pools, and services. This means users can use the CLI to scaffold Terraform code to add or remove clusters, node pools, or services inside their existing Kubestack repositories.&lt;/p&gt;

&lt;p&gt;If you want to see the new CLI in action, give the &lt;a href="https://www.kubestack.com/framework/tutorial/"&gt;updated tutorial a try&lt;/a&gt; or read the documentation on adding and removing &lt;a href="https://www.kubestack.com/framework/documentation/clusters/"&gt;cluster modules&lt;/a&gt;, &lt;a href="https://www.kubestack.com/framework/documentation/node-pools/"&gt;node pool modules&lt;/a&gt; or &lt;a href="https://www.kubestack.com/framework/documentation/services/"&gt;service modules&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But if you'd like to learn more about how this works under the hood, keep reading.&lt;/p&gt;

&lt;h2&gt;How this works&lt;/h2&gt;

&lt;p&gt;If you're already familiar with Kubestack, you know that Kubestack repositories follow a convention-over-configuration approach to define the clusters, node pools, and services that make up a Kubernetes platform in a single Terraform codebase. At the root of each repository, there are several &lt;code&gt;.tf&lt;/code&gt; files that follow a specific naming convention. These files contain module calls that define each platform component.&lt;/p&gt;
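
&lt;p&gt;To make the convention concrete, here is a sketch of what the root of such a repository might look like. The file names below are illustrative; the actual names depend on the clusters, node pools, and services in your platform.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eks_example_cluster.tf            # cluster module call
eks_example_node_pool_extra.tf    # node pool module call
eks_example_service_nginx.tf      # service module call
eks_example_providers.tf          # provider configuration
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;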

&lt;p&gt;To add or remove components, or update the versions of existing component modules, the &lt;code&gt;kbst&lt;/code&gt; CLI parses the necessary subset of Terraform code to understand the components of the platform. You can list the Kubestack component modules it discovered using the &lt;code&gt;kbst list&lt;/code&gt; command. By appending &lt;code&gt;--all&lt;/code&gt; to the list command, you can also see any non-Kubestack modules.&lt;/p&gt;
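
&lt;p&gt;For example, inside a Kubestack repository:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# list the Kubestack component modules discovered in the repository
kbst list

# additionally include any non-Kubestack modules
kbst list --all
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;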

&lt;p&gt;You can add node pools or services to existing clusters or add more clusters from the same or even a different cloud provider. The CLI will scaffold the additional required &lt;code&gt;.tf&lt;/code&gt; files and update the Dockerfile's &lt;code&gt;FROM&lt;/code&gt; line to specify the correct image, in case of changing from a single to a multi-cloud environment or vice versa. Likewise, it will also remove module calls and the respective &lt;code&gt;.tf&lt;/code&gt; files if you remove a service, a node pool or even a cluster from your platform.&lt;/p&gt;
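
&lt;p&gt;As a rough sketch, adding and removing components from the command line looks something like this. The subcommand names and arguments shown here are illustrative; consult the documentation for the exact syntax.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# scaffold the .tf files for an additional cluster
kbst add cluster aks

# scaffold a node pool or a service for an existing cluster
kbst add node-pool eks
kbst add service nginx

# remove a component and its .tf files again
kbst remove
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;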

&lt;p&gt;But don't worry: the &lt;code&gt;kbst&lt;/code&gt; CLI &lt;strong&gt;only changes local files&lt;/strong&gt; and never changes any cloud or Kubernetes resources.&lt;/p&gt;

&lt;p&gt;You can use it to avoid writing repetitive boilerplate code or manually deleting module calls and Terraform files, while still owning your codebase and retaining the ability to extend or modify the code to meet specific needs.&lt;/p&gt;

&lt;p&gt;Once you're happy with the code, you can follow the &lt;a href="https://www.kubestack.com/framework/documentation/gitops-process/"&gt;Kubestack GitOps workflow&lt;/a&gt; to peer-review, validate, and promote changes to your platform's environments as usual.&lt;/p&gt;

&lt;p&gt;In conclusion, the shift from Kubestack Cloud to the &lt;code&gt;kbst&lt;/code&gt; CLI provides a better developer experience not only on day one, but also on day two, and makes it easier for platform engineering teams to manage their Kubernetes-based platforms.&lt;/p&gt;

&lt;h2&gt;What happened to the platforms I designed with Kubestack Cloud?&lt;/h2&gt;

&lt;p&gt;If you have previously designed a platform with Kubestack Cloud, you can sign in with your existing user account and will see instructions on how to scaffold your existing platforms using the new CLI.&lt;/p&gt;

&lt;p&gt;Here's an example screenshot of what that will look like.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BLqqqKSW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hr1gr9ms8yseak67cpek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BLqqqKSW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hr1gr9ms8yseak67cpek.png" alt="Screenshot of the Kubestack Cloud export" width="800" height="912"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>kubernetes</category>
      <category>platformengineering</category>
      <category>gitops</category>
    </item>
    <item>
      <title>Getting rigorous about investing in the Kubestack project</title>
      <dc:creator>Philipp Strube</dc:creator>
      <pubDate>Sun, 27 Nov 2022 14:52:30 +0000</pubDate>
      <link>https://forem.com/kubestack/getting-rigorous-about-investing-in-the-kubestack-project-4oj3</link>
      <guid>https://forem.com/kubestack/getting-rigorous-about-investing-in-the-kubestack-project-4oj3</guid>
      <description>&lt;p&gt;Sometimes when you spend a long time solving a problem, it makes it harder to see your solution clearly.&lt;/p&gt;

&lt;p&gt;In 12 years of helping companies adopt modern cloud computing, I saw so many of the same snags repeating across multiple organizations. From these lessons, I built Kubestack as guardrails to make it easier to avoid the pain in the first place. Kubestack has been used in companies large and small for years, but I haven’t always known where it has been most helpful to its users. Without knowing this, it’s hard to bring more people in as both users and contributors. So earlier this year, I contracted with &lt;a href="https://www.anahevesi.com/" rel="noopener noreferrer"&gt;Ana Hevesi&lt;/a&gt; to support Kubestack's open source efforts.&lt;/p&gt;

&lt;p&gt;Ana operates a developer experience consultancy. After working in technical community building for companies like Stack Overflow and Nodejitsu, Ana now works with devtools founders to create evidence-based approaches for growing their ecosystems.&lt;/p&gt;

&lt;p&gt;We started by doing some research into how Kubestack serves your goals.&lt;/p&gt;

&lt;h2&gt;Research methods&lt;/h2&gt;

&lt;p&gt;Our objective was to learn about users’ career trajectories and aspirations, and get a clear picture of what role Kubestack plays in your success.&lt;/p&gt;

&lt;p&gt;Ana recommended we aim for five interviews, citing it as a good “Goldilocks zone” for the initial quantity of data to work with. I then reached out to a spectrum of new and long-tenured Kubestack users to ask for their time in a 60-minute user interview.&lt;/p&gt;

&lt;p&gt;Ana wrote a standard interview script which included bandwidth for conversational “side quests.” Afterwards, Ana analyzed the recordings, picked out recurring themes, and came to me with conclusions and recommendations.&lt;/p&gt;

&lt;h2&gt;Areas of positive impact&lt;/h2&gt;

&lt;h3&gt;Kubestack helps careers&lt;/h3&gt;

&lt;p&gt;Participants attributed their use of Kubestack to positive career outcomes, such as developing a reputation for reliably delivering for users, or scaling on a tight timeframe with limited prior experience. Others reported it was a key learning tool when they were just starting as platform engineers.&lt;/p&gt;

&lt;h3&gt;Works so well it disappears&lt;/h3&gt;

&lt;p&gt;The most consistent feedback we received was that users can assume Kubestack is just going to work. Multiple participants had been relying on the framework for many months without needing to give it a second thought.&lt;/p&gt;

&lt;h2&gt;Areas to improve&lt;/h2&gt;

&lt;h3&gt;Documentation for advanced features needs improvement&lt;/h3&gt;

&lt;p&gt;Kubestack works great for months on end for most orgs setting up their first K8s cluster, but those who wanted to modify Kubestack beyond existing use cases told us that error messages and upgrade processes were opaque.&lt;/p&gt;

&lt;h3&gt;Backwards compatibility and multi-cloud support present friction to open source contributions&lt;/h3&gt;

&lt;p&gt;Adding new features requires working knowledge of both Terraform and cloud provider functionality across historical versions, and at times, their interactions with one another. Furthermore, while Kubestack is committed to supporting EKS, AKS, and GKE, a contributor may wish to implement functionality for only one of these cloud providers. Inviting PRs from a wider array of contributors requires either tiered support for legacy versions or a clearly defined contribution scope to accommodate this complexity.&lt;/p&gt;

&lt;h2&gt;How we’re applying these findings&lt;/h2&gt;

&lt;h3&gt;Connecting with the people who need us most&lt;/h3&gt;

&lt;p&gt;Kubestack makes a huge impact on early-stage teams and emerging professionals. We’re exploring ways to better tailor our communication and outreach to make sure they know about the opportunities this framework provides, improving both adoption and contributions to the project. Kubestack only succeeds because you succeed.&lt;/p&gt;

&lt;h3&gt;Benefits before features&lt;/h3&gt;

&lt;p&gt;The current iteration of Kubestack’s landing page assumes a fairly high level of existing knowledge of the platform engineering space. As such, an upcoming iteration of the Kubestack site will aim to engage folks who aren’t already deep in the jargon and progressively bring them into the fold, while still being legible to seasoned professionals.&lt;/p&gt;

&lt;h3&gt;Open source participation onramps&lt;/h3&gt;

&lt;p&gt;Since enabling users to learn from each other and communicating where the project is going are important parts of growing an open source community, we’ll be experimenting with office hours and public communication about recent releases. Scheduling details are coming soon.&lt;/p&gt;

&lt;h2&gt;Leveling up, together&lt;/h2&gt;

&lt;p&gt;I created Kubestack so that folks coming to Kubernetes for the first time could take immediate advantage of the separation of concerns that containers provide. User research says that this works as intended!&lt;/p&gt;

&lt;p&gt;Now comes the iterative task of communicating my own knowledge and experience in ways that make it easier to build together, while learning from your use of the project to fill in its gaps. Ultimately, the intent is a healthy community where we’re all working together to make the project better serve your needs.&lt;/p&gt;

&lt;p&gt;Finally, a big thank you to Tomas, AJ, Brendan, Christoph, and Mark for your time and candor. Kubestack is better for it.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>A Better Way to Provision Kubernetes Resources Using Terraform</title>
      <dc:creator>Philipp Strube</dc:creator>
      <pubDate>Wed, 04 May 2022 18:01:47 +0000</pubDate>
      <link>https://forem.com/kubestack/a-better-way-to-provision-kubernetes-resources-using-terraform-355n</link>
      <guid>https://forem.com/kubestack/a-better-way-to-provision-kubernetes-resources-using-terraform-355n</guid>
      <description>&lt;p&gt;Terraform is immensely powerful when it comes to defining and maintaining infrastructure as code. In combination with a declarative API, like a cloud provider API, it can determine, preview, and apply changes to the codified infrastructure.&lt;/p&gt;

&lt;p&gt;Consequently, it is common for teams to use Terraform to define the infrastructure of their Kubernetes clusters. And as a platform to build platforms, Kubernetes commonly requires a number of additional services before workloads can be deployed. Think of ingress controllers, logging and monitoring agents, and so on. But despite Kubernetes' own declarative API, and the obvious benefits of maintaining a cluster's infrastructure and services from the same infrastructure-as-code repository, Terraform is far from the first choice to provision Kubernetes resources.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.kubestack.com/"&gt;Kubestack&lt;/a&gt;, the open-source Terraform framework I maintain, I'm on a mission to provide the best developer experience for teams working with Terraform and Kubernetes. And unified provisioning of all platform components, from cluster infrastructure to cluster services, is something I consider crucial in my relentless pursuit of said developer experience.&lt;/p&gt;

&lt;p&gt;Because of that, the two common approaches to provision Kubernetes resources using Terraform never really appealed to me.&lt;/p&gt;

&lt;p&gt;On the one hand, there's the Kubernetes provider. And while it integrates Kubernetes resources into Terraform, maintaining the Kubernetes resources in HCL is a lot of effort, especially for Kubernetes YAML you consume from upstream. On the other hand, there are the Helm provider and the Kubectl provider. These two use native YAML instead of HCL, but do not integrate the Kubernetes resources into the Terraform state and, as a consequence, its lifecycle.&lt;/p&gt;

&lt;p&gt;I believe my Kustomization provider based modules are a better alternative because of three distinct benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Like Kustomize, the upstream YAML is left untouched, meaning upstream updates require minimal maintenance effort.&lt;/li&gt;
&lt;li&gt;By defining the Kustomize overlay in HCL, all Kubernetes resources are fully customizable using values from Terraform.&lt;/li&gt;
&lt;li&gt;Each Kubernetes resource is tracked individually in Terraform state, so diffs and plans show the changes to the actual Kubernetes resources.&lt;/li&gt;
&lt;/ol&gt;
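
&lt;p&gt;As a brief sketch of what this looks like in practice, a Kustomization-provider-based module is called like any other Terraform module. The source address, version, and configuration values below are placeholders; see the Kubestack catalog documentation for real values.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;module "nginx_ingress" {
  providers = {
    kustomization = kustomization
  }

  # illustrative source address and version
  source  = "kbst.xyz/catalog/nginx/kustomization"
  version = "1.2.1-kbst.0"

  configuration = {
    # the overlay, defined in HCL, customizes the untouched upstream YAML
    apply = {
      replicas = [{
        name  = "ingress-nginx-controller"
        count = 2
      }]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;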

&lt;p&gt;To make these benefits less abstract, let's compare my Nginx ingress module with one using the Helm provider to provision Nginx ingress.&lt;/p&gt;

&lt;p&gt;The Terraform configuration for both examples is available in &lt;a href="https://github.com/kbst/terraform-helm-vs-kustomize"&gt;this repository&lt;/a&gt;. Let's take a look at the Helm module first.&lt;/p&gt;

&lt;h2&gt;The Helm-based module&lt;/h2&gt;

&lt;p&gt;Usage of the module is straightforward. First, configure the Kubernetes and Helm providers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;config_path&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~/.kube/config"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"helm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;kubernetes&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;config_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~/.kube/config"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then define a &lt;code&gt;kubernetes_namespace&lt;/code&gt; resource and call the &lt;code&gt;terraform-module/release/helm&lt;/code&gt; module.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes_namespace"&lt;/span&gt; &lt;span class="s2"&gt;"nginx_ingress"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;metadata&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"nginx_ingress"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-module/release/helm"&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2.7.0"&lt;/span&gt;

  &lt;span class="nx"&gt;namespace&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;kubernetes_namespace&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;repository&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://kubernetes.github.io/ingress-nginx"&lt;/span&gt;

  &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
    &lt;span class="nx"&gt;version&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"4.1.0"&lt;/span&gt;
    &lt;span class="nx"&gt;chart&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
    &lt;span class="nx"&gt;force_update&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;wait&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="nx"&gt;recreate_pods&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="nx"&gt;deploy&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"replicaCount"&lt;/span&gt;
      &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you now run &lt;code&gt;terraform plan&lt;/code&gt; for this configuration, you'll see the resources to be created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;Terraform&lt;/span&gt; &lt;span class="nx"&gt;will&lt;/span&gt; &lt;span class="nx"&gt;perform&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;following&lt;/span&gt; &lt;span class="nx"&gt;actions&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;

  &lt;span class="c1"&gt;# kubernetes_namespace.nginx_ingress will be created&lt;/span&gt;
  &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes_namespace"&lt;/span&gt; &lt;span class="s2"&gt;"nginx_ingress"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;

      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;metadata&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;generation&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;resource_version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;uid&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;# module.nginx_ingress.helm_release.this[0] will be created&lt;/span&gt;
  &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"helm_release"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;atomic&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;chart&lt;/span&gt;                      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;cleanup_on_fail&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;create_namespace&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;dependency_update&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;disable_crd_hooks&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;disable_openapi_validation&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;disable_webhooks&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;force_update&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;                         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;lint&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;manifest&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;max_history&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;metadata&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;namespace&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;recreate_pods&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;render_subchart_notes&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;replace&lt;/span&gt;                    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;repository&lt;/span&gt;                 &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://kubernetes.github.io/ingress-nginx"&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;reset_values&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;reuse_values&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;skip_crds&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;status&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"deployed"&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;timeout&lt;/span&gt;                    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;values&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;verify&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;version&lt;/span&gt;                    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"4.1.0"&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;wait&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;wait_for_jobs&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"replicaCount"&lt;/span&gt;
          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;Plan&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;add&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;change&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;destroy&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And this is the key issue with how Helm is integrated into the Terraform workflow. The plan does not tell you what Kubernetes resources will be created for the Nginx ingress controller. And neither are the Kubernetes resources tracked in Terraform state, as shown by the apply output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubernetes_namespace&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creating&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;
&lt;span class="nx"&gt;kubernetes_namespace&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creation&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;ingress&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;nginx&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;helm_release&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;this&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creating&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;helm_release&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;this&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creation&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;ingress&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;nginx&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="nx"&gt;Apply&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt;&lt;span class="err"&gt;!&lt;/span&gt; &lt;span class="nx"&gt;Resources&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="nx"&gt;added&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;changed&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;destroyed&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
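&lt;p&gt;One way to verify this is to list the Terraform state after the apply above. This is just a sketch, assuming the configuration shown earlier: only the namespace and the release address appear, while the chart's Deployments, Services and RBAC objects remain invisible to Terraform.&lt;/p&gt;

```shell
# List all resource addresses tracked in Terraform state.
# With the Helm-based module, the chart's Kubernetes resources are not
# tracked individually -- only the helm_release resource itself is,
# alongside the namespace created outside the chart.
terraform state list
```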



&lt;p&gt;Similarly, when planning a change, there is again no way to tell how the Kubernetes resources will change.&lt;/p&gt;

&lt;p&gt;So if you increase the &lt;code&gt;replicaCount&lt;/code&gt; value of the Helm chart, the Terraform plan merely shows the change to the &lt;code&gt;helm_release&lt;/code&gt; resource.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"replicaCount"&lt;/span&gt;
    &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What will the changes to the Kubernetes resources be? And more importantly, is it a simple in-place update, or does it require a destroy-and-recreate? Looking at the plan, you have no way of knowing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;Terraform&lt;/span&gt; &lt;span class="nx"&gt;will&lt;/span&gt; &lt;span class="nx"&gt;perform&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;following&lt;/span&gt; &lt;span class="nx"&gt;actions&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;

  &lt;span class="c1"&gt;# module.nginx_ingress.helm_release.this[0] will be updated in-place&lt;/span&gt;
  &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"helm_release"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;id&lt;/span&gt;                         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
        &lt;span class="nx"&gt;name&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
        &lt;span class="c1"&gt;# (27 unchanged attributes hidden)&lt;/span&gt;

      &lt;span class="err"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="err"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"replicaCount"&lt;/span&gt; &lt;span class="err"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;
          &lt;span class="err"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2"&lt;/span&gt; &lt;span class="err"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"replicaCount"&lt;/span&gt;
          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"3"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;Plan&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;add&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;change&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;destroy&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Kustomize-based module
&lt;/h2&gt;

&lt;p&gt;Now, let's take a look at the same steps for the Kustomize-based module. Usage is similar: first, require the &lt;code&gt;kbst/kustomization&lt;/code&gt; provider and configure it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;kustomization&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kbst/kustomization"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"kustomization"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;kubeconfig_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~/.kube/config"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then call the &lt;code&gt;nginx/kustomization&lt;/code&gt; module.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"nginx_ingress"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kbst.xyz/catalog/nginx/kustomization"&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1.1.3-kbst.1"&lt;/span&gt;

  &lt;span class="nx"&gt;configuration_base_key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"default"&lt;/span&gt;
  &lt;span class="nx"&gt;configuration&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;replicas&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
        &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx-controller"&lt;/span&gt;
        &lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
      &lt;span class="p"&gt;}]&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Unlike with the Helm-based module, running terraform plan now shows each Kubernetes resource and its actual configuration individually. To keep this blog post palatable, I only show the details for the namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;Terraform&lt;/span&gt; &lt;span class="nx"&gt;will&lt;/span&gt; &lt;span class="nx"&gt;perform&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;following&lt;/span&gt; &lt;span class="nx"&gt;actions&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;

  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p0["_/Namespace/_/ingress-nginx"] will be created&lt;/span&gt;
  &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"kustomization_resource"&lt;/span&gt; &lt;span class="s2"&gt;"p0"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;manifest&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jsonencode&lt;/span&gt;&lt;span class="err"&gt;(&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;apiVersion&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"v1"&lt;/span&gt;
              &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;kind&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Namespace"&lt;/span&gt;
              &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;metadata&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                  &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;annotations&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"app.kubernetes.io/version"&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"v0.46.0"&lt;/span&gt;
                      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"catalog.kubestack.com/heritage"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kubestack.com/catalog/nginx"&lt;/span&gt;
                      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"catalog.kubestack.com/variant"&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"base"&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;
                  &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;labels&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"app.kubernetes.io/component"&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-controller"&lt;/span&gt;
                      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"app.kubernetes.io/instance"&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
                      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"app.kubernetes.io/managed-by"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kubestack"&lt;/span&gt;
                      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"app.kubernetes.io/name"&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"nginx"&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;
                  &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="err"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["_/ConfigMap/ingress-nginx/ingress-nginx-controller"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["_/Service/ingress-nginx/ingress-nginx-controller"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["_/Service/ingress-nginx/ingress-nginx-controller-admission"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["_/ServiceAccount/ingress-nginx/ingress-nginx"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["_/ServiceAccount/ingress-nginx/ingress-nginx-admission"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["apps/Deployment/ingress-nginx/ingress-nginx-controller"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["batch/Job/ingress-nginx/ingress-nginx-admission-create"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["batch/Job/ingress-nginx/ingress-nginx-admission-patch"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["networking.k8s.io/IngressClass/_/nginx"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["rbac.authorization.k8s.io/ClusterRole/_/ingress-nginx"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["rbac.authorization.k8s.io/ClusterRole/_/ingress-nginx-admission"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["rbac.authorization.k8s.io/ClusterRoleBinding/_/ingress-nginx"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["rbac.authorization.k8s.io/ClusterRoleBinding/_/ingress-nginx-admission"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["rbac.authorization.k8s.io/Role/ingress-nginx/ingress-nginx"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["rbac.authorization.k8s.io/Role/ingress-nginx/ingress-nginx-admission"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["rbac.authorization.k8s.io/RoleBinding/ingress-nginx/ingress-nginx"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["rbac.authorization.k8s.io/RoleBinding/ingress-nginx/ingress-nginx-admission"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p2["admissionregistration.k8s.io/ValidatingWebhookConfiguration/_/ingress-nginx-admission"] will be created&lt;/span&gt;

&lt;span class="nx"&gt;Plan&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;19&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;add&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;change&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;destroy&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The apply output, again, lists each individual Kubernetes resource. And because the modules use explicit &lt;code&gt;depends_on&lt;/code&gt; to handle namespaces and CRDs first and webhooks last, resources are reliably applied in the correct order.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kustomization_resource&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;p0&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"_/Namespace/_/ingress-nginx"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creating&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kustomization_resource&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;p0&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"_/Namespace/_/ingress-nginx"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creation&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;369&lt;/span&gt;&lt;span class="nx"&gt;e8643&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;ad33&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="nx"&gt;eb4&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;95&lt;/span&gt;&lt;span class="nx"&gt;dc&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;f506cef4a198&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kustomization_resource&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;p1&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"rbac.authorization.k8s.io/RoleBinding/ingress-nginx/ingress-nginx"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creating&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kustomization_resource&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;p1&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"batch/Job/ingress-nginx/ingress-nginx-admission-create"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creating&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;

&lt;span class="err"&gt;...&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kustomization_resource&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;p1&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"batch/Job/ingress-nginx/ingress-nginx-admission-patch"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creation&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;58346878&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;70&lt;/span&gt;&lt;span class="nx"&gt;bd&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="nx"&gt;f2&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;af61&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;2730&lt;/span&gt;&lt;span class="nx"&gt;e3435ca7&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kustomization_resource&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;p1&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"_/ServiceAccount/ingress-nginx/ingress-nginx"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creation&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;f009bbb7&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="nx"&gt;d2e&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="nx"&gt;f28&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;a826&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;ce133c91cc15&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kustomization_resource&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;p2&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"admissionregistration.k8s.io/ValidatingWebhookConfiguration/_/ingress-nginx-admission"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creating&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kustomization_resource&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;p2&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"admissionregistration.k8s.io/ValidatingWebhookConfiguration/_/ingress-nginx-admission"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creation&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3185&lt;/span&gt;&lt;span class="nx"&gt;b09f&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;f67&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;4079&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;b44f&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;de01bff44bd2&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="nx"&gt;Apply&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt;&lt;span class="err"&gt;!&lt;/span&gt; &lt;span class="nx"&gt;Resources&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;19&lt;/span&gt; &lt;span class="nx"&gt;added&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;changed&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;destroyed&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
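&lt;p&gt;The ordering works roughly like this under the hood (a simplified sketch following the provider's documented usage, not the module's actual source; the overlay path is hypothetical): the &lt;code&gt;kustomization_build&lt;/code&gt; data source returns resource IDs grouped into three priority sets via &lt;code&gt;ids_prio&lt;/code&gt;, and &lt;code&gt;depends_on&lt;/code&gt; chains the three resource groups.&lt;/p&gt;

```hcl
data "kustomization_build" "this" {
  # hypothetical path to a Kustomize overlay
  path = "overlays/example"
}

# p0: namespaces and CRDs are applied first
resource "kustomization_resource" "p0" {
  for_each = data.kustomization_build.this.ids_prio[0]
  manifest = data.kustomization_build.this.manifests[each.value]
}

# p1: all regular resources, only once p0 exists
resource "kustomization_resource" "p1" {
  for_each   = data.kustomization_build.this.ids_prio[1]
  manifest   = data.kustomization_build.this.manifests[each.value]
  depends_on = [kustomization_resource.p0]
}

# p2: webhooks last, so they only receive traffic once their backends exist
resource "kustomization_resource" "p2" {
  for_each   = data.kustomization_build.this.ids_prio[2]
  manifest   = data.kustomization_build.this.manifests[each.value]
  depends_on = [kustomization_resource.p1]
}
```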



&lt;p&gt;Naturally, this also means that if you increase the replica count like this...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;replicas&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx-controller"&lt;/span&gt;
  &lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
&lt;span class="p"&gt;}]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;...the terraform plan shows which Kubernetes resources will change and what the diff is.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;Terraform&lt;/span&gt; &lt;span class="nx"&gt;will&lt;/span&gt; &lt;span class="nx"&gt;perform&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;following&lt;/span&gt; &lt;span class="nx"&gt;actions&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;

  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["apps/Deployment/ingress-nginx/ingress-nginx-controller"] will be updated in-place&lt;/span&gt;
  &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"kustomization_resource"&lt;/span&gt; &lt;span class="s2"&gt;"p1"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;id&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"81e8ff18-6c6c-440d-bd8b-bf5f0d016953"&lt;/span&gt;
      &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;manifest&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jsonencode&lt;/span&gt;&lt;span class="err"&gt;(&lt;/span&gt;
          &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;spec&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                  &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;replicas&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="err"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
                    &lt;span class="c1"&gt;# (4 unchanged elements hidden)&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="c1"&gt;# (3 unchanged elements hidden)&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="err"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;Plan&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;add&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;change&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;destroy&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Maybe even more importantly, the Kustomization provider also correctly shows whether a resource can be changed with an in-place update, or whether a destroy-and-recreate is required, for example because an immutable field changed.&lt;/p&gt;

&lt;p&gt;This is the result of two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;As you've just seen, every Kubernetes resource is tracked individually in Terraform state, and&lt;/li&gt;
&lt;li&gt;the Kustomization provider uses Kubernetes' server-side dry-runs to determine the diff for each resource.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Based on the result of that dry-run, the provider instructs Terraform to plan either an in-place update or a destroy-and-recreate.&lt;/p&gt;
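&lt;p&gt;If you want to see the underlying mechanism in isolation, the same server-side dry-run is available directly through &lt;code&gt;kubectl&lt;/code&gt; (the file name here is a placeholder):&lt;/p&gt;

```shell
# Ask the API server to validate and merge the manifest without persisting it.
# For a change to an immutable field like spec.selector, the dry-run fails,
# which signals that an in-place update is impossible.
kubectl apply --dry-run=server -f deployment.yaml
```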

&lt;p&gt;So, as an example of such a change, imagine you need to change &lt;code&gt;spec.selector.matchLabels&lt;/code&gt;. Since &lt;code&gt;matchLabels&lt;/code&gt; is an immutable field, the plan states that the Deployment resource must be replaced, and the plan summary shows 1 to add and 1 to destroy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;Terraform&lt;/span&gt; &lt;span class="nx"&gt;will&lt;/span&gt; &lt;span class="nx"&gt;perform&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;following&lt;/span&gt; &lt;span class="nx"&gt;actions&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;

  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["apps/Deployment/ingress-nginx/ingress-nginx-controller"] must be replaced&lt;/span&gt;
&lt;span class="err"&gt;-/+&lt;/span&gt; &lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"kustomization_resource"&lt;/span&gt; &lt;span class="s2"&gt;"p1"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"81e8ff18-6c6c-440d-bd8b-bf5f0d016953"&lt;/span&gt; &lt;span class="err"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
      &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;manifest&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jsonencode&lt;/span&gt;&lt;span class="err"&gt;(&lt;/span&gt;
          &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;metadata&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                  &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;labels&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;selector&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"example"&lt;/span&gt;
                        &lt;span class="c1"&gt;# (6 unchanged elements hidden)&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;
                    &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx-controller"&lt;/span&gt;
                    &lt;span class="c1"&gt;# (2 unchanged elements hidden)&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
              &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;spec&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                  &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;replicas&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="err"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
                  &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;selector&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                      &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;matchLabels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;selector&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"example"&lt;/span&gt;
                            &lt;span class="c1"&gt;# (4 unchanged elements hidden)&lt;/span&gt;
                        &lt;span class="p"&gt;}&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;
                  &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;template&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                      &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;metadata&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                          &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;labels&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                              &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;selector&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"example"&lt;/span&gt;
                                &lt;span class="c1"&gt;# (4 unchanged elements hidden)&lt;/span&gt;
                            &lt;span class="p"&gt;}&lt;/span&gt;
                            &lt;span class="c1"&gt;# (1 unchanged element hidden)&lt;/span&gt;
                        &lt;span class="p"&gt;}&lt;/span&gt;
                        &lt;span class="c1"&gt;# (1 unchanged element hidden)&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;
                    &lt;span class="c1"&gt;# (2 unchanged elements hidden)&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="c1"&gt;# (2 unchanged elements hidden)&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="c1"&gt;# forces replacement&lt;/span&gt;
        &lt;span class="err"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;Plan&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;add&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;change&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;destroy&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Try it yourself
&lt;/h2&gt;

&lt;p&gt;You can find the &lt;a href="https://github.com/kbst/terraform-helm-vs-kustomize"&gt;source code&lt;/a&gt; for the comparison on GitHub if you want to experiment with the differences yourself.&lt;/p&gt;

&lt;p&gt;If you want to try the Kustomize modules yourself, you can either use one of the modules from the catalog that bundle upstream YAML, like the &lt;a href="https://www.kubestack.com/catalog/prometheus"&gt;Prometheus operator&lt;/a&gt;, &lt;a href="https://www.kubestack.com/catalog/cert-manager"&gt;Cert-Manager&lt;/a&gt;, &lt;a href="https://www.kubestack.com/catalog/sealed-secrets"&gt;Sealed secrets&lt;/a&gt;, or &lt;a href="https://www.kubestack.com/catalog/tektoncd"&gt;Tekton&lt;/a&gt;, for example.&lt;/p&gt;

&lt;p&gt;But this doesn't only work for upstream services. There is also a module, called the &lt;a href="https://www.kubestack.com/framework/documentation/cluster-service-modules#custom-manifests"&gt;custom manifest module&lt;/a&gt;, that provisions any Kubernetes YAML in the exact same way as the catalog modules.&lt;/p&gt;
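&lt;p&gt;As a rough sketch, wiring your own YAML through the custom manifest module looks much like using a catalog module. The module name, provider alias, and file path below are placeholders you would adapt to your repository:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "example_custom_manifests" {
  providers = {
    kustomization = kustomization.example
  }

  source  = "kbst.xyz/catalog/custom-manifests/kustomization"
  version = "0.1.0"

  configuration = {
    apps = {
      # any plain Kubernetes YAML files you want provisioned
      resources = [
        "${path.root}/manifests/example.yaml"
      ]
    }
    ops = {}
    loc = {}
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;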

&lt;h2&gt;
  
  
  Get involved
&lt;/h2&gt;

&lt;p&gt;Currently, the number of services available from the catalog is still limited.&lt;/p&gt;

&lt;p&gt;If you want to get involved, you can also find the &lt;a href="https://github.com/kbst/catalog"&gt;catalog source on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@garri?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Vladislav Babienko&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/options?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>kubernetes</category>
      <category>platform</category>
      <category>devops</category>
    </item>
    <item>
      <title>Deploying Prometheus Operator via the Kubestack Catalog</title>
      <dc:creator>Josh Maxwell</dc:creator>
      <pubDate>Wed, 15 Dec 2021 13:33:17 +0000</pubDate>
      <link>https://forem.com/kubestack/deploying-prometheus-operator-via-the-kubestack-catalog-4dp</link>
      <guid>https://forem.com/kubestack/deploying-prometheus-operator-via-the-kubestack-catalog-4dp</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This walkthrough assumes you have already followed parts &lt;a href="https://www.kubestack.com/framework/documentation/tutorial-develop-locally"&gt;one&lt;/a&gt;, &lt;a href="https://www.kubestack.com/framework/documentation/tutorial-provision-infrastructure"&gt;two&lt;/a&gt;, and &lt;a href="https://www.kubestack.com/framework/documentation/tutorial-setup-automation"&gt;three&lt;/a&gt; of the official Kubestack tutorial and at least have a local development cluster running via &lt;code&gt;kbst local apply&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In this walkthrough we will explore using the Kubestack Catalog to install a &lt;a href="https://www.kubestack.com/catalog/prometheus"&gt;Prometheus Operator&lt;/a&gt; and collect metrics from an example Go application.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Major Disclaimer:&lt;/strong&gt;&lt;br&gt;
In the interest of time and reducing technical complexity, there is a very strong anti-pattern present in this walkthrough.&lt;/p&gt;

&lt;p&gt;Best practice dictates that infrastructure and application manifests be stored in separate repositories so they can be worked on and deployed independently.&lt;/p&gt;

&lt;p&gt;In this walkthrough, both the Go application and the Kubestack infrastructure manifests we create will be stored in the same repository for simplicity's sake while deploying to the local development environment (see the Conclusion for an explanation of how this would look following best practices).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  1 - Configure Local Development Environment
&lt;/h2&gt;

&lt;p&gt;Before we install the Prometheus Operator, we need a few additional tools in our local development environment to more easily verify that our configuration is working.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.1 - Install Go Locally
&lt;/h3&gt;

&lt;p&gt;Follow the instructions at &lt;a href="https://go.dev/doc/install"&gt;https://go.dev/doc/install&lt;/a&gt; to install Go for your local development environment. We need this to build our example application as well as to install another tool below.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.2 - Install and Configure &lt;code&gt;kubectl&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;kubectl&lt;/code&gt; will be used primarily for two reasons. First, to verify which resources are deployed in our k8s cluster. Second, to forward ports and access resources inside the k8s cluster from our local development environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(&amp;lt;kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point if &lt;code&gt;kubectl&lt;/code&gt; is successfully installed you should see output similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:41:28Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're having issues installing &lt;code&gt;kubectl&lt;/code&gt;, please refer to the &lt;a href="https://kubernetes.io/docs/tasks/tools/"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.3 - Install and Configure &lt;code&gt;kind&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;kind&lt;/code&gt; (Kubernetes in Docker) will be used to export the cluster configuration file that &lt;code&gt;kubectl&lt;/code&gt; needs to access the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GO111MODULE="on" go get sigs.k8s.io/kind@v0.11.1
kind version
kind get clusters
kind export kubeconfig --name &amp;lt;CLUSTER_NAME&amp;gt;
kubectl get namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point if &lt;code&gt;kind&lt;/code&gt; has been installed correctly and the &lt;code&gt;kubectl config&lt;/code&gt; has been exported successfully, you should see something similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                  STATUS   AGE
default               Active   6h27m
ingress-nginx         Active   6h26m
kube-node-lease       Active   6h27m
kube-public           Active   6h27m
kube-system           Active   6h27m
local-path-storage    Active   6h27m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're having issues installing &lt;code&gt;kind&lt;/code&gt;, please refer to the &lt;a href="https://kind.sigs.k8s.io/docs/user/quick-start/"&gt;official documentation&lt;/a&gt;. Otherwise, congrats! You're ready to move on to the next part of the tutorial.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;NOTE: &lt;code&gt;kind&lt;/code&gt; is only needed for local development environments since that is how Kubestack deploys your environment locally. If you want to follow the rest of the tutorial using your cloud environment instead, you will need to download the &lt;code&gt;kubectl config&lt;/code&gt; file from that cluster and import it locally so &lt;code&gt;kubectl&lt;/code&gt; can access that cluster instead. Here are some resources about exporting that configuration file: &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html"&gt;EKS&lt;/a&gt;, &lt;a href="https://docs.microsoft.com/en-us/azure/aks/control-kubeconfig-access"&gt;AKS&lt;/a&gt;, &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl"&gt;GKE&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  2 - Deploy an Example Go Application
&lt;/h2&gt;

&lt;p&gt;Now we're going to create an example application to emit some metrics for us to collect with Prometheus.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.1 - Create a Go Application
&lt;/h3&gt;

&lt;p&gt;Create a new GitHub repository to host your Example Go Application.&lt;/p&gt;

&lt;p&gt;Create a new Go Module in the repository with &lt;code&gt;go mod init github.com/&amp;lt;account&amp;gt;/&amp;lt;repo&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Create a main.go file in the repository with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "net/http"
    "time"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func recordMetrics() {
    go func() {
        for {
            opsProcessed.Inc()
            time.Sleep(2 * time.Second)
        }
    }()
}

var (
    opsProcessed = promauto.NewCounter(prometheus.CounterOpts{
        Name: "app_go_prom_processed_ops_total",
        Help: "The total number of processed events",
    })
)

func main() {
    println("Starting app-go-prom on port :2112")

    recordMetrics()

    http.Handle("/metrics", promhttp.Handler())
    http.ListenAndServe(":2112", nil)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With main.go created, run &lt;code&gt;go mod tidy&lt;/code&gt; to create &lt;code&gt;go.mod&lt;/code&gt; and &lt;code&gt;go.sum&lt;/code&gt; (these files should be committed to the repo).&lt;/p&gt;
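&lt;p&gt;Before containerizing, you can optionally sanity-check the application locally. The commands below assume Go is on your PATH and port 2112 is free:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# run the app in one terminal
go run main.go

# in another terminal, request the metrics endpoint
curl localhost:2112/metrics | grep app_go_prom_processed_ops_total
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The counter should increase by roughly one every two seconds, matching the &lt;code&gt;time.Sleep&lt;/code&gt; in &lt;code&gt;recordMetrics&lt;/code&gt;.&lt;/p&gt;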

&lt;h3&gt;
  
  
  2.2 - Create a Dockerfile that Runs Go Application
&lt;/h3&gt;

&lt;p&gt;Create a Dockerfile in the repository with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM golang:1.17-alpine

WORKDIR /app

COPY go.mod ./
COPY go.sum ./
RUN go mod download

COPY *.go ./
RUN go build -o /app-go-prom

EXPOSE 2112

CMD [ "/app-go-prom" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2.3 - Create a GitHub Action Pipeline to Build and Publish Docker Image
&lt;/h3&gt;

&lt;p&gt;Create a .github/workflows/docker-publish.yml file in the repository with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Docker Publish

on:
  push:
    branches: [ main ]
    tags: [ 'v*' ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: ghcr.io
  # github.repository as &amp;lt;account&amp;gt;/&amp;lt;repo&amp;gt;
  IMAGE_NAME: ${{ github.repository }}


jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      # https://github.com/docker/login-action
      - name: Log into registry ${{ env.REGISTRY }}
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v1
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # https://github.com/docker/metadata-action
      - name: Extract Docker metadata
        id: meta
        uses: docker/metadata-action@v3
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      # https://github.com/docker/build-push-action
      - name: Build and push Docker image
        uses: docker/build-push-action@v2
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will build and publish a Docker image to the GitHub Container Registry associated with the repository you created. This image can be used inside your k8s cluster.&lt;/p&gt;

&lt;p&gt;To trigger a build of an image tagged &lt;code&gt;latest&lt;/code&gt; (rather than with the current branch name), you will need to tag the commit similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout main
git pull
git tag v0.1.1
git push origin v0.1.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the pipeline completes, you should be able to verify it by pulling the image with &lt;code&gt;docker pull ghcr.io/&amp;lt;account&amp;gt;/&amp;lt;repo&amp;gt;:latest&lt;/code&gt;.&lt;/p&gt;
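&lt;p&gt;If you want to go one step further, you could also run the image locally before deploying it to the cluster. This assumes Docker is installed and that the package is public, or that you are logged in to ghcr.io:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# run the container, mapping the port exposed in the Dockerfile
docker run --rm -p 2112:2112 ghcr.io/&amp;lt;account&amp;gt;/&amp;lt;repo&amp;gt;:latest

# in another terminal, check the metrics endpoint
curl localhost:2112/metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;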

&lt;h3&gt;
  
  
  2.4 - Create an Application Manifest for Kubestack to Deploy in the Cluster
&lt;/h3&gt;

&lt;p&gt;This requires two files to be created in your Kubestack IAC repository:&lt;/p&gt;

&lt;p&gt;eks_zero_applications.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "application_custom_manifests" {
  providers = {
    kustomization = kustomization.eks_zero
  }

  source  = "kbst.xyz/catalog/custom-manifests/kustomization"
  version = "0.1.0"

  configuration = {
    apps = {

      resources = [
        "${path.root}/manifests/applications/app-go-prom.yaml"
      ]

      common_labels = {
        "env" = terraform.workspace
      }
    }
    ops = {}
    loc = {}
  }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;manifests/applications/app-go-prom.yaml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment 
metadata:
  name: app-go-prom
  namespace: default
  labels:
    app: app-go-prom
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-go-prom
  template:
    metadata:
      labels:
        app: app-go-prom
    spec:
      containers:
        - image: ghcr.io/&amp;lt;account&amp;gt;/&amp;lt;repo&amp;gt;:latest
          name: app-go-prom
          ports:
            - containerPort: 2112

---

apiVersion: v1
kind: Service
metadata:
  name: app-go-prom-svc
  namespace: default
  labels:
    app: app-go-prom
spec:
  selector:
    app: app-go-prom
  ports:
    - name: metrics
      port: 2112
      targetPort: 2112
      protocol: TCP
  type: LoadBalancer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With these files created, you may need to Ctrl+C out of the &lt;code&gt;kbst local apply&lt;/code&gt; and run it again, since the watch may not pick up changes in custom manifest files.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.5 - Verify that the Go Application is Running and Emitting Metrics
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Note about this section:&lt;br&gt;
If you ever destroy and re-apply the local cluster you will need to run the &lt;code&gt;kind export kubeconfig --name &amp;lt;CLUSTER_NAME&amp;gt;&lt;/code&gt; command from above again to get a fresh &lt;code&gt;kubectl config&lt;/code&gt; in order for &lt;code&gt;kubectl&lt;/code&gt; to work. The cluster name will probably be the same, but if you need to find it you can always run &lt;code&gt;kind get clusters&lt;/code&gt; to figure it out.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once &lt;code&gt;kbst&lt;/code&gt; has finished applying the changes, let's verify that the pod is running and that it is emitting metrics as we expect.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output similar to the following if everything is working correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                            READY   STATUS    RESTARTS   AGE
app-go-prom-6f9576879d-hvdr9    1/1     Running   0          32h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If STATUS is not "Running" there is an error. You can use &lt;code&gt;kubectl logs &amp;lt;go-app-pod-name&amp;gt;&lt;/code&gt; to check the pod logs and fix any errors.&lt;/p&gt;

&lt;p&gt;Once your Go application pod is running, let's forward the port from the associated service to our localhost and check for metrics:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get services
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the service name in hand run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward service/app-go-prom-svc 2112
curl localhost:2112/metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see a bunch of metrics spit out at this point, including the one we created in our Go application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# HELP app_go_prom_processed_ops_total The total number of processed events
# TYPE app_go_prom_processed_ops_total counter
app_go_prom_processed_ops_total 25058
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you run the curl command a few more times you should see our metric increasing steadily as we expect.&lt;/p&gt;

&lt;p&gt;You can stop the port forward and move on to the next section now.&lt;/p&gt;

&lt;h2&gt;
  
  
  3 - Install Prometheus Operator via the Kubestack Catalog
&lt;/h2&gt;

&lt;p&gt;Now we are going to install the Prometheus Operator &lt;a href="https://www.kubestack.com/catalog/prometheus"&gt;following the instructions&lt;/a&gt; in the Kubestack Catalog.&lt;/p&gt;

&lt;p&gt;This consists of 3 steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Adding the Prometheus Operator module to the cluster&lt;/li&gt;
&lt;li&gt;Configuring read-only access policies to monitoring targets&lt;/li&gt;
&lt;li&gt;Specifying which target services to monitor&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  3.1 - Adding the Prometheus Operator module to the cluster
&lt;/h3&gt;

&lt;p&gt;Create an eks_zero_services.tf file in the root of the repo with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "eks_zero_prometheus" {
  providers = {
    kustomization = kustomization.eks_zero
  }

  source  = "kbst.xyz/catalog/prometheus/kustomization"
  version = "0.51.1-kbst.0"

  configuration = {
    apps = {
      additional_resources = [
        "${path.root}/manifests/services/prometheus-default-instance.yaml",
        "${path.root}/manifests/services/prometheus-service-monitors.yaml"
      ]
    }
    ops = {}
    loc = {}
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file adds the &lt;code&gt;eks_zero_prometheus&lt;/code&gt; module using the &lt;code&gt;eks_zero&lt;/code&gt; kustomization provider. If you have customized the name of your provider, make sure to update it here as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.2 - Create a Default Instance with permissions to monitor targets
&lt;/h3&gt;

&lt;p&gt;Now we will create the first file referenced as &lt;code&gt;additional_resources&lt;/code&gt; above.&lt;/p&gt;

&lt;p&gt;Create manifests/services/prometheus-default-instance.yaml with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: default-instance
  namespace: default
  labels:
    prometheus: default-instance
spec:
  serviceAccountName: prometheus-default-instance
  serviceMonitorSelector:
    matchLabels:
      prometheus-instance: default-instance
  resources:
    requests:
      memory: 2Gi

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-default-instance
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-instance
subjects:
- kind: ServiceAccount
  name: prometheus-default-instance
  namespace: default

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-default-instance
  namespace: default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file contains 3 key components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Prometheus default-instance&lt;/li&gt;
&lt;li&gt;The RoleBinding permissions&lt;/li&gt;
&lt;li&gt;The Service Account&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The default-instance is our Prometheus server that collects the metrics and serves the Prometheus UI.&lt;br&gt;
The other components are used to grant the needed permissions to read the metrics we'll specify in the next section.&lt;/p&gt;
&lt;h3&gt;
  
  
  3.3 - Specify which targets to monitor
&lt;/h3&gt;

&lt;p&gt;Finally, we will create a ServiceMonitor that ties everything together.&lt;/p&gt;

&lt;p&gt;Create the manifests/services/prometheus-service-monitors.yaml file with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: app-go-prom-monitor
  namespace: default
  labels:
    prometheus-instance: default-instance
spec:
  selector:
    matchLabels:
      app: app-go-prom
  endpoints:
  - port: metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are 3 important pieces here to connect everything:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;metadata.labels&lt;/code&gt; here needs to exactly match the &lt;code&gt;spec.serviceMonitorSelector.matchLabels&lt;/code&gt; from the default-instance&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;spec.selector.matchLabels&lt;/code&gt; here needs to exactly match the &lt;code&gt;metadata.labels&lt;/code&gt; of the Deployment from the Go application manifest&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;spec.endpoints.port&lt;/code&gt; needs to match the &lt;code&gt;spec.ports.name&lt;/code&gt; of the Service from the Go application manifest&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once all those pieces are in place, you can once again run &lt;code&gt;kbst local apply&lt;/code&gt; to pick up the new manifests.&lt;/p&gt;
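&lt;p&gt;To recap, the files added to the Kubestack repository over the course of this walkthrough look along these lines, using the paths from the snippets above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── eks_zero_applications.tf
├── eks_zero_services.tf
└── manifests
    ├── applications
    │   └── app-go-prom.yaml
    └── services
        ├── prometheus-default-instance.yaml
        └── prometheus-service-monitors.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;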

&lt;h3&gt;
  
  
  3.4 - Verify all Prometheus components are Running
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Note about this section:&lt;br&gt;
If you ever destroy and re-apply the local cluster you will need to run the &lt;code&gt;kind export kubeconfig --name &amp;lt;CLUSTER_NAME&amp;gt;&lt;/code&gt; command from above again to get a fresh &lt;code&gt;kubectl config&lt;/code&gt; in order for &lt;code&gt;kubectl&lt;/code&gt; to work. The cluster name will probably be the same, but if you need to find it you can always run &lt;code&gt;kind get clusters&lt;/code&gt; to figure it out.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Similar to how we verified our Go application was working let's check our Prometheus Operator.&lt;/p&gt;

&lt;p&gt;Once &lt;code&gt;kbst&lt;/code&gt; has finished applying the changes, let's verify that the Prometheus Operator and supporting components have been successfully deployed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get all --namespace operator-prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                       READY   STATUS    RESTARTS   AGE
pod/prometheus-operator-775545dc6b-qffng   1/1     Running   0          40h

NAME                          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/prometheus-operator   ClusterIP   None         &amp;lt;none&amp;gt;        8080/TCP   40h

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/prometheus-operator   1/1     1            1           40h

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/prometheus-operator-775545dc6b   1         1         1       40h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's check that the default-instance pod is running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see there is now a &lt;code&gt;prometheus-default-instance-0&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                            READY   STATUS    RESTARTS   AGE
app-go-prom-6f9576879d-hvdr9    1/1     Running   0          33h
prometheus-default-instance-0   2/2     Running   0          33h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If STATUS is not "Running" there is an error. You can use &lt;code&gt;kubectl logs prometheus-default-instance-0&lt;/code&gt; to check the pod logs and fix any errors.&lt;/p&gt;

&lt;p&gt;Lastly, let's verify that our ServiceMonitor was created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get ServiceMonitors
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see something similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                  AGE
app-go-prom-monitor   33h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you don't have any errors, proceed to the final section and let's verify that Prometheus is correctly collecting our metrics.&lt;/p&gt;

&lt;h2&gt;
  
  
  4 - View the Metrics in the Prometheus UI
&lt;/h2&gt;

&lt;p&gt;Now that we've got everything deployed and Running, let's take one final step to verify that everything is working.&lt;/p&gt;

&lt;p&gt;Like we did before with our Go application, we now need to forward the Prometheus port to access the UI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward prometheus-default-instance-0 9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now from your local development environment open a web browser and navigate to &lt;code&gt;localhost:9090&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If everything has been successful to this point you should be greeted with the Prometheus dashboard.&lt;/p&gt;

&lt;p&gt;Enter the name of the metric we created in our Go application, &lt;code&gt;app_go_prom_processed_ops_total&lt;/code&gt;, into the search box and click "Execute".&lt;br&gt;
You will see the metric metadata and count displayed below the search box, similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app_go_prom_processed_ops_total{container="app-go-prom", endpoint="metrics", instance="10.244.1.4:2112", job="app-go-prom-svc", namespace="default", pod="app-go-prom-6f9576879d-hvdr9", service="app-go-prom-svc"}  474
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
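&lt;p&gt;Counters like this one are usually most useful as rates rather than raw totals. As an optional aside, once you see the raw count you could also try a rate query in the same search box. The 5 minute range here is an assumption that should comfortably cover the default scrape interval:&lt;/p&gt;

```
# Raw counter, as shown above
app_go_prom_processed_ops_total

# Per-second rate of processed operations over the last 5 minutes
rate(app_go_prom_processed_ops_total[5m])
```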



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations, you have successfully deployed the Prometheus Operator, created an example service emitting metrics, and configured everything to collect those metrics. That is the backbone you'll need for visibility into the metrics of your new cluster.&lt;/p&gt;

&lt;p&gt;From here you could extend your metrics infrastructure by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;adding additional applications or services and their associated ServiceMonitors,&lt;/li&gt;
&lt;li&gt;adding a Grafana deployment to create dashboards for your metrics,&lt;/li&gt;
&lt;li&gt;configuring an external Prometheus instance to collect all your metrics,&lt;/li&gt;
&lt;li&gt;and more.&lt;/li&gt;
&lt;/ul&gt;
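&lt;p&gt;As a sketch of the first of those ideas, a ServiceMonitor for a second, hypothetical service could look like the following. The service name &lt;code&gt;app-two&lt;/code&gt; and its &lt;code&gt;metrics&lt;/code&gt; port are assumptions for illustration, not names from this walkthrough; the selector labels and port name must match your service's definition:&lt;/p&gt;

```shell
# Write a ServiceMonitor manifest for a hypothetical second service.
# Names and labels below are illustrative assumptions.
cat > app-two-servicemonitor.yaml <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: app-two-monitor
spec:
  selector:
    matchLabels:
      app: app-two
  endpoints:
  - port: metrics
EOF

# Then apply it to the cluster, same as before:
# kubectl apply -f app-two-servicemonitor.yaml
```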

&lt;p&gt;If you'd like to go beyond metrics, a good next step is to browse the &lt;a href="https://www.kubestack.com/catalog"&gt;Kubestack Catalog&lt;/a&gt; and install additional helpful services into your cluster.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Initial Disclaimer Explained:&lt;/p&gt;

&lt;p&gt;As mentioned in the beginning, we introduced a very strong anti-pattern in this walkthrough by placing our application and infrastructure manifests in the same repository.&lt;/p&gt;

&lt;p&gt;This should &lt;strong&gt;NEVER&lt;/strong&gt; be done when deploying to your real cloud infrastructure. Instead, for micro-service architectures such as this, each application would have its own code repo (in whatever language is appropriate). There would also be one additional "deployment repository" containing the Kubernetes manifests of all the applications. A service such as ArgoCD or Flux would then be configured to monitor the deployment repository and deploy changes to the Kubernetes cluster as needed when the applications are updated.&lt;/p&gt;

&lt;p&gt;The Prometheus Operator should be deployed as part of the Kubestack infrastructure. The Prometheus Instance and ServiceMonitor (both explained in more detail above) should be deployed alongside each application. The only exception would be if you plan to have a single Instance monitor all your services; in that case it can be deployed as part of the Kubestack infrastructure.&lt;/p&gt;

&lt;p&gt;For more information you can refer to the official documentation regarding &lt;a href="https://www.kubestack.com/framework/documentation/gitops-process#infrastructure-environments"&gt;infrastructure environments&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>gitops</category>
      <category>kubernetes</category>
      <category>devrel</category>
      <category>terraform</category>
    </item>
    <item>
      <title>What Terraform can learn from PHP</title>
      <dc:creator>Philipp Strube</dc:creator>
      <pubDate>Mon, 08 Feb 2021 10:30:47 +0000</pubDate>
      <link>https://forem.com/kubestack/what-terraform-can-learn-from-php-4e65</link>
      <guid>https://forem.com/kubestack/what-terraform-can-learn-from-php-4e65</guid>
      <description>&lt;p&gt;TL;DR:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Writing infrastructure as code shows many of the same challenges as writing code for application development, because many of these challenges are not language or use-case specific.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; and its surrounding ecosystem are still evolving and share many similarities with early PHP and the web. Just like PHP evolved by learning from other language ecosystems, Terraform can as well.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use-case specific frameworks are a major driver of innovation, improved developer experience and productivity on the application development side. But they are not yet an established part of the infrastructure as code ecosystem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The paradigm shift to containers and &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; made use-case specific frameworks possible for infrastructure as code by providing a powerful abstraction between application and infrastructure layer. And the cloud native community is evolving rapidly, extending this abstraction to additional use-cases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Organizations that adopted application development frameworks for their improved developer experience and productivity can leverage the same benefits for automating Kubernetes by using an &lt;a href="https://www.kubestack.com/"&gt;infrastructure as code framework&lt;/a&gt;, and avoid leaving the cluster the weakest link in their GitOps automation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Learning from other language ecosystems
&lt;/h2&gt;

&lt;p&gt;PHP’s ease of getting started is widely quoted as the boon and bane of the language. It seems as if making fun of the spaghetti code bases of the early PHP days never gets old. Even in 2021. But there is no doubt that PHP is an extremely successful programming language. &lt;/p&gt;

&lt;p&gt;You may ask, what does this have to do with Terraform? Well, hear me out. Terraform and PHP have more in common than you may think. PHP was created when the web was in its infancy and quickly became extremely popular. Don’t forget, PHP is the P in LAMP stack. Similarly, infrastructure as code is still an emerging ecosystem today, and Terraform is by far the most popular language in this ecosystem.&lt;/p&gt;

&lt;p&gt;But the modern PHP of today is vastly different from the early PHP we all like to make fun of. And since Terraform today is so similar to where PHP was when it started, there’s a good chance that the Terraform community can learn a lot from how PHP evolved.&lt;/p&gt;

&lt;p&gt;Rasmus Lerdorf, the creator of PHP, is famously &lt;a href="https://en.wikipedia.org/wiki/PHP#cite_note-itconversations-21"&gt;quoted&lt;/a&gt; as never having intended to write a programming language. But PHP got popular and they had to keep going. In addition, the web and its request-response model were new, even to experienced developers. But the endless possibilities of the web got people excited, and the unintentional programming language PHP was easy to get started with. This combination led to the stereotypical poor quality code bases that ended up powering major parts of the early web.&lt;/p&gt;

&lt;p&gt;Similarly, infrastructure as code offers huge benefits and gets people excited as well. But it also requires both operations and coding experience, and people coming from either one background have to learn a lot about the respective other, before they can be fully productive.&lt;/p&gt;

&lt;p&gt;Languages like Python, released a few years before PHP, or Ruby and Java, which were released in the same year as PHP, were intentionally designed programming languages for professional use. While not specific to the web, it is of course possible to build web applications in any of them. So the self-evident thing was to use these more mature and consistent languages to build web applications, and have more easily maintainable code bases as a result.&lt;/p&gt;

&lt;p&gt;And not only were the languages more mature, but so were their ecosystems. The majority of challenges developers face when writing code are not language specific. And many are not even use-case specific. You may need different dependencies for building a web application instead of a desktop application, for example. But in both cases having dependency management is greatly useful. That is a feature Python, Ruby and Java all already had.&lt;/p&gt;

&lt;p&gt;This led to the creation of frameworks like &lt;a href="https://www.djangoproject.com/"&gt;Django&lt;/a&gt;, &lt;a href="https://rubyonrails.org/"&gt;Ruby on Rails&lt;/a&gt; or &lt;a href="https://spring.io/"&gt;Spring&lt;/a&gt; that made it easy to build web applications in Python, Ruby or Java respectively, leveraging their existing language ecosystems.&lt;/p&gt;

&lt;p&gt;A great idea that works in one ecosystem, however, is quick to inspire similar development in other languages. And PHP’s wide adoption easily justified major investments to improve the PHP core as well as the surrounding ecosystem. All those teams looking for the best way to maintain their growing PHP code bases were smart to look at other languages and how the same challenges were solved there.&lt;/p&gt;

&lt;p&gt;The result was frameworks like &lt;a href="https://symfony.com/"&gt;Symfony&lt;/a&gt; or &lt;a href="https://cakephp.org/"&gt;CakePHP&lt;/a&gt;, heavily inspired by Spring and Rails respectively. This is also how Composer brought modern dependency management to PHP. And last but not least, this was when the PHP community adopted Git for version control and slowly moved away from editing production files directly via FTP.&lt;/p&gt;

&lt;h2&gt;
  
  
  It’s all about the code
&lt;/h2&gt;

&lt;p&gt;Let's get back to infrastructure as code. Yes, in a lot of ways automating infrastructure is different from application development. But many of the challenges of writing code that apply across languages and use-cases on the software development side also apply to infrastructure as code. Code is kind of the keyword here.&lt;/p&gt;

&lt;p&gt;So just like PHP learned from other languages, their frameworks and their tooling, Terraform can only benefit from doing so as well.&lt;/p&gt;

&lt;p&gt;One area where Hashicorp, the makers of Terraform, recently made major improvements is dependency management. Terraform has had the ability to download required providers for quite some time. But it was limited to only Hashicorp’s own providers. Community maintained providers required involved, manual installation. A recent Terraform release introduced support for registry namespaces, which means community providers can now also be installed from the official registry. In addition, required providers and versions can now be &lt;a href="https://www.terraform.io/upgrade-guides/0-13.html#explicit-provider-source-locations"&gt;specified more explicitly&lt;/a&gt;. This even includes the ability to vendor providers, thereby hardening automation runs against failing when the registry is unavailable.&lt;/p&gt;
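&lt;p&gt;As a minimal sketch, pinning a community provider from its registry namespace looks like this. The provider shown, &lt;code&gt;kbst/kustomization&lt;/code&gt;, is Kubestack’s own; the version constraint is an illustrative assumption, not a recommendation:&lt;/p&gt;

```shell
# Declare required providers explicitly, including a community provider
# from its registry namespace. Version constraint is an assumption.
cat > versions.tf <<'EOF'
terraform {
  required_providers {
    kustomization = {
      source  = "kbst/kustomization"
      version = "~> 0.9"
    }
  }
}
EOF
```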

&lt;h2&gt;
  
  
  The missing piece
&lt;/h2&gt;

&lt;p&gt;All the language ecosystems we discussed share one key piece that heavily improves the developer experience, but which isn’t a thing yet in the infrastructure as code world. I’m referring to frameworks of course. And concretely use-case specific frameworks. By being use-case specific, the aforementioned software development frameworks drastically reduce upfront and maintenance effort, and provide the best developer experience and workflow possible.&lt;/p&gt;

&lt;p&gt;If I’m building a cloud native application in Java, using Spring Boot will make my life much easier. Likewise, if my goal is to build a Jamstack website, a framework like &lt;a href="https://www.gatsbyjs.com/"&gt;Gatsby&lt;/a&gt; will get me there much faster.&lt;/p&gt;

&lt;p&gt;But the reason why frameworks are not a thing in the infrastructure as code world yet is not merely that the ecosystem is still evolving. For frameworks to be useful, we also needed a strong abstraction layer that keeps the infrastructure layer free of application specific requirements. Containers and Kubernetes are extremely popular because they provide this very abstraction. And this means two things: first, that using Terraform to manage Kubernetes is a popular and very specific use-case for an infrastructure as code framework. And second, that because of the powerful abstraction, such a framework makes sense for the first time.&lt;/p&gt;

&lt;p&gt;Kubestack is this use-case specific, &lt;a href="https://www.kubestack.com/"&gt;Terraform GitOps framework&lt;/a&gt;. If you’re building GitOps automation for Kubernetes cluster infrastructure and cluster services using Terraform, Kubestack may be the framework for you. Think of Kubestack as the Ruby on Rails of infrastructure automation, the Gatsby of GitOps, or the Spring Boot of Terraform and Kubernetes.&lt;/p&gt;

&lt;p&gt;And just like application frameworks copied ideas that worked well from one language to another, Kubestack does the same from application development to infrastructure as code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Talent borrows, genius steals
&lt;/h2&gt;

&lt;p&gt;One example is Kubestack’s convention over configuration based repository layout. Another is its inheritance based configuration to prevent drift between environments. A third is the ability to easily vendor dependencies in the repository, like the Nginx ingress controller or the Prometheus monitoring operator. And, last but not least, there are local development environments that automatically update as you make changes to the code.&lt;/p&gt;

&lt;p&gt;Slow feedback loops are poison for developer productivity. And infrastructure as code is notorious for mandatory, slow pipeline runs. This makes the local development environment the perfect example of how Kubestack drastically improves the developer experience, because it’s a use-case specific framework.&lt;/p&gt;

&lt;p&gt;The strong abstraction between the application and infrastructure layers is a key mantra of what we know as cloud native. And if you take a look at recent developments from the cloud native community the direction is clear. As more and more organizations shift their workloads and use-cases to cloud native, we continue to see new innovation and iterative improvements that extend this powerful abstraction.&lt;/p&gt;

&lt;p&gt;This is both positive for the future of infrastructure as code and Terraform as well as for use-case specific infrastructure as code frameworks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform loves cloud native
&lt;/h2&gt;

&lt;p&gt;Systems that provide a separation between declaring desired state and current state are the current state-of-the-art. This is a core principle of Kubernetes and high-level managed cloud services, but also of VM auto-scaling groups, as a lower level example of this principle. On the surface there’s an API to declare the desired state. And behind the API are control loops that keep the current state in sync with the desired state.&lt;/p&gt;

&lt;p&gt;Terraform shines when combined with such a system, because it is great at planning and applying changes triggered by a commit in a repository. It can also be run periodically, to detect drift and either alert or overwrite. But when operating distributed systems, there are various failure scenarios where continuously running controllers, which can take immediate action based on more events than just code changes, are clearly superior. The important thing to understand here is that Terraform is great at giving teams a way to reason about proposed changes and at keeping the committed state and desired state in sync. But keeping desired and current state in sync is, in most cases, better left to a continuously running control loop.&lt;/p&gt;

&lt;p&gt;It’s common for teams to hit this limitation when using infrastructure as code to automate legacy systems that don’t provide this separation of concerns. This frequently leads to automation that only partially manages the lifecycle and creates complex coordination problems between automation and manual operations. Hitting this significantly limits the value of infrastructure as code, and many teams justifiably hold back on adopting Terraform for this very reason.&lt;/p&gt;

&lt;p&gt;But Kubernetes or managed cloud services are not the only systems that rely on declared desired state and reconciliation loops to keep current state in sync. An example doing this for infrastructure automation outside the cloud provider’s walled gardens is &lt;a href="https://cluster-api.sigs.k8s.io/#why-build-cluster-api"&gt;ClusterAPI&lt;/a&gt;. This cloud native community initiative aims to provide the same separation across on-premise and cloud. And through integration into vSphere, ClusterAPI is readily available to VMware’s vast installed base.&lt;/p&gt;

&lt;h2&gt;
  
  
  The future of infrastructure is code
&lt;/h2&gt;

&lt;p&gt;As an industry, we’re clearly heading in one direction. And as we continue to adopt this paradigm, the limitations that held infrastructure as code back when working with legacy systems no longer apply. As infrastructure as code becomes more viable for more organizations, more teams can benefit from use-case specific frameworks to get the best possible developer experience and productivity.&lt;/p&gt;

&lt;p&gt;Many teams are already using Terraform successfully. Yes, there are edge cases to consider and there is a steep learning curve, no matter if your background is in operations or software development. But as the cloud native ecosystem continues to evolve, the benefits of infrastructure as code will become applicable to more teams and more use-cases, and just like PHP grew by learning from other language ecosystems, Terraform will too.&lt;/p&gt;

&lt;p&gt;As far as Kubernetes is concerned, if you’re already adopting GitOps, the Kubestack framework is an opportunity to implement &lt;a href="https://www.kubestack.com/"&gt;full-stack GitOps&lt;/a&gt; that covers both the cluster infrastructure and cluster services and not just the application workloads on the cluster. This way, you can avoid having the foundation of your system, the cluster, be the weakest link by not managing it manually via UI.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>devops</category>
      <category>programming</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Localhost EKS development environments with EKS-D and Kubestack</title>
      <dc:creator>Philipp Strube</dc:creator>
      <pubDate>Tue, 01 Dec 2020 19:04:32 +0000</pubDate>
      <link>https://forem.com/kubestack/localhost-eks-development-environments-with-eks-d-and-kubestack-4p6</link>
      <guid>https://forem.com/kubestack/localhost-eks-development-environments-with-eks-d-and-kubestack-4p6</guid>
<description>&lt;p&gt;Today Amazon announced EKS Distro, or EKS-D for short: a Kubernetes distribution that makes the same release artifacts used by Amazon EKS available to everyone.&lt;/p&gt;

&lt;p&gt;This allows teams to use the exact same bits and pieces that power EKS, to build clusters for anything from integration tests to on-premise use-cases. As a launch partner, I got access to EKS-D in advance to integrate it into Kubestack’s local development environments.&lt;/p&gt;

&lt;p&gt;Kubestack is about providing the best &lt;a href="https://www.kubestack.com/" rel="noopener noreferrer"&gt;GitOps developer experience for Terraform and Kubernetes&lt;/a&gt;, from local development, all the way to production.&lt;/p&gt;

&lt;p&gt;Because I believe platform engineers automating Kubernetes deserve the same great developer experience that application engineers building applications on top of Kubernetes already have.&lt;/p&gt;

&lt;p&gt;To achieve this, the Kubestack framework integrates all the moving pieces from Terraform providers, to resources, and modules into a GitOps workflow ready for day-2 operations.&lt;/p&gt;

&lt;p&gt;On top of the reliable automation to propose, validate and promote infrastructure changes, Kubestack is focused on giving platform teams a modern developer experience to iterate quickly using local development environments.&lt;/p&gt;

&lt;p&gt;Now, whenever Kubestack simulates an EKS cluster locally, it uses EKS-D to do so. Let’s take a look at how Kubestack’s infrastructure automation from local development to production works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Local Development
&lt;/h2&gt;

&lt;p&gt;Imagine you’re tasked with provisioning the Prometheus operator to deploy a Prometheus instance and configuring it to scrape the metrics from your team’s application for each environment.&lt;/p&gt;

&lt;p&gt;If you’re like me, it may take a few iterations to get the label and namespace selectors in the Prometheus resource just right and to configure the RBAC for the Prometheus instance’s service account. RBAC especially is notorious for taking a bit of trial and error to get right.&lt;/p&gt;
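&lt;p&gt;To make the RBAC part concrete, here is a minimal sketch of the read permissions a Prometheus instance’s service account typically needs. The resource list and names are common baseline assumptions for illustration, not the exact manifests from the Kubestack catalog:&lt;/p&gt;

```shell
# Write a minimal ClusterRole sketch for a Prometheus service account.
# The exact resources depend on what the instance scrapes; this list
# is an illustrative assumption.
cat > prometheus-rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
EOF

# A ClusterRoleBinding would then bind this role to the instance's
# service account before applying both to the local cluster.
```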

&lt;p&gt;Using Kubestack’s local development environment, you can iterate on the exact same manifests that will later be used in production. The local development environment automatically updates as you make changes, and provides immediate feedback, without waiting minutes for CI/CD pipeline runs every time. It’s just like in the infamous &lt;a href="https://xkcd.com/303/" rel="noopener noreferrer"&gt;XKCD comic&lt;/a&gt;, except it’s applying, not compiling.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimgs.xkcd.com%2Fcomics%2Fcompiling.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimgs.xkcd.com%2Fcomics%2Fcompiling.png" alt="applying, not compiling, but you get the idea"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All you have to do to get started on this task is change into your checkout of the infrastructure repository and run one &lt;code&gt;kbst&lt;/code&gt; CLI command. Then you’re all set to work on the Prometheus manifests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kbst local apply
...
Switched to workspace "loc".
...
Apply complete! Resources: 14 added, 0 changed, 0 destroyed.
2020/11/18 12:55:53 #### Watching for changes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The new EKS-D integration means the local environment is now even closer to the EKS production environment. And fewer differences between environments reduce the risk that promoting a change fails. This is also why Kubestack uses inheritance between environments. Differences are sometimes necessary, but configuration inheritance makes them explicit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Environment Promotion
&lt;/h2&gt;

&lt;p&gt;Eventually, I’ll have the monitoring setup working locally and it’s time to push my changes and ask for a peer-review. This is the first step where the &lt;a href="https://www.kubestack.com/framework/documentation/gitops-process#making-changes" rel="noopener noreferrer"&gt;Kubestack GitOps workflow&lt;/a&gt; kicks in.&lt;/p&gt;

&lt;p&gt;There are two things to review to decide if you want to apply this change. Your code changes of course, and the &lt;code&gt;terraform plan&lt;/code&gt; provided by Kubestack’s pipeline for every branch.&lt;/p&gt;

&lt;p&gt;If the reviewers require changes, you can push additional commits to the branch and the pipeline will run &lt;code&gt;terraform plan&lt;/code&gt; again. Once your team has approved, merge the change into master.&lt;/p&gt;

&lt;p&gt;This triggers the pipeline and applies the merged changes to the ops environment. A &lt;code&gt;terraform plan&lt;/code&gt; is not enough to ensure that the changes will apply correctly. That’s why Kubestack uses the ops environment, to validate the configuration change against real cloud infrastructure. The ops environment does not run applications, so that teams can feel confident to merge infrastructure changes at any time, without worrying about blocking team members or breaking applications.&lt;/p&gt;

&lt;p&gt;Finally, if the change to ops applied successfully, the pipeline will additionally provide a &lt;code&gt;terraform plan&lt;/code&gt; to show the required changes for the apps environment. The additional plan helps teams decide if they want to promote this change into the apps environment now.&lt;/p&gt;

&lt;p&gt;Having a reliable workflow is crucial for teams to trust their automation. Kubestack, by combining purpose-built Terraform modules with its proven triggers, helps teams build infrastructure automation that is ready for day-2 operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Started
&lt;/h2&gt;

&lt;p&gt;If you’ve made it this far and want to learn more, you can get started with the &lt;a href="https://www.kubestack.com/framework/documentation/tutorial-get-started" rel="noopener noreferrer"&gt;Kubestack framework by following the tutorial&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>kubernetes</category>
      <category>gitops</category>
    </item>
  </channel>
</rss>
