<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Roberth Strand</title>
    <description>The latest articles on Forem by Roberth Strand (@roberthstrand).</description>
    <link>https://forem.com/roberthstrand</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F476247%2F97cd9f59-8456-4044-984a-d2b523c4ed70.jpeg</url>
      <title>Forem: Roberth Strand</title>
      <link>https://forem.com/roberthstrand</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/roberthstrand"/>
    <language>en</language>
    <item>
      <title>Automate your Terraform using GitOps with Flux</title>
      <dc:creator>Roberth Strand</dc:creator>
      <pubDate>Thu, 29 Dec 2022 23:00:00 +0000</pubDate>
      <link>https://forem.com/roberthstrand/automate-your-terraform-using-gitops-with-flux-3233</link>
      <guid>https://forem.com/roberthstrand/automate-your-terraform-using-gitops-with-flux-3233</guid>
      <description>&lt;p&gt;GitOps as a workflow is perfect for application delivery, mostly used in Kubernetes environments, but it is also possible to use for infrastructure. In a typical GitOps scenario, you might want to look at solutions like Crossplane as a Kubernetes-native alternative, while most traditional infrastructure are still used with CI/CD pipelines. There are several benefits of creating your deployment platform with Kubernetes as the base, but it also means that more people would have to have that particular skill set. One of the benefits of an Infrastructure-as-Code tool like Terraform is that it is easy to learn, and doesn’t require much specialized knowledge.&lt;/p&gt;

&lt;p&gt;When building our platform services, we wanted everyone to be able to contribute. Most, if not all, of our engineers use Terraform on a daily basis, and know how to create Terraform modules that can be used in several scenarios and for several customers. While there are several ways of automating Terraform, we would like to utilize a proper GitOps workflow as much as possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does the Terraform controller work?
&lt;/h2&gt;

&lt;p&gt;While searching for alternatives for running Terraform using Kubernetes, I found several controllers and operators, but none that I felt had as much potential as the &lt;a href="https://github.com/weaveworks/tf-controller/"&gt;tf-controller from Weaveworks&lt;/a&gt;. We are already using Flux as our GitOps tool, and the tf-controller works by utilizing some of the core functionality from Flux, adding a custom resource for Terraform deployments. The source controller takes care of fetching our modules, the kustomize controller applies the Terraform resources, and the tf-controller spins up static pods (called runners) that run your Terraform commands.&lt;/p&gt;

&lt;p&gt;The Terraform resource looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infra.contrib.fluxcd.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;helloworld&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flux-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1m&lt;/span&gt;
  &lt;span class="na"&gt;approvePlan&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;auto&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./terraform/module&lt;/span&gt;
  &lt;span class="na"&gt;sourceRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GitRepository&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;helloworld&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flux-system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are a few things to note in the spec here. The &lt;code&gt;interval&lt;/code&gt; controls how often the controller starts up the runner pods, which then perform &lt;code&gt;terraform plan&lt;/code&gt; on your root module, defined by the &lt;code&gt;path&lt;/code&gt; parameter.&lt;/p&gt;

&lt;p&gt;We also see that this particular resource is set to automatically approve any plans, which means that if there is a difference between the plan and the current state of the target system, a new runner applies the changes automatically. This keeps the process as close to pure GitOps as possible, but you can disable it. If you do, you have to approve plans manually, either by using the Terraform Controller CLI or by updating your manifest with a reference to the commit that should be applied. For more details, see the &lt;a href="https://docs.gitops.weave.works/docs/terraform/Using%20Terraform%20CRD/provision/#manually-apply-resources"&gt;documentation&lt;/a&gt; on manual approval.&lt;/p&gt;
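&lt;p&gt;As a sketch of what manual approval could look like, you would leave &lt;code&gt;approvePlan&lt;/code&gt; empty and later set it to the plan you want applied (the plan reference below is a made-up placeholder; the real value follows the pattern described in the documentation):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  interval: 1m
  # an empty value disables auto-approval; runners only plan
  approvePlan: ""
  # to approve a pending plan, reference its branch and short commit, e.g.:
  # approvePlan: plan-main-b8e362c206
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;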

&lt;p&gt;As mentioned earlier, the tf-controller utilizes the source controller from Flux. The &lt;code&gt;sourceRef&lt;/code&gt; attribute defines which source resource we want to use, just like in a Flux Kustomization resource.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advanced deployments
&lt;/h2&gt;

&lt;p&gt;While the example above works, it’s not the type of deployment we would normally do. Without a backend defined, the state is stored inside the cluster, which is fine for testing and development, but in production we prefer that the state file is stored outside the cluster. We don’t want the backend defined in the root module directly, since we want to reuse our root modules across several deployments, so we define it in our Terraform resource instead.&lt;/p&gt;

&lt;p&gt;Here is an example of how we set up a custom backend configuration. You can find all available backends in the &lt;a href="https://developer.hashicorp.com/terraform/language/settings/backends/configuration"&gt;Terraform docs&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infra.contrib.fluxcd.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;helloworld&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flux-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;backendConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;customConfiguration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;backend "azurerm" {&lt;/span&gt;
          &lt;span class="s"&gt;resource_group_name  = "rg-terraform-mgmt"&lt;/span&gt;
          &lt;span class="s"&gt;storage_account_name = "stgextfstate"&lt;/span&gt;
          &lt;span class="s"&gt;container_name       = "tfstate"&lt;/span&gt;
          &lt;span class="s"&gt;key                  = "helloworld.tfstate"&lt;/span&gt;
        &lt;span class="s"&gt;}&lt;/span&gt;
  &lt;span class="s"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For us, storing the state file outside the cluster means that we can redeploy our cluster without any storage dependency. There is no need for backups or state migration; as soon as the new cluster is up, it runs the commands against the same state, and we are back in business.&lt;/p&gt;

&lt;p&gt;Another advanced move is dependencies between modules. Sometimes we design deployments like a two-stage rocket, where one deployment sets up resources that the next one uses. In these scenarios, we need to write our Terraform so that the first module outputs any data needed as input for the second, and make sure the first module has a successful run before the second starts.&lt;/p&gt;

&lt;p&gt;These two examples are from code used while demonstrating dependencies, and all the code can be found on my &lt;a href="https://github.com/roberthstrand/gitops-terraform/tree/main"&gt;GitHub&lt;/a&gt;. Some of the resources are omitted for brevity’s sake.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infra.contrib.fluxcd.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared-resources&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flux-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s"&gt;...&lt;/span&gt;
  &lt;span class="s"&gt;writeOutputsToSecret&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared-resources-output&lt;/span&gt;
  &lt;span class="s"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infra.contrib.fluxcd.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;workload01&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flux-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s"&gt;...&lt;/span&gt;
  &lt;span class="s"&gt;dependsOn&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared-resources&lt;/span&gt;
    &lt;span class="s"&gt;...&lt;/span&gt;
  &lt;span class="na"&gt;varsFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared-resources-output&lt;/span&gt;
  &lt;span class="s"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the deployment that I call &lt;em&gt;shared-resources&lt;/em&gt;, I define a secret where the outputs from the deployment are stored. In this case, the outputs are the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"subnet_id"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_virtual_network&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;base&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;subnet&lt;/span&gt;&lt;span class="p"&gt;.*.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"resource_group_name"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;base&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, in the &lt;em&gt;workload01&lt;/em&gt; deployment, we first define our dependency with the &lt;code&gt;dependsOn&lt;/code&gt; attribute, which makes sure that &lt;em&gt;shared-resources&lt;/em&gt; has had a successful run before &lt;em&gt;workload01&lt;/em&gt; is scheduled. The outputs from &lt;em&gt;shared-resources&lt;/em&gt; are then used as inputs in &lt;em&gt;workload01&lt;/em&gt;, which is why we want it to wait.&lt;/p&gt;
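&lt;p&gt;For this to work, the root module behind &lt;em&gt;workload01&lt;/em&gt; has to declare variables whose names match the output keys written to the secret. A minimal sketch, assuming the two outputs shown above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# inputs populated from the shared-resources-output secret via varsFrom
variable "subnet_id" {
  type = string
}

variable "resource_group_name" {
  type = string
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;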

&lt;h2&gt;
  
  
  Why the controller pattern and not pipelines or Terraform Cloud?
&lt;/h2&gt;

&lt;p&gt;The most common approach to automating Terraform is either CI/CD pipelines or Terraform Cloud. Using pipelines for Terraform works fine, but usually ends with us copying pipeline definitions over and over again. There are solutions to that, but the tf-controller gives us a much more declarative approach: we define what we want our deployments to look like, rather than defining the steps in an imperative fashion.&lt;/p&gt;

&lt;p&gt;Terraform Cloud has introduced a lot of features that overlap with the GitOps workflow, but using the tf-controller does not exclude you from using Terraform Cloud. You could use Terraform Cloud as the backend for your deployment, and only automate the runs through the tf-controller.&lt;/p&gt;
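&lt;p&gt;As a sketch, that setup could reuse the same &lt;code&gt;customConfiguration&lt;/code&gt; mechanism shown earlier, pointing at a Terraform Cloud workspace instead of a storage account (the organization and workspace names here are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  backendConfig:
    customConfiguration: |
      backend "remote" {
        organization = "example-org"
        workspaces {
          name = "helloworld"
        }
      }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;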

&lt;p&gt;The reason we use this approach is that we already deploy applications using GitOps, and it gives us much more flexibility in how we can offer these capabilities as a service. We can control our implementation through APIs, making self-service more accessible to both our operators and end users. The details of our platform approach are such a big topic that we will return to them in their own blog post.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resources
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Terraform Controller: &lt;a href="https://github.com/weaveworks/tf-controller"&gt;GitHub&lt;/a&gt;, &lt;a href="https://docs.gitops.weave.works/docs/terraform/get-started/"&gt;Documentation&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/roberthstrand/gitops-terraform/tree/main"&gt;Example deployments&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://youtu.be/MlsbC9v8fxY"&gt;YouTube&lt;/a&gt;, How to achieve (actual) GitOps with Terraform and Kubernetes - Cloud Native and Kubernetes Oslo Meetup&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>gitops</category>
      <category>platformengineering</category>
      <category>terraform</category>
    </item>
    <item>
      <title>2022 was a great year for GitOps</title>
      <dc:creator>Roberth Strand</dc:creator>
      <pubDate>Tue, 20 Dec 2022 11:28:58 +0000</pubDate>
      <link>https://forem.com/roberthstrand/2022-was-a-great-year-for-gitops-59a</link>
      <guid>https://forem.com/roberthstrand/2022-was-a-great-year-for-gitops-59a</guid>
      <description>&lt;p&gt;Adoption of GitOps is still going strong, and this year we have seen two of the most used GitOps project graduate, and several &lt;a href="https://opengitops.dev/events"&gt;GitOps events&lt;/a&gt; throughout the year. On a personal note, one of my highlights was becoming a &lt;a href="https://github.com/open-gitops/project/pull/112"&gt;maintainer&lt;/a&gt; on the OpenGitOps project.&lt;/p&gt;

&lt;p&gt;For us, GitOps is a vital part of how we operate, and it is the magic sauce that fuels our platform offering. Not only do we use it for application deployments, but by utilizing the Weaveworks &lt;a href="https://github.com/weaveworks/tf-controller/"&gt;tf-controller&lt;/a&gt;, we can create services using Terraform to automate our infrastructure deployments.&lt;/p&gt;

&lt;p&gt;If you want to catch up on the current state of GitOps, there are many hours of videos from events held this year.&lt;/p&gt;

&lt;p&gt;Notable YouTube playlists:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=NPKcpGpx1HQ&amp;amp;list=PLj6h78yzYM2PVniTC7pKpHx1KsYjsOJnJ"&gt;GitOpsCon North America 2022&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=LQgsxT3SlN8&amp;amp;list=PLj6h78yzYM2PTHsP7RhbRYBT_TDJz5x3M"&gt;GitOpsCon Europe 2022&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=qCDWsIcFU-A&amp;amp;list=PL9lTuCFNLaD0NVkR17tno4X6BkxsbZZfr"&gt;GitOps Days 2022&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/playlist?list=PLj6h78yzYM2MbKazKesjAx4jq56pnz1XE"&gt;ArgoCon 2022&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Graduating projects
&lt;/h2&gt;

&lt;p&gt;We are excited that both the Flux and Argo projects have been promoted to the graduated tier in the Cloud Native Computing Foundation (CNCF), a massive achievement for GitOps!&lt;/p&gt;

&lt;p&gt;New projects come in as sandbox projects, before getting accepted as incubating projects and finally becoming graduated projects. To move between these tiers, there are several criteria to ensure that a project adheres to the foundation’s standards, is vendor neutral, and has a healthy number of contributors.&lt;/p&gt;

&lt;h3&gt;
  
  
  Graduation announcements:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.cncf.io/announcements/2022/11/30/flux-graduates-from-cncf-incubator/"&gt;Flux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cncf.io/announcements/2022/12/06/the-cloud-native-computing-foundation-announces-argo-has-graduated/"&gt;Argo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Related links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.techtarget.com/searchitoperations/news/252528152/GitOps-hits-stride-as-CNCF-graduates-Flux-CD-and-Argo-CD"&gt;Tech Target - GitOps hits stride (Featuring quote from Roberth Strand)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.weave.works/press/releases/weaveworks-gitops-project-flux-graduates-in-the-cncf/"&gt;Weaveworks press release (Featuring quote from Roberth Strand)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Where to get started
&lt;/h2&gt;

&lt;p&gt;If you are wondering how to get started, or what a proper GitOps workflow is, we will be writing more about this in the near future. If you want to get help with GitOps or platform engineering in general, you are welcome to &lt;a href="https://www.amestofortytwo.com/contact-us"&gt;contact us&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>gitops</category>
      <category>platformengineering</category>
    </item>
    <item>
      <title>AKS HTTP Application Routing issues with newer Kubernetes Ingress</title>
      <dc:creator>Roberth Strand</dc:creator>
      <pubDate>Mon, 20 Sep 2021 20:11:45 +0000</pubDate>
      <link>https://forem.com/roberthstrand/aks-http-application-routing-issues-with-newer-kubernetes-ingress-3a39</link>
      <guid>https://forem.com/roberthstrand/aks-http-application-routing-issues-with-newer-kubernetes-ingress-3a39</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FYiCXqU0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1528485238486-507af7c0aa19%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DMnwxMTc3M3wwfDF8c2VhcmNofDI2fHxjbG91ZCUyMGNvbmZ1fGVufDB8fHx8MTYzMjE2ODYwMg%26ixlib%3Drb-1.2.1%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FYiCXqU0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1528485238486-507af7c0aa19%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DMnwxMTc3M3wwfDF8c2VhcmNofDI2fHxjbG91ZCUyMGNvbmZ1fGVufDB8fHx8MTYzMjE2ODYwMg%26ixlib%3Drb-1.2.1%26q%3D80%26w%3D2000" alt="AKS HTTP Application Routing issues with newer Kubernetes Ingress" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was working with a client that used the HTTP Application Routing addon for AKS, which basically just creates a DNS zone with a fancy generated domain and an NGINX Ingress Controller. It is obviously not what you want to use for production workloads, but it's great if you're creating test deployments and just want an easy Ingress that is completely automatic.&lt;/p&gt;

&lt;p&gt;Well, it would have been great if we had gotten it to work. I had never used this addon before, but reading through the docs there were no special steps needed, yet for some reason it was not working.&lt;/p&gt;

&lt;p&gt;It turns out the Service Account used to update some of the internal components did not have enough access. I tried both Azure RBAC and normal Kubernetes RBAC, but it still did not work. When you create a new Ingress, it should be updated with the external IP pretty quickly, but that never happened:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gc1vGK-H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.robstr.dev/content/images/2021/09/image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gc1vGK-H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.robstr.dev/content/images/2021/09/image.png" alt="AKS HTTP Application Routing issues with newer Kubernetes Ingress" width="800" height="32"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, I had a couple of Ingresses that had existed for almost an hour with no address set. The class was also empty, but that was set through annotations.&lt;/p&gt;
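&lt;p&gt;For reference, the check itself is just listing the Ingress resources across all namespaces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get ingress --all-namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;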

&lt;p&gt;When I checked the logs for the Ingress Controller, I saw these messages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;W0920 17:43:32.171988 7 status.go:288] error updating ingress rule: ingresses.networking.k8s.io "cm-acme-http-solver-99lwz" is forbidden: User "system:serviceaccount:kube-system:addon-http-application-routing-nginx-ingress-serviceaccount" cannot update resource "ingresses/status" in API group "networking.k8s.io" in the namespace "ris-pullrequest1628": Azure does not have opinion for this user.
W0920 17:43:32.172332 7 status.go:288] error updating ingress rule: ingresses.networking.k8s.io "cm-acme-http-solver-slr4j" is forbidden: User "system:serviceaccount:kube-system:addon-http-application-routing-nginx-ingress-serviceaccount" cannot update resource "ingresses/status" in API group "networking.k8s.io" in the namespace "ris-dev": Azure does not have opinion for this user.
W0920 17:43:32.366825 7 status.go:288] error updating ingress rule: ingresses.networking.k8s.io "cm-acme-http-solver-5ffk2" is forbidden: User "system:serviceaccount:kube-system:addon-http-application-routing-nginx-ingress-serviceaccount" cannot update resource "ingresses/status" in API group "networking.k8s.io" in the namespace "rstest01": Azure does not have opinion for this user.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That led me to the ClusterRole created by Azure for this addon:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LFj63dCz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.robstr.dev/content/images/2021/09/image-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LFj63dCz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.robstr.dev/content/images/2021/09/image-2.png" alt="AKS HTTP Application Routing issues with newer Kubernetes Ingress" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We see that we have the right to update the status of Ingresses, but only through the deprecated API group &lt;em&gt;extensions/v1beta1&lt;/em&gt;, which is removed in version 1.22. This is a problem when using cert-manager, as newer versions create their ACME solver Ingress resources using the stable API, so even if we updated our code to use the old API group, we still wouldn't get it to work.&lt;/p&gt;

&lt;p&gt;After reading up on the Addon Manager, I saw that there was a label called &lt;em&gt;addonmanager.kubernetes.io/mode=Reconcile&lt;/em&gt;. I could remove it, but that would alter the reconcile process, and in my experience that usually just ends in unexpected behavior. So what I tried next was creating my own ClusterRole and ClusterRoleBinding that I could add the service account to.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: addon-http-app-routing-fix
rules:
- apiGroups:
  - "networking.k8s.io"
  resources: 
  - "ingresses/status"
  verbs:
  - "update"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: addon-http-app-routing-fix-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: addon-http-app-routing-fix
subjects:
  - kind: ServiceAccount
    name: addon-http-application-routing-nginx-ingress-serviceaccount
    namespace: kube-system
---
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This worked pretty well, but one problem remained: the DNS zone didn't get updated based on Ingresses like it should. When I checked the rights for the service account used for this, it too was referencing the old API group. So I updated the rules to allow the role to read Ingress objects, and bound the role to that service account as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: addon-http-app-routing-fix
rules:
- apiGroups:
  - "networking.k8s.io"
  resources: 
  - "ingresses/status"
  verbs:
  - "update"
- apiGroups:
  - "networking.k8s.io"
  resources:
    - "ingresses"
  verbs:
    - "get"
    - "watch"
    - "list"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: addon-http-app-routing-fix-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: addon-http-app-routing-fix
subjects:
  - kind: ServiceAccount
    name: addon-http-application-routing-nginx-ingress-serviceaccount
    namespace: kube-system
  - kind: ServiceAccount
    name: addon-http-application-routing-external-dns
    namespace: kube-system
---
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that solves it. Obviously, as we're closing in on the removal of the old beta APIs, this should have been fixed by now, but apparently not.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aks</category>
      <category>azure</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Securing Azure Kubernetes Service</title>
      <dc:creator>Roberth Strand</dc:creator>
      <pubDate>Sun, 22 Aug 2021 12:54:41 +0000</pubDate>
      <link>https://forem.com/roberthstrand/securing-azure-kubernetes-service-1p1p</link>
      <guid>https://forem.com/roberthstrand/securing-azure-kubernetes-service-1p1p</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fYg85R0D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1610122299048-8ea105316c89%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DMnwxMTc3M3wwfDF8c2VhcmNofDEyfHxzZWN1cmV8ZW58MHx8fHwxNjI2MDAwNjA0%26ixlib%3Drb-1.2.1%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fYg85R0D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1610122299048-8ea105316c89%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DMnwxMTc3M3wwfDF8c2VhcmNofDEyfHxzZWN1cmV8ZW58MHx8fHwxNjI2MDAwNjA0%26ixlib%3Drb-1.2.1%26q%3D80%26w%3D2000" alt="Securing Azure Kubernetes Service" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the fallacies people conjure up when they start to use public cloud offerings is that they get a fully secure solution out of the box. People seem to think this about everything from email to infrastructure in Azure. Yes, doing nothing at all with what you get from cloud vendors is probably better than setting up the same on your own and doing nothing about security, but it is still your responsibility to decide how secure you need to be.&lt;/p&gt;

&lt;p&gt;No matter what you run, or where, you need to be conscious of &lt;em&gt;your&lt;/em&gt; security posture.&lt;/p&gt;

&lt;p&gt;When running Kubernetes clusters in Microsoft Azure, you get a lot of freebies out of the box. The images on the node computers are kept up to date, the control plane is not something you have to deal with, there are no etcd backups to manage, and of course you can easily upgrade your entire cluster to a new version of Kubernetes. But other than that, you still have Kubernetes running, and you have to secure it.&lt;/p&gt;

&lt;p&gt;In this post I want to highlight what I think is at least a good starting point for securing AKS, but security is a never-ending project. Remember that the more security you want, the more time, effort, and money goes into it. If you need any more help, feel free to reach out to me on &lt;a href="https://twitter.com/roberthtweets"&gt;Twitter&lt;/a&gt; or &lt;a href="https://linkedin.com/in/roberthstrand"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Azure Policy
&lt;/h3&gt;

&lt;p&gt;This one is so important, and a great "Azure-like" security implementation. There are two built-in policy initiatives (groups of policies), and at the time of writing over 40 different policies for Kubernetes. You can easily enable Azure Policy for AKS at cluster creation, as well as afterwards. Read more about how to use this add-on in the &lt;a href="https://docs.microsoft.com/azure/governance/policy/concepts/policy-for-kubernetes?WT.mc_id=AZ-MVP-5004348"&gt;Microsoft docs&lt;/a&gt;.&lt;/p&gt;
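&lt;p&gt;For example, enabling the add-on on an existing cluster is a single az CLI command (the cluster and resource group names are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az aks enable-addons \
  --addons azure-policy \
  --name myAKSCluster \
  --resource-group myResourceGroup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;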

&lt;p&gt;These policies work by utilizing &lt;em&gt;Open Policy Agent&lt;/em&gt; (OPA): the policies are integrated with the admission controller through a tool called OPA Gatekeeper.&lt;/p&gt;

&lt;p&gt;Going back to the Azure policies, you really should go through and see what is available. I usually suggest starting with the two initiatives: decide which is the best fit for the level of security you are aiming for, then look at the other policies. The two initiatives are called the &lt;em&gt;baseline&lt;/em&gt; and &lt;em&gt;restricted&lt;/em&gt; standards, which obviously reflects the level of security they offer.&lt;/p&gt;

&lt;p&gt;I have gone into a bit more detail on the baseline standard and how to keep your clusters compliant in &lt;a href="https://dev.to/roberthstrand/aks-and-azure-policy-baseline-standards-making-clusters-compliant-59b9"&gt;this blog post&lt;/a&gt;, and another is coming for the restricted one. But the concept is the same for both, just with a couple of extra policies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Restricted access to the Kubernetes API
&lt;/h3&gt;

&lt;p&gt;By default, the Kubernetes API is open to the internet when you create an AKS cluster. In production clusters you should not allow everyone to access the API from everywhere, and the first step is to restrict which IP ranges are allowed.&lt;/p&gt;

&lt;p&gt;You can follow the guide at &lt;a href="https://docs.microsoft.com/azure/aks/api-server-authorized-ip-ranges?WT.mc_id=AZ-MVP-5004348"&gt;docs.microsoft.com&lt;/a&gt; to see how you can do this through the az cli, or set the &lt;a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#api_server_authorized_ip_ranges"&gt;api_server_authorized_ip_ranges&lt;/a&gt; attribute if you are deploying through Terraform.&lt;/p&gt;
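
&lt;p&gt;As a minimal Terraform sketch, where the IP range is a placeholder and the other required cluster arguments are left out:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_kubernetes_cluster" "example" {
  # ... name, location, default_node_pool and so on ...

  # Only allow access to the API server from this range
  api_server_authorized_ip_ranges = ["203.0.113.0/24"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;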

&lt;h3&gt;
  
  
  Integrate RBAC with Azure AD
&lt;/h3&gt;

&lt;p&gt;This is a very important one, and one that I feel should absolutely be the de facto standard when running AKS. Instead of creating and managing users and roles yourself, offload this to Azure AD and make it part of that governance model.&lt;/p&gt;

&lt;p&gt;Integrating with Azure AD is relatively easy, and can be done by following the steps described &lt;em&gt;&lt;a href="https://docs.microsoft.com/azure/aks/managed-aad?WT.mc_id=AZ-MVP-5004348"&gt;here&lt;/a&gt;&lt;/em&gt;. The only thing you really need is a group that can be used to grant cluster admin access, and that's about it. However, be aware that once you create a cluster integrated with Azure AD, or update a cluster to use Azure AD, you cannot go back.&lt;/p&gt;
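
&lt;p&gt;As a sketch, creating a cluster with the Azure AD integration through the CLI looks something like this; the group ID is the Azure AD group that should get cluster admin access, and all values shown are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a cluster with Azure AD integration enabled,
# the group object ID below is a placeholder for your admin group
az aks create --resource-group myResourceGroup --name myManagedCluster --enable-aad --aad-admin-group-object-ids 00000000-0000-0000-0000-000000000000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;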

&lt;h3&gt;
  
  
  Handling secrets
&lt;/h3&gt;

&lt;p&gt;When running applications and services in Kubernetes, we need to create, use and manage secrets. The normal way of doing this is keeping them in etcd through the resource with the imaginative name of &lt;em&gt;secret&lt;/em&gt;. However, these secrets are available for all to see if they have the right permissions, and they are stored as base64-encoded text, which is encoding, not encryption. Obviously, one should make sure that only certain people can read them, but even then it's not a great way of managing secrets.&lt;/p&gt;

&lt;p&gt;This has been a discussion for a long time, and one of the solutions that the Kubernetes community has come up with is using the standard called Container Storage Interface (CSI) with the Secrets Store driver. So instead of mounting secrets in pods as a volume from etcd, we can instead mount secrets from any compliant source. For instance, Microsoft is developing the Azure Key Vault provider for the driver. Read more on how to enable this in your cluster on &lt;a href="https://docs.microsoft.com/en-us/azure/aks/csi-secrets-store-driver?WT.mc_id=AZ-MVP-5004348"&gt;docs.microsoft.com&lt;/a&gt;.&lt;/p&gt;
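
&lt;p&gt;To give an idea of what this looks like, here is a rough sketch of a SecretProviderClass for the Azure Key Vault provider; the vault name, tenant ID and secret name are placeholders, and the exact apiVersion depends on the driver version you install:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-kv-example
spec:
  provider: azure
  parameters:
    keyvaultName: "example-vault"   # placeholder key vault name
    tenantId: "00000000-0000-0000-0000-000000000000"
    objects: |
      array:
        - |
          objectName: example-secret   # placeholder secret name
          objectType: secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;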

&lt;p&gt;This is still in preview, and there are other options out there, but looking at how much the community is behind this way of handling secrets it's hard to imagine that it won't be the norm going forward. The same setup can be used for HashiCorp Vault, which I talk about in a later section.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integrate with Azure Monitor, Log Analytics and Container Insights
&lt;/h3&gt;

&lt;p&gt;Azure comes with some pretty powerful monitoring tools, and the integration for AKS is very good. There is a lot of information that you can access through both metrics and logs, and if you enable container insights for your cluster you can even scrape Prometheus metrics without installing Prometheus. Obviously, you can run your own Prometheus if you want to, but this gives you the option not to.&lt;/p&gt;

&lt;p&gt;Monitoring AKS is a big topic, and for now I just have to urge you to read up on it on &lt;a href="https://docs.microsoft.com/en-us/azure/aks/monitor-aks?WT.mc_id=AZ-MVP-5004348"&gt;docs.microsoft.com&lt;/a&gt;. I will write more on this topic, or even an entire series on it, but for now just know that this is something you need to get very familiar with, and that there are decisions to be made.&lt;/p&gt;

&lt;h2&gt;
  
  
  Third-party tools
&lt;/h2&gt;

&lt;p&gt;There are a lot of security tools that can be run with Kubernetes, and the list is growing. Some you deploy in the cluster, and some interact through pipelines, or even directly through the CLI. It is worth taking a look at what is available in the community, and seeing what people use to make sure that both the runtime environment and the delivery toolchain are secure.&lt;/p&gt;

&lt;p&gt;Even though the tools I mention solve a certain problem, and are great at that, there are several tools doing the same thing, and you might find another that fits your needs better. This short list could go on forever, so I just picked some that I have been working with lately that I think people should know about.&lt;/p&gt;

&lt;h3&gt;
  
  
  kube-bench
&lt;/h3&gt;

&lt;p&gt;The &lt;em&gt;Center for Internet Security&lt;/em&gt; (CIS) does a lot to help secure IT, and one of the things they do is research that they collect into so-called &lt;em&gt;benchmarks&lt;/em&gt;. These benchmarks come in the form of a document with patterns and recommendations, as well as instructions for checking that your cluster adheres to those recommendations. Going through all of this manually can be a long process, so the fine folks at Aqua Security made the tool kube-bench. It is not something that you run continuously; it's more of a way to check that you are still doing things right, based on the CIS benchmarks.&lt;/p&gt;

&lt;p&gt;You can find the tool &lt;a href="https://github.com/aquasecurity/kube-bench"&gt;here&lt;/a&gt;, where they also have a quick start guide. But since we're specifically discussing AKS here, I recommend that you read up on the tool and then check out how to run the tool directly on the worker nodes &lt;a href="https://github.com/aquasecurity/kube-bench/blob/main/docs/running.md#running-in-an-aks-cluster"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kured
&lt;/h3&gt;

&lt;p&gt;The Kubernetes Reboot Daemon (Kured) is a tool created by the fine folks at &lt;a href="https://weave.works"&gt;Weaveworks&lt;/a&gt; to help with a problem we have had for ages: rebooting after certain patches. We want most of our processes automated, and when running AKS we get automatic security patching of our worker nodes. Sometimes, as we all know, these patches require a reboot, and that is something AKS doesn't do for us.&lt;/p&gt;

&lt;p&gt;Kured works by looking for a file that gets created on nodes that need a reboot, and reboots those nodes based on how you want it done. You could make sure that certain nodes don't get rebooted if they have a particular pod on them, that they only reboot at a certain time, and more.&lt;/p&gt;
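
&lt;p&gt;As an example of that kind of configuration, a Helm-based install can be sketched like this; the values shown are the ones I would reach for, but check the chart's documentation for the current names:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Add the chart repository
helm repo add kured https://weaveworks.github.io/kured

# Only reboot during a nightly window, and never while a pod
# matching the blocking selector is running on the node
helm install kured kured/kured --namespace kured --create-namespace --set configuration.startTime=2am --set configuration.endTime=5am --set configuration.blockingPodSelector=app=critical
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;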

&lt;p&gt;Kured can be found on &lt;a href="https://github.com/weaveworks/kured"&gt;GitHub&lt;/a&gt;, and a detailed walkthrough can be found on &lt;a href="https://docs.microsoft.com/en-us/azure/aks/node-updates-kured?WT.mc_id=AZ-MVP-5004348"&gt;docs.microsoft.com&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Falco
&lt;/h3&gt;

&lt;p&gt;This one originally comes from the good people at Sysdig, but has been donated to the CNCF as a project. Falco deals with runtime security, through policies, and is very customizable.&lt;/p&gt;

&lt;p&gt;Falco can be set up to alert when certain policies are triggered, giving you a heads up if something is going on inside of your pods. We have our Azure policies that we have defined, but there are always exploits that can circumvent some of our security policies and this is where Falco can save our butts. Highly recommend reading up on it at &lt;a href="https://falco.org/"&gt;falco.org&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It's worth mentioning that &lt;em&gt;Azure Defender&lt;/em&gt; for AKS also has runtime security, but right now it seems to only work if you use AKS Engine to host your own cluster, or on Azure Arc enabled clusters. For native AKS support, we're waiting for them to create a daemonset for it. See GitHub issue &lt;a href="https://github.com/Azure/AKS/issues/2268"&gt;#2268&lt;/a&gt; for more information and to keep track of changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  HashiCorp Vault
&lt;/h3&gt;

&lt;p&gt;Keeping secrets safe is one of the fundamental tasks for anyone in IT. Most cloud platforms have their own tool for this, but I want to give HashiCorp Vault a shout-out here. Not only does it do what the other tools do, but it has some great features for dynamic secrets, a big compatibility list, and an API-driven design that makes it very accessible at any stage of development and operations.&lt;/p&gt;

&lt;p&gt;Just like Azure Key Vault, HashiCorp Vault can integrate into AKS through the Secrets Store CSI driver. You can read about it &lt;a href="https://www.vaultproject.io/docs/platform/k8s/csi"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>azure</category>
      <category>security</category>
    </item>
    <item>
      <title>AKS and Azure Policy baseline standards - Making clusters compliant</title>
      <dc:creator>Roberth Strand</dc:creator>
      <pubDate>Sun, 25 Jul 2021 19:32:31 +0000</pubDate>
      <link>https://forem.com/roberthstrand/aks-and-azure-policy-baseline-standards-making-clusters-compliant-59b9</link>
      <guid>https://forem.com/roberthstrand/aks-and-azure-policy-baseline-standards-making-clusters-compliant-59b9</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--i_irIDi0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1543996991-8e851c2dc841%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DMnwxMTc3M3wwfDF8c2VhcmNofDEwfHxyb2Fkc3xlbnwwfHx8fDE2MjYyNjE0MTY%26ixlib%3Drb-1.2.1%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i_irIDi0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1543996991-8e851c2dc841%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DMnwxMTc3M3wwfDF8c2VhcmNofDEwfHxyb2Fkc3xlbnwwfHx8fDE2MjYyNjE0MTY%26ixlib%3Drb-1.2.1%26q%3D80%26w%3D2000" alt="AKS and Azure Policy baseline standards - Making clusters compliant" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Azure Policy for AKS has been around for a while now, and is great for that extra control. It uses the Open Policy Agent (OPA) Gatekeeper controller to validate and enforce policies, all hands-off as it's handled through the Azure Policy add-on. As of writing, there are several built-in policies for AKS, but you cannot create your own. As far as I can tell, though, you should have more than enough capabilities to cover most situations, and if you don't, you should probably install and manage OPA policies manually.&lt;/p&gt;

&lt;p&gt;While researching this topic, I took a bunch of notes. I usually do this so that I can refer to them later. I then realised that I should probably have written these as blog posts instead, so that others can easily refer to them as well; that's what I'm rectifying now. From now on, all my notes are going to be in the form of blog posts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes cluster pod security baseline standards for Linux-based workloads
&lt;/h2&gt;

&lt;p&gt;There are a bunch of Kubernetes-related policies in Azure now, but so far there are only two built-in initiatives: the baseline standard, which has 5 policies, and the restricted standard, which has 3 additional ones. Which one do you want? Well, as with everything, that's for you to decide. I usually end up using the baseline one at clients that are new to the technology, all depending on their need for security and control. This post will be a guide through the policies in the baseline initiative; a follow-up is in the making for the three other policies.&lt;/p&gt;

&lt;p&gt;Let's take a detailed look through the policies set by the initiative, then how we can write manifests that are compliant, and how to exclude namespaces if we need to.&lt;/p&gt;

&lt;h2&gt;
  
  
  How deployments should look
&lt;/h2&gt;

&lt;p&gt;To be fully compliant with the security baseline, there is almost nothing you need to do. It's more about what you shouldn't do. Almost everything that you shouldn't do already defaults to the compliant setting in your average deployment, and is something that you don't need to define.&lt;/p&gt;

&lt;p&gt;So if your deployment doesn't explicitly allow the pod to run as privileged, use a hostPath volume, or use the host network, the host process ID, or the host IPC namespace, you're all good.&lt;/p&gt;

&lt;p&gt;Don't know what this means? Down below I summarise all the policies used in this initiative and what they actually mean. If you have something that needs to be allowed, I have added a quick note on that at the end.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes cluster pods should only use approved host network and port range
&lt;/h3&gt;

&lt;p&gt;This one consists of a boolean for whether the host network namespace should be allowed for pods, and a minimum and maximum port to set the range. The default value of &lt;em&gt;allowHostNetwork&lt;/em&gt; is false, which means that no host network and port range is allowed out of the box. This should be fine in most cases, unless you have something that already needs those special privileges, like certain identity managers or monitoring solutions, and if that is the case you would just exclude their namespace.&lt;/p&gt;

&lt;p&gt;This follows the default for a pod deployment; again, &lt;em&gt;hostNetwork&lt;/em&gt; is something that you would have to set explicitly. So, unless you create an application that needs this, you can just leave the deployment with no &lt;em&gt;hostNetwork&lt;/em&gt; set.&lt;/p&gt;
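
&lt;p&gt;In other words, a compliant pod is just a plain one. A sketch of a pod spec that passes this policy, where the names and image are arbitrary:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: compliant-example
spec:
  # No hostNetwork set, so it defaults to false
  # and the pod is compliant with the policy
  containers:
    - name: app
      image: nginx:1.21
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;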

&lt;p&gt;See more details under &lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#podspec-v1-core"&gt;PodSpec&lt;/a&gt; in the API reference.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes cluster containers should not share host process ID or host IPC namespace
&lt;/h3&gt;

&lt;p&gt;Sharing the PID namespace with the host process is probably needed for certain types of applications that work directly against the nodes, but it poses a security risk if allowed for all. Again, both of these are set to false by default in a pod deployment. You would have to define &lt;em&gt;hostPID&lt;/em&gt; and &lt;em&gt;hostIPC&lt;/em&gt; as true, and create an exemption for your application, to get this through the policy.&lt;/p&gt;

&lt;p&gt;See more details under &lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#podspec-v1-core"&gt;PodSpec&lt;/a&gt; in the API reference.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes cluster pod hostPath volumes should only use allowed host paths
&lt;/h3&gt;

&lt;p&gt;The policy used by the initiative is set with an empty list, meaning that no hostPath volumes are allowed. The reason for this is that the hostPath type of volume is risky, because it gives the pod direct access to the host filesystem. There aren't actually any scenarios I can think of where this is needed unless, again, the application is very special and provides some sort of function utilised by the entire platform. If that is the case, it would probably be best to add that application's namespace to the list of exemptions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes cluster containers should only use allowed capabilities
&lt;/h3&gt;

&lt;p&gt;The default setting here is an empty list, which means that no extra capabilities are allowed by default. This is usually not a problem, as it only applies to containers that are set up with extra capabilities like NET_ADMIN. In other words, unless you actually need something extra, you can just not define anything in your deployment and everything will work out.&lt;/p&gt;

&lt;p&gt;If you want to tighten security even more, you could define this particular policy with a list of capabilities to drop as well. This would then have to be reflected in your deployments, obviously. It might be something you need if you are running certain workloads in a very strict environment, but I think it would be way too much upkeep for the average security-conscious Kubernetes operator.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes cluster should not allow privileged containers
&lt;/h3&gt;

&lt;p&gt;By default, containers should not be allowed to run as privileged. This policy ensures that if you have a manifest where containers are allowed to run as privileged, it will get flagged as non-compliant. In the Kubernetes securityContext, privileged defaults to false, so you would have to explicitly set it to true to be non-compliant.&lt;/p&gt;
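
&lt;p&gt;Just to make it concrete, here is a sketch of the non-compliant case; leaving the securityContext out entirely, or setting privileged to false, keeps you compliant:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This container would get flagged as non-compliant,
# because privileged is explicitly set to true
containers:
  - name: app
    image: nginx:1.21
    securityContext:
      privileged: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;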

&lt;p&gt;If you have an application that needs to run as privileged, I would put it in its own namespace and create an exemption.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating exceptions
&lt;/h2&gt;

&lt;p&gt;Sometimes you need to run a workload that requires a bit more access. This is usually monitoring solutions or security tools, for instance the Prometheus node exporter, which needs access to the host network. So how do we deal with the exceptions to these policies?&lt;/p&gt;

&lt;p&gt;For this particular initiative, you have the following options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You could exclude certain namespaces&lt;/li&gt;
&lt;li&gt;You could include only the namespaces you want affected by the policy, creating exceptions through omission&lt;/li&gt;
&lt;li&gt;You could create an exemption for a particular resource (AKS cluster)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are cases where any one of these is usable, but the easiest is probably excluding namespaces. This means that the policy will not apply to pods in the namespaces of your choosing, while anyone deploying to other namespaces will have to be compliant. If you look at the policy, Microsoft has already excluded some namespaces, like kube-system and gatekeeper-system. Just add your namespace to the list, and you're done!&lt;/p&gt;

&lt;p&gt;For even more flexibility, like using labels to select pods you want excluded, you would need to create your own initiative or assign policies one by one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding a namespace exclusion
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x3ODHEMW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.robstr.dev/content/images/2021/07/image-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x3ODHEMW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.robstr.dev/content/images/2021/07/image-2.png" alt="AKS and Azure Policy baseline standards - Making clusters compliant" width="800" height="426"&gt;&lt;/a&gt;In the Policy portal, select assignments to find your active one. Search by name, or scope if needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2aLa_ZdQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.robstr.dev/content/images/2021/07/image-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2aLa_ZdQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.robstr.dev/content/images/2021/07/image-1.png" alt="AKS and Azure Policy baseline standards - Making clusters compliant" width="800" height="175"&gt;&lt;/a&gt;In your assigned initiate, click edit assignment&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C8xidpdx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.robstr.dev/content/images/2021/07/image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C8xidpdx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.robstr.dev/content/images/2021/07/image.png" alt="AKS and Azure Policy baseline standards - Making clusters compliant" width="800" height="258"&gt;&lt;/a&gt;Then add your namespace(s), and save your changes&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>azure</category>
      <category>aks</category>
      <category>container</category>
    </item>
    <item>
      <title>Adding a year worth of sprints in Azure DevOps with PowerShell</title>
      <dc:creator>Roberth Strand</dc:creator>
      <pubDate>Wed, 14 Jul 2021 18:17:32 +0000</pubDate>
      <link>https://forem.com/roberthstrand/adding-a-year-worth-of-sprints-in-azure-devops-with-powershell-4gng</link>
      <guid>https://forem.com/roberthstrand/adding-a-year-worth-of-sprints-in-azure-devops-with-powershell-4gng</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0r6o1etq8x9r8tkeqv4.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0r6o1etq8x9r8tkeqv4.jpeg" alt="Alt Text" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are working with Azure DevOps to keep track of your projects, you probably have to deal with sprints. Depending on the length of your sprints, and the fact that you can't bulk-add sprints, you probably either end up creating the next sprint during sprint planning, or creating every sprint manually.&lt;/p&gt;

&lt;p&gt;Obviously, we hate manual tasks, so this is something we need to fix. Especially since our team is using one-week sprints, and I don't want to do anything 52 times. When you read through the following script, make sure to adjust accordingly to fit your sprint length.&lt;/p&gt;

&lt;p&gt;First, we need the VSTeam module. You can install it from &lt;a href="https://www.powershellgallery.com/packages/VSTeam/"&gt;PowerShellGallery&lt;/a&gt; by running the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Install-Module -Name VSTeam
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, you need to add your personal access token to your session so you can actually add the sprints. You can create a personal access token by clicking the avatar with the cogwheel in the top right corner of Azure DevOps, and choosing the menu option personal access tokens.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Set-VSTeamAccount -Account 'Organization Name' -PersonalAccessToken 'Token'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, we do all the magic stuff. We create a loop, which starts on the first Monday of the year and continues throughout the entire year.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Add the first date that we want the loop to start at.
# For the year of 2021, the first week would start on
# Monday the 4th
$date = Get-Date -Year 2021 -Month 1 -Day 4

while ($date.Year -ne "2022") {
    # Set date for the end of the sprint,
    # in this case a five day work week.
    $EndDate = $date.AddDays(4)
    if ($EndDate.Year -eq "2022") {
        # For the last sprint, so that it ends on the last day
        # of the year, not into next year.
        $EndDate = Get-Date -Year 2021 -Month 12 -Day 31
    }
    # Using Unix format, we can easily get the week number:
    $week = Get-Date $date -UFormat "%V"

    # Putting it all together, and adding the next sprint.
    # Notice that the name is using the week variable
    # in the format that our team uses.
    $Sprint = @{
        Name = "2021-W$week"
        ProjectName = 'ProjectName'
        StartDate = $date
        FinishDate = $EndDate
    }
    Add-VSTeamIteration @Sprint

    # bump the date we are working on with a week
    $date = $date.AddDays(7)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Like I mentioned, this creates the 52 weekly sprints for the year 2021. If you need two-week sprints instead, you could set the end date to add 11 days instead of 4, so that each sprint ends on its second Friday, and then bump the date at the end of the loop by 14 instead of 7.&lt;/p&gt;
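
&lt;p&gt;For reference, a two-week variant of the loop only changes these two lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# End the sprint on its second Friday: Monday + 11 days
$EndDate = $date.AddDays(11)

# ...and at the end of the loop, skip ahead two weeks
$date = $date.AddDays(14)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;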

</description>
      <category>powershell</category>
      <category>devops</category>
    </item>
    <item>
      <title>What a Terraform module should, and should not do</title>
      <dc:creator>Roberth Strand</dc:creator>
      <pubDate>Tue, 18 May 2021 17:12:51 +0000</pubDate>
      <link>https://forem.com/roberthstrand/what-a-terraform-module-should-and-should-not-do-4k69</link>
      <guid>https://forem.com/roberthstrand/what-a-terraform-module-should-and-should-not-do-4k69</guid>
      <description>&lt;p&gt;Terraform, like so many other great languages and systems, use modules to help you categories specific pieces of code so that you can reuse it. Ever since I started working with Terraform, I've heard so many variants of what one should use modules for, and I thought I'd share what I have concluded here. Please note, these are just my opinions and even though I work according to them, there is always room for interpretation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Everything is a module in Terraform, but you shouldn't put everything in a module
&lt;/h2&gt;

&lt;p&gt;Just to clarify, your base Terraform code is considered a module: a root module. This root module keeps track of everything, and this is where you execute Terraform. When you call a module from the root module, that module is called a child module. So what should you move out of the root, and what should you keep?&lt;/p&gt;

&lt;p&gt;I think that a healthy root module has a mix of resources and modules. When working with Azure, you don't need to move resource group creation into a module; resource groups are just a few lines to define. They are also resources that are used by other resources, and should probably be left out of a child module so that their lifecycle is not handled outside of the root module. There are some management patterns that contradict that, but those are edge cases which need to be treated differently.&lt;/p&gt;

&lt;h2&gt;
  
  
  When do we move resources to a child module
&lt;/h2&gt;

&lt;p&gt;I believe that we should keep resources that are coupled in the same module, as long as it simplifies the structure of the root module, and as long as we keep out resources that have their own lifecycle and dependencies. If we use the resource group as an example of what not to bring into a child module, network interfaces and data disks would live together with the virtual machine, as they share the same lifecycle. As always, there are edge cases where this is not true, and that's why you need to make sure that your module is flexible enough to handle them. That is a totally different topic altogether.&lt;/p&gt;

&lt;p&gt;Virtual machines and their supporting resources, that's one thing. But what if you add multiple resources that need more than one provider? Where should one draw the line? I still think that certain complex modules can exist, but the potential upkeep of that complexity should be considered. I would argue that a module deploying a managed Kubernetes cluster could offer the option to create service accounts and namespaces, but incorporating deployments or the like? That might bring the complexity to a whole different level. But if you are in the business of deploying many clusters, and they all use Prometheus and Linkerd, maybe having the module deal with that is acceptable. If you have one cluster that needs these deployments, don't make the module more complex than it has to be. Also, application deployment should probably follow one workflow, so this might not be the right place to streamline your monitoring solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I honestly don't know if this post will be of any help to anyone. We can easily conclude with "it depends", which is always an anti-climactic statement that is very often true. But what I always come back to is this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use modules&lt;/strong&gt; , it's a great way to keep yourself from repetition and help you keep your code clean.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don't stuff everything into a module&lt;/strong&gt; , if it doesn't follow the lifecycle or process you're trying to simplify.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start small and expand&lt;/strong&gt; , complexity makes it hard to maintain.&lt;/p&gt;

&lt;p&gt;Feel free to be &lt;strong&gt;opinionated&lt;/strong&gt; , but &lt;strong&gt;allow users to decide&lt;/strong&gt; if they want to follow your opinion. Sometimes you want to bring your own network interface, sometimes you don't, the option is always nice to have.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>cloudengineer</category>
      <category>cloudarchitect</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>How to reference Key Vault secrets from other subscriptions in Terraform</title>
      <dc:creator>Roberth Strand</dc:creator>
      <pubDate>Fri, 16 Apr 2021 18:44:18 +0000</pubDate>
      <link>https://forem.com/roberthstrand/how-to-reference-key-vault-secrets-from-other-subscriptions-in-terraform-512</link>
      <guid>https://forem.com/roberthstrand/how-to-reference-key-vault-secrets-from-other-subscriptions-in-terraform-512</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1483706600674-e0c87d3fe85b%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DMnwxMTc3M3wwfDF8c2VhcmNofDF8fHNlY3JldHxlbnwwfHx8fDE2MTg1OTY5NDM%26ixlib%3Drb-1.2.1%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1483706600674-e0c87d3fe85b%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DMnwxMTc3M3wwfDF8c2VhcmNofDF8fHNlY3JldHxlbnwwfHx8fDE2MTg1OTY5NDM%26ixlib%3Drb-1.2.1%26q%3D80%26w%3D2000" alt="How to reference Key Vault secrets from other subscriptions in Terraform"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the great things about working with Terraform is the ability to use data sources as a way to reference existing resources, like secrets from Azure Key Vault. However, working with Azure means that one might have to work with resources in more than one subscription at a time. The way to solve this is to set up two &lt;em&gt;azurerm&lt;/em&gt; provider blocks: one for the context that you are working in, and one for the other subscription, separating them by using the alias argument.&lt;/p&gt;

&lt;p&gt;Here is an example of how it works in practice.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      version = "2.56.0"
    }
  }
}

# Default provider block, note that there is no alias set here
provider "azurerm" {
  features {}

  subscription_id = "00000000-0000-0000-0000-000000000000"
}

# Provider for the "management" subscription where we have our key vault
provider "azurerm" {
  features {}

  alias = "management"
  subscription_id = "00000000-0000-0000-0000-000000000000"
}

# Look up the key vault itself; the name and resource group
# here are placeholders for your own values
data "azurerm_key_vault" "existing" {
  provider = azurerm.management

  name                = "example-vault"
  resource_group_name = "example-resources"
}

# Data source, using the aliased provider to get the right context
data "azurerm_key_vault_secret" "example" {
  provider = azurerm.management

  name         = "administrator"
  key_vault_id = data.azurerm_key_vault.existing.id
}

# How to output the secret, marked as sensitive so Terraform
# masks the value (newer Terraform versions require this for
# outputs that reference secrets)
output "secret_value" {
  value     = data.azurerm_key_vault_secret.example.value
  sensitive = true
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Obviously, this isn't limited to just key vault secrets but applies to everything you might want to do within the context of a different subscription.&lt;/p&gt;
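
&lt;p&gt;As a hypothetical sketch, the same aliased provider can be attached to any other data source or resource, for example reading a resource group from the management subscription (the resource group name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Read a resource group from the management subscription
data "azurerm_resource_group" "mgmt" {
  provider = azurerm.management

  name = "management-rg"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;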

&lt;p&gt;If you have any questions about Terraform, feel free to ask me through &lt;a href="https://twitter.com/roberthtweets" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; and I'll create a blog post about it.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>azure</category>
      <category>cloudarchitect</category>
      <category>cloudengineer</category>
    </item>
    <item>
      <title>List all VNet and Subnets across multiple subscriptions</title>
      <dc:creator>Roberth Strand</dc:creator>
      <pubDate>Wed, 17 Feb 2021 07:00:00 +0000</pubDate>
      <link>https://forem.com/roberthstrand/list-all-vnet-and-subnets-across-multiple-subscriptions-4028</link>
      <guid>https://forem.com/roberthstrand/list-all-vnet-and-subnets-across-multiple-subscriptions-4028</guid>
      <description>&lt;p&gt;It has happened to everyone, the network sprawl. You might have on-premises networks and virtual networks, maybe even in multiple clouds, and at one point you simply have lost count of your ranges and what they are used for. Usually, these ranges come from someone that is responsible for IP-ranges (preferably an IPAM solution) but what if you have a lot of teams creating VNet in a bunch of subscriptions? Well, it can get out of hand quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  The script
&lt;/h2&gt;

&lt;p&gt;If you are interested in learning how this script works, the blog post continues after the code. For those who just want to run the script, here you go:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;Get-AzSubscription&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Foreach-Object&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nv"&gt;$sub&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Set-AzContext&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-SubscriptionId&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;$_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;SubscriptionId&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nv"&gt;$vnets&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Get-AzVirtualNetwork&lt;/span&gt;&lt;span class="w"&gt;

    &lt;/span&gt;&lt;span class="kr"&gt;foreach&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$vnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kr"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vnets&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;PSCustomObject&lt;/span&gt;&lt;span class="p"&gt;]@{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nx"&gt;Subscription&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$sub&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Subscription&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Name&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nx"&gt;Name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vnet&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Name&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nx"&gt;Vnet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vnet&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;AddressSpace&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;AddressPrefixes&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;join&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;', '&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nx"&gt;Subnets&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$vnet&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Subnets&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;AddressPrefix&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;join&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;', '&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Export-Csv&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Delimiter&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;";"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AzureVnet.csv"&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will export the results to CSV, but if you don’t want that, you can remove the last pipe and the Export-Csv cmdlet.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note that you need to have the Az-module installed. You also have to be connected to Azure with an account that can at least read all the subscriptions and network resources.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How the script works
&lt;/h2&gt;

&lt;p&gt;We start off by getting all the subscriptions available and running them one by one through a &lt;em&gt;for each&lt;/em&gt; loop. So for every subscription, we set the active context to that subscription and populate the variable &lt;code&gt;$vnets&lt;/code&gt; with all Virtual Networks in that subscription.&lt;/p&gt;

&lt;p&gt;We run through another for each loop, where we create one new PSCustomObject per VNet in our &lt;code&gt;$vnets&lt;/code&gt; variable. This is how we represent our information, and the first couple of values are straightforward. We set &lt;em&gt;Subscription&lt;/em&gt; to the name of our current subscription, and the name of the VNet as the &lt;em&gt;Name&lt;/em&gt; field.&lt;/p&gt;

&lt;p&gt;For our VNet address space and subnets, we could just point to the value from $vnet and be done with it. That works perfectly if you only want the results in the terminal. What I want is to export this as a CSV so I can share it with whoever needs the list. If you try to export a property that holds more than one value, you will not get an IP range but the text &lt;code&gt;System.Collections.Generic.List&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To get around this, we refer to the value we want and use the &lt;a href="https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_join?view=powershell-7"&gt;join operator&lt;/a&gt; to join all the values together, separated by a comma. I also added a space after the comma to make it more readable. Both the VNet address space and the subnets can hold multiple values, so I used the join operator for both of them.&lt;/p&gt;
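
&lt;p&gt;To see what the join operator does on its own, here is a minimal, self-contained sketch (the address prefixes are placeholder values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
$prefixes = @('10.0.0.0/16', '10.1.0.0/24')
$prefixes -join ', '
# Returns: 10.0.0.0/16, 10.1.0.0/24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;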

</description>
      <category>azure</category>
      <category>cloudadministration</category>
      <category>powershell</category>
    </item>
    <item>
      <title>Code coverage for PowerShell module development</title>
      <dc:creator>Roberth Strand</dc:creator>
      <pubDate>Tue, 16 Feb 2021 13:30:00 +0000</pubDate>
      <link>https://forem.com/roberthstrand/code-coverage-for-powershell-module-development-1kh4</link>
      <guid>https://forem.com/roberthstrand/code-coverage-for-powershell-module-development-1kh4</guid>
      <description>&lt;p&gt;Code coverage can be a controversial topic, if you let it, but I feel that it is one of the many tools one can use to make sure that you’re on the right track. For those not aware, code coverage (or test coverage) is a measurement of how much of your source code you’re testing. The though being that the more of your code you are testing, the better. I agree, and I try to write tests for every single function that I write in a module.&lt;/p&gt;

&lt;p&gt;In this particular blog, we’re going to explore how to create code coverage metrics and automatically send the results to the service &lt;a href="https://codecov.io" rel="noopener noreferrer"&gt;codecov.io&lt;/a&gt;, which in turn will present the results so that we can see change over time. Codecov has a generous free tier, which gives you unlimited repositories, and all the bells and whistles!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frobstr.dev%2Fimages%2Fposts%2F310-codecov.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frobstr.dev%2Fimages%2Fposts%2F310-codecov.png" alt="https://robstr.dev/images/posts/310-codecov.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example of the overview dashboard you get at Codecov. Source: &lt;a href="https://codecov.io/gh/CrayonGroup/CloudiQ.PowerShell/" rel="noopener noreferrer"&gt;Cloud-iQ PowerShell module&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The setup: getting Codecov and the GitHub Actions workflow ready
&lt;/h2&gt;

&lt;p&gt;This is actually a pretty easy process. What we want is to get registered on Codecov and find the upload token for our repository. You can follow the few-step instructions found in the quick start section over at &lt;a href="https://docs.codecov.io/docs/quick-start#basic-usage" rel="noopener noreferrer"&gt;Codecov.io&lt;/a&gt;. Then set up the upload token as a secret in your GitHub repository.&lt;/p&gt;

&lt;p&gt;The GitHub action can be set up like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Code&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Tests'&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;master&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;master&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;codecov&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;checkout&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pester&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;Set-PSRepository psgallery -InstallationPolicy trusted&lt;/span&gt;
          &lt;span class="s"&gt;Install-Module -Name Pester -RequiredVersion 5.0.4 -Force;&lt;/span&gt;
          &lt;span class="s"&gt;$paths = @(&lt;/span&gt;
            &lt;span class="s"&gt;'.\path01\*.ps1'&lt;/span&gt;
            &lt;span class="s"&gt;'.\path02\*.ps1'&lt;/span&gt;
            &lt;span class="s"&gt;)&lt;/span&gt;
          &lt;span class="s"&gt;Invoke-Pester -Path "tests" -CodeCoverage $paths -CodeCoverageOutputFileFormat "JaCoCo";&lt;/span&gt;
        &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pwsh&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Codecov&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;codecov/codecov-action@v1.0.13&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;BuildName&lt;/span&gt;
          &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.CODECOV }}&lt;/span&gt;
          &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;coverage.xml&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we define which paths we want Pester to create a code coverage report for, and that the format should be &lt;em&gt;JaCoCo&lt;/em&gt;, which is a format Codecov understands. Next, we upload the XML file to Codecov using their own GitHub action, supplying the upload token that we set as a repository secret.&lt;/p&gt;

&lt;p&gt;Feel free to combine the codecov job with other pester tests. For instance, take a look at how I have set up a multi-platform test in one job and the code coverage job in another &lt;a href="https://github.com/CrayonGroup/CloudiQ.PowerShell/blob/master/.github/workflows/code-tests.yml" rel="noopener noreferrer"&gt;here on GitHub&lt;/a&gt;. For details, see my blog post &lt;a href="https://dev.to/roberthstrand/using-github-actions-to-run-automatic-pester-tests-cm0-temp-slug-6057590"&gt;Using GitHub actions to run automatic Pester tests&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>powershell</category>
      <category>powershellmodule</category>
      <category>devops</category>
      <category>github</category>
    </item>
    <item>
      <title>Publish to PowerShellGallery with GitHub Actions</title>
      <dc:creator>Roberth Strand</dc:creator>
      <pubDate>Thu, 21 Jan 2021 14:04:05 +0000</pubDate>
      <link>https://forem.com/roberthstrand/publish-to-powershellgallery-with-github-actions-1caj</link>
      <guid>https://forem.com/roberthstrand/publish-to-powershellgallery-with-github-actions-1caj</guid>
      <description>&lt;p&gt;My next step in automating my PowerShell module development workflow is to have my module deploy to &lt;a href="https://powershellgallery.com"&gt;PowerShellGallery&lt;/a&gt; when creating a GitHub release. Last time it was doing &lt;a href="https://dev.to/roberthstrand/using-github-actions-to-run-automatic-pester-tests-cm0-temp-slug-6057590"&gt;unit testing with pester&lt;/a&gt;, now we want our code to get out in the world.&lt;/p&gt;

&lt;p&gt;What I want to accomplish is pretty simple: a painless release process. By using &lt;a href="https://github.com/features/actions"&gt;GitHub Actions&lt;/a&gt;, we can trigger tasks by creating a new release. When creating a release, we check out our code and run &lt;code&gt;Publish-Module&lt;/code&gt; like we would locally on our machine. We need an &lt;strong&gt;API key&lt;/strong&gt;, which you can find when you log into PowerShellGallery, and that’s about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add the API key as a secret
&lt;/h2&gt;

&lt;p&gt;Under settings in the repository you want to set up publishing from, you find the menu item called &lt;em&gt;Secrets&lt;/em&gt;. Press that big &lt;em&gt;New secret&lt;/em&gt; button to add your secret. Once saved, you cannot view or edit the value, but you can replace or delete it.&lt;/p&gt;

&lt;p&gt;As you can see from my repo, I got one called &lt;em&gt;PSGALLERY&lt;/em&gt; and one &lt;em&gt;CODECOV&lt;/em&gt;, each of them for the respective services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AZpj7_wg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://robstr.dev/images/posts/image-20201104115018721.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AZpj7_wg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://robstr.dev/images/posts/image-20201104115018721.png" alt="image-20201104115018721" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s see how we can set up our workflow and reference that secret!&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the workflow
&lt;/h2&gt;

&lt;p&gt;Let us take a look at the code, then I can explain what is going on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: PSGallery
on:
  release:
    types: [published]
jobs:
  psgallery_publish:
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v2

      - name: Publishing
        run: |
          Publish-Module -Path '...' -NuGetApiKey ${{ secrets.PSGALLERY }}
        shell: pwsh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First of all, we define when this workflow is triggered. What we want, is to have this run every time a new release is created and published. Types here can be everything from &lt;em&gt;unpublished&lt;/em&gt; to &lt;em&gt;edited&lt;/em&gt; so if you have any special needs, the &lt;a href="https://docs.github.com/en/free-pro-team@latest/actions/reference/events-that-trigger-workflows#release"&gt;GitHub Actions documentation&lt;/a&gt; covers everything you need to know.&lt;/p&gt;

&lt;p&gt;I have created one job called &lt;em&gt;psgallery_publish&lt;/em&gt;, which has two steps: one to check out the code, so we have it locally on the agent we’re using, and one to run the line of PowerShell that actually publishes the module. I usually have the actual code for the module in a directory with the same name as the module itself, and that directory goes into the &lt;em&gt;-Path&lt;/em&gt; parameter.&lt;/p&gt;
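
&lt;p&gt;As a hypothetical example, with a module named &lt;em&gt;MyModule&lt;/em&gt; the repository layout could look like this, and &lt;em&gt;./MyModule&lt;/em&gt; is what goes into &lt;em&gt;-Path&lt;/em&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
MyModule/
  MyModule.psd1     # module manifest
  MyModule.psm1     # module code
tests/
  MyModule.Tests.ps1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;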

&lt;p&gt;For our secret, we can fetch it by using the &lt;code&gt;${{ secrets.PSGALLERY }}&lt;/code&gt; snippet. This ensures that you don’t have your actual secret in your public code, and makes it easy to maintain if you ever need to change the key.&lt;/p&gt;

</description>
      <category>powershell</category>
      <category>powershellmodule</category>
      <category>devops</category>
      <category>github</category>
    </item>
    <item>
      <title>Speaking about deploying AKS with Terraform, at Azure User Group Norway</title>
      <dc:creator>Roberth Strand</dc:creator>
      <pubDate>Thu, 21 Jan 2021 13:58:41 +0000</pubDate>
      <link>https://forem.com/roberthstrand/speaking-about-deploying-aks-with-terraform-at-azure-user-group-norway-3ek6</link>
      <guid>https://forem.com/roberthstrand/speaking-about-deploying-aks-with-terraform-at-azure-user-group-norway-3ek6</guid>
      <description>&lt;p&gt;Next Wednesday, I will be speaking at the Norwegian Azure User Group about Terraform and Azure Kubernetes Service. During the process of creating a Terraform module for a client, I had to solve a bunch of problems related to creating flexible modules which was a great learning experience. As I started to recreate the module for public usage, I also started taking notes on how this works and that's were the basis for this talk comes from.&lt;/p&gt;

&lt;p&gt;Afterwards, I'll be talking to Jan Egil Ring (Crayon) and Martin Ehrnst (Vipps) about Terraform, Kubernetes and Azure in general in a roundtable-type discussion.&lt;/p&gt;

&lt;p&gt;The event is, of course, free and in English so feel free to join us at &lt;a href="https://www.meetup.com/Azure-User-Group-Norway/events/275667273/"&gt;meetup.com&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>azure</category>
      <category>azurekubernetesservice</category>
      <category>aks</category>
    </item>
  </channel>
</rss>
