<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Tommy Falgout</title>
    <description>The latest articles on Forem by Tommy Falgout (@lastcoolnameleft).</description>
    <link>https://forem.com/lastcoolnameleft</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1801%2F686999.jpeg</url>
      <title>Forem: Tommy Falgout</title>
      <link>https://forem.com/lastcoolnameleft</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/lastcoolnameleft"/>
    <language>en</language>
    <item>
      <title>GitOps for Presentations</title>
      <dc:creator>Tommy Falgout</dc:creator>
      <pubDate>Fri, 04 Aug 2023 16:54:59 +0000</pubDate>
      <link>https://forem.com/lastcoolnameleft/gitops-for-presentations-2oci</link>
      <guid>https://forem.com/lastcoolnameleft/gitops-for-presentations-2oci</guid>
      <description>&lt;p&gt;Yes, I work for Microsoft. No, I do not like PowerPoint.  Here’s &lt;a href="https://lastcoolnameleft.github.io/marp-template/"&gt;my alternative&lt;/a&gt; with the &lt;a href="https://github.com/lastcoolnameleft/marp-template"&gt;source code&lt;/a&gt; which I’ll explain here.&lt;/p&gt;

&lt;p&gt;I’ve spent 20+ years doing UNIX/Linux development and have worked at Microsoft for 6 years.  In that time, I’ve learned that Microsoft typically builds the all-encompassing, Enterprise-ready solution, while the OSS ecosystem builds narrow-focused tools that you can piece together with others.&lt;/p&gt;

&lt;p&gt;Each has its own benefits and constraints.  There is &lt;a href="https://en.wikipedia.org/wiki/No_Silver_Bullet"&gt;No Silver Bullet&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A common set of requirements I encounter is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I need to easily present to a public audience&lt;/li&gt;
&lt;li&gt;I might have to use someone else’s computer&lt;/li&gt;
&lt;li&gt;I want to share the slides afterwards&lt;/li&gt;
&lt;li&gt;I need to quickly update the slides&lt;/li&gt;
&lt;li&gt;I just want to display text and images.  (PowerPoint is an absurdly impressive tool with lots of features that I rarely use.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Internal Microsoft SharePoint policy prevents sharing slides with external visitors.  This often results in emailing 10-100MB PPTs or PDF files around.  Blah!&lt;/p&gt;

&lt;p&gt;Piecing together bits of OSS, I present to you “GitOps for Presentations”.  It involves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Git + GitHub - Version control of content&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://daringfireball.net/projects/markdown/"&gt;Markdown&lt;/a&gt; - Easy styling of content&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://marp.app/"&gt;MARP&lt;/a&gt; - Converts &lt;a href="https://commonmark.org/"&gt;CommonMark&lt;/a&gt; to HTML, PDF, PPT&lt;/li&gt;
&lt;li&gt;VSCode - Edit the content (There’s even a &lt;a href="https://marketplace.visualstudio.com/items?itemName=marp-team.marp-vscode"&gt;MARP extension&lt;/a&gt; which allows you to preview in real-time!)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/features/actions"&gt;GitHub Actions&lt;/a&gt; - Build the presentation from Markdown&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://pages.github.com/"&gt;GitHub Pages&lt;/a&gt; - Host the presentation&lt;/li&gt;
&lt;/ul&gt;
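
&lt;p&gt;To give a feel for the format, a MARP deck is just a Markdown file with a front-matter block, and slides are separated by &lt;code&gt;---&lt;/code&gt;.  A minimal sketch (the &lt;code&gt;theme&lt;/code&gt; value and image path are illustrative, not taken from the template):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;---
marp: true
theme: default
paginate: true
---

# My Talk Title

A bullet or two of intro

---

# Second Slide

![A diagram](images/diagram.png)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;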

&lt;p&gt;Benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Free (&lt;a href="https://en.wikipedia.org/wiki/Gratis_versus_libre"&gt;as in beer&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Simple to set up&lt;/li&gt;
&lt;li&gt;You can style your presentations.  For example, I’ve created a &lt;a href="https://github.com/lastcoolnameleft/marp-template/blob/main/themes/microsoft.css"&gt;CSS theme that models the Microsoft style guide for PPTs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Easy to share and update&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MARP’s formatting is basic, especially if you’re coming from PowerPoint&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s cool, but why didn’t you …&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;use &lt;a href="https://remarkjs.com/#1"&gt;Remark&lt;/a&gt; or &lt;a href="https://revealjs.com/"&gt;Reveal.js&lt;/a&gt;?

&lt;ul&gt;
&lt;li&gt;There are many great presentation frameworks, but I wanted something really simple. &lt;a href="https://en.wikipedia.org/wiki/KISS_principle"&gt;KISS&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;You should be able to replace MARP with any of those other frameworks and still get the same results.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;just present your PPT and email it?

&lt;ul&gt;
&lt;li&gt;That takes time and effort.  At conferences, I don’t have time to follow up with everyone (and might forget).  Instead, I create a QR code and put it at the end of the slides.  This enables self-service discovery and also &lt;a href="https://www.hanselman.com/blog/do-they-deserve-the-gift-of-your-keystrokes"&gt;saves me precious keystrokes&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;use &lt;a href="http://slides.com"&gt;slides.com&lt;/a&gt; or Google Slides?

&lt;ul&gt;
&lt;li&gt;Microsoft has embraced OSS and purchased GitHub, so I wanted to find a way to explore integrating all of this.  I’ve been very happy with the results!&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’m sold!  How do I get started?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I’ve made it easy for anyone to get started by creating a &lt;a href="https://github.com/lastcoolnameleft/marp-template"&gt;GitHub template for this project&lt;/a&gt; (&lt;a href="https://lastcoolnameleft.github.io/marp-template/"&gt;which is also a presentation&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Click “Use this template” and create a new repository&lt;/li&gt;
&lt;li&gt;Enable GitHub Actions to auto-publish to GitHub Pages

&lt;ul&gt;
&lt;li&gt;In your new Repo, click &lt;code&gt;Settings&lt;/code&gt; -&amp;gt; &lt;code&gt;Pages&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;Set Source to &lt;code&gt;GitHub Actions&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;You’re done!&lt;/li&gt;
&lt;/ul&gt;
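
&lt;p&gt;The template already ships with the build workflow, but to give a sense of what it does, here’s an illustrative sketch (the file names, theme path, and action versions are assumptions, not necessarily what the template uses):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Build and publish slides
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Convert Markdown to HTML with the MARP CLI
      - run: npx @marp-team/marp-cli slides.md --theme themes/microsoft.css -o public/index.html
      - uses: actions/upload-pages-artifact@v3
        with:
          path: public
  deploy:
    needs: build
    runs-on: ubuntu-latest
    permissions:
      pages: write
      id-token: write
    steps:
      - uses: actions/deploy-pages@v4
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;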

&lt;p&gt;PEDANTIC DISCLAIMER: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I’m quite familiar with &lt;a href="https://www.weave.works/technologies/gitops/"&gt;GitOps&lt;/a&gt;, and while this is outside the usual scope of running Kubernetes clusters as IaC, it shares the top-level concept of using Git to set the desired state of my presentation.&lt;/li&gt;
&lt;li&gt;MARP technically uses &lt;a href="https://commonmark.org/"&gt;CommonMark&lt;/a&gt; rather than the original Markdown.  It’s close enough for what most people will need.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>github</category>
      <category>gitops</category>
      <category>presentation</category>
      <category>markdown</category>
    </item>
    <item>
      <title>Managing Azure Subscription Quota and Throttling Issues</title>
      <dc:creator>Tommy Falgout</dc:creator>
      <pubDate>Wed, 21 Jun 2023 19:50:28 +0000</pubDate>
      <link>https://forem.com/lastcoolnameleft/managing-azure-subscription-quota-and-throttling-issues-dp4</link>
      <guid>https://forem.com/lastcoolnameleft/managing-azure-subscription-quota-and-throttling-issues-dp4</guid>
      <description>&lt;p&gt;As Azure customers and partners build bigger and more complex solutions in their subscriptions, you might hit quota and throttling issues.  These can be irksome and cause confusion.  This article will walkthrough some of the scenarios I’ve seen and how to design with them in mind.&lt;/p&gt;

&lt;p&gt;Let’s make sure we’re on the same page regarding terminology used in this article:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/overview"&gt;Azure Resource Manager&lt;/a&gt; (ARM) - The management layer and API behind all Azure resources&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/resource-providers-and-types"&gt;Resource Provider&lt;/a&gt; (RP) - Each resource type inside Azure has a RP which allows you to manage that resource (e.g. Storage, Key Vault, VMSS, etc.)&lt;/li&gt;
&lt;li&gt;Quota - the maximum number of a specific resource available for your subscription.  Similar to a credit card limit

&lt;ul&gt;
&lt;li&gt;Examples:

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits"&gt;Subscription or Resource Quota&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits#standard-storage-account-limits"&gt;Max RPS for Storage account&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/storage/blobs/scalability-targets#scale-targets-for-blob-storage"&gt;Max size of single blob container&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-functions-limits"&gt;Azure Function default timeout&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits#virtual-machine-scale-sets-limits"&gt;Maximum # of VMs in a VMSS&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Quotas may be &lt;a href="https://learn.microsoft.com/en-us/azure/quotas/quotas-overview#adjustable-and-non-adjustable-quotas"&gt;adjustable or non-adjustable&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Some adjustable quotas can be managed programmatically using the &lt;a href="https://learn.microsoft.com/en-us/rest/api/quota/"&gt;Azure Quota Service API&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Throttling - maximum number of API requests you can make in a certain period.  Similar to bandwidth throttling

&lt;ul&gt;
&lt;li&gt;NOTE: There are subscription- and tenant-level throttling limits.  &lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/request-limits-and-throttling#storage-throttling"&gt;Storage&lt;/a&gt;, &lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/request-limits-and-throttling#network-throttling"&gt;Networking&lt;/a&gt;, &lt;a href="https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/troubleshooting-throttling-errors"&gt;Compute&lt;/a&gt; and &lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/request-limits-and-throttling#azure-resource-graph-throttling"&gt;Azure Resource Graph&lt;/a&gt; each have their own throttling limits as well&lt;/li&gt;
&lt;li&gt;NOTE: Throttling for RPs is per subscription, per region&lt;/li&gt;
&lt;li&gt;Examples:

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/request-limits-and-throttling#subscription-and-tenant-limits"&gt;Rate limit of writes to a subscription per hour&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/troubleshooting-throttling-errors#call-rate-informational-response-headers"&gt;Rate limit of Deleting a VMSS in 3 min&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Managing Quotas
&lt;/h2&gt;

&lt;p&gt;Because quotas are mostly static, &lt;a href="https://learn.microsoft.com/en-us/azure/quotas/view-quotas"&gt;viewing your quotas is pretty simple&lt;/a&gt;.  Simply go to the Azure Portal and click on “My quotas”.&lt;/p&gt;

&lt;p&gt;If you need to increase your quota, you might need to open an Azure Support ticket.  For example, if you need to start deploying in a new region, you might need to open a ticket to increase the “Total Regional vCPUs” and “VMSS” quotas in “West Central US”.  Once the ticket has been approved, the quota will be available to you.&lt;/p&gt;
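
&lt;p&gt;You can also check usage against compute quotas from the Azure CLI.  For example (the region is whichever one you deploy to):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Show current usage vs. limit for each compute quota in a region
az vm list-usage --location westcentralus -o table
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;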

&lt;h2&gt;
  
  
  Managing Throttling
&lt;/h2&gt;

&lt;p&gt;For the most part, you won’t need to worry about throttling, but if you’re doing very large-scale deployments with LOTS of constant resource churn, you might hit throttling limits.&lt;/p&gt;

&lt;p&gt;These limits are less about the number of resources and more about &lt;strong&gt;HOW&lt;/strong&gt; you use them.  For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can have &lt;a href="https://learn.microsoft.com/en-us/azure/aks/quotas-skus-regions#service-quotas-and-limits"&gt;5000 AKS clusters in one subscription&lt;/a&gt;, and each AKS cluster can have a maximum of 100 node pools.  If you try creating the max # of AKS clusters with the max # of node pools simultaneously, you’ll definitely hit the throttling limit.&lt;/li&gt;
&lt;li&gt;Some OSS projects aggressively call ARM and the RP APIs in a reconciliation loop.  Running multiple instances of these projects will also hit the throttling limit.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since throttling is specific to the current time window, it can be trickier.  There’s no “hard formula” for when you’ll hit a threshold.  But when you do, &lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/request-limits-and-throttling#error-code"&gt;you’ll probably start seeing 429 HTTP status responses&lt;/a&gt;.&lt;/p&gt;
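
&lt;p&gt;A throttled response typically includes a &lt;code&gt;Retry-After&lt;/code&gt; header telling you how long to back off.  An illustrative example (the values and message text are made up for this sketch):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;HTTP/1.1 429 Too Many Requests
Retry-After: 17

{
  "error": {
    "code": "OperationNotAllowed",
    "message": "Too many requests have been received for this subscription."
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;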

&lt;h2&gt;
  
  
  Throttling Examples
&lt;/h2&gt;

&lt;p&gt;Thankfully, you can get insights into your current throttling status by looking at response headers for the requests.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;x-ms-ratelimit-remaining-subscription-reads&lt;/code&gt; - # of read operations to this subscription remaining&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;x-ms-ratelimit-remaining-subscription-writes&lt;/code&gt; - # of write operations to this subscription remaining&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;x-ms-ratelimit-remaining-resource&lt;/code&gt; - Compute RP-specific header, which can show multiple policy statuses  (see “Example: GET a VMSS” below for details)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s dig into this deeper using the Azure CLI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example: Create a Resource Group (Write Request)
&lt;/h3&gt;

&lt;p&gt;Because this request creates an RG, it will count against our subscription writes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;➜  az group create -n $RG --location $LOCATION --verbose --debug --debug 2&amp;gt;&amp;amp;1 | grep 'x-ms'

DEBUG: cli.azure.cli.core.sdk.policies:     'x-ms-client-request-id': '&lt;span class="nt"&gt;&amp;lt;GUID&amp;gt;&lt;/span&gt;'
DEBUG: cli.azure.cli.core.sdk.policies:     'x-ms-ratelimit-remaining-subscription-writes': '1199'
DEBUG: cli.azure.cli.core.sdk.policies:     'x-ms-request-id': '&lt;span class="nt"&gt;&amp;lt;GUID&amp;gt;&lt;/span&gt;'
DEBUG: cli.azure.cli.core.sdk.policies:     'x-ms-correlation-request-id': '&lt;span class="nt"&gt;&amp;lt;GUID&amp;gt;&lt;/span&gt;'
DEBUG: cli.azure.cli.core.sdk.policies:     'x-ms-routing-request-id': 'SOUTHCENTRALUS:20230512T163152Z:&lt;span class="nt"&gt;&amp;lt;GUID&amp;gt;&lt;/span&gt;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: The key point is how the &lt;code&gt;x-ms-ratelimit-remaining-subscription-writes&lt;/code&gt; is now 1199 (instead of the standard 1200 per hour as per the &lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/request-limits-and-throttling#retrieving-the-header-values"&gt;Subscription and Tenant limits&lt;/a&gt;)&lt;/p&gt;

&lt;h3&gt;
  
  
  Example: GET a VMSS (Read Request)
&lt;/h3&gt;

&lt;p&gt;This request performs a GET (read) request on an existing VMSS.  This is similar to the write request for the RG, but since Compute RP also has a separate set of throttling policies, it also counts against the Compute RP limits.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;➜  az vmss show -n $VMSS_NAME -g $RG --debug 2&amp;gt;&amp;amp;1 | grep x-ms
DEBUG: cli.azure.cli.core.sdk.policies:     'x-ms-client-request-id': '&lt;span class="nt"&gt;&amp;lt;GUID&amp;gt;&lt;/span&gt;'
DEBUG: cli.azure.cli.core.sdk.policies:     'x-ms-ratelimit-remaining-resource': 'Microsoft.Compute/GetVMScaleSet3Min;197,Microsoft.Compute/GetVMScaleSet30Min;1297'
DEBUG: cli.azure.cli.core.sdk.policies:     'x-ms-request-id': '&lt;span class="nt"&gt;&amp;lt;GUID&amp;gt;&lt;/span&gt;'
DEBUG: cli.azure.cli.core.sdk.policies:     'x-ms-ratelimit-remaining-subscription-reads': '11999'
DEBUG: cli.azure.cli.core.sdk.policies:     'x-ms-correlation-request-id': '&lt;span class="nt"&gt;&amp;lt;GUID&amp;gt;&lt;/span&gt;'
DEBUG: cli.azure.cli.core.sdk.policies:     'x-ms-routing-request-id': 'SOUTHCENTRALUS:20230512T162738Z:&lt;span class="nt"&gt;&amp;lt;GUID&amp;gt;&lt;/span&gt;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NOTE: The key point is how &lt;code&gt;x-ms-ratelimit-remaining-resource&lt;/code&gt; has two key-value pairs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Microsoft.Compute/GetVMScaleSet3Min;197 - I ran this command before, so I have 197 requests available in the 3 minute window for performing GET requests on the VMSS resource&lt;/li&gt;
&lt;li&gt;Microsoft.Compute/GetVMScaleSet30Min;1297 - I now have 1297 requests available in the 30 minute window for performing GET requests on VMSS resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;NOTE: &lt;code&gt;x-ms-ratelimit-remaining-subscription-reads&lt;/code&gt; doesn’t seem to decrease (11999), even if I run the same command again.  I haven’t figured out why yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Designing with quotas and throttling in mind
&lt;/h2&gt;

&lt;p&gt;Most Azure deployments won’t need this type of fine tuning, but just in case, there’s some &lt;a href="https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/troubleshooting-throttling-errors#best-practices"&gt;documented Throttling Best Practices&lt;/a&gt; as well as my personal pro-tips:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the Azure SDK, as many services have the &lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/request-limits-and-throttling#error-code"&gt;recommended retry guidance built-in&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Instead of creating and deleting VMSS (which consume multiple VMSS API requests), scale the VMSS to 0 (which only consumes 1 VMSS API request)&lt;/li&gt;
&lt;li&gt;Any type of Kubernetes cluster auto-scaler will perform a reconciliation loop with Azure Compute RP.  This could eat into your throttling limits&lt;/li&gt;
&lt;li&gt;Use the &lt;a href="https://learn.microsoft.com/en-us/rest/api/quota/"&gt;Azure Quota Service API&lt;/a&gt; to programmatically request quota increases&lt;/li&gt;
&lt;/ul&gt;
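
&lt;p&gt;If you’re calling ARM directly rather than through the SDK, the usual pattern is exponential backoff that honors the &lt;code&gt;Retry-After&lt;/code&gt; header on a 429.  A minimal, library-agnostic sketch (the &lt;code&gt;request_fn&lt;/code&gt; shape is an assumption for illustration, not an Azure SDK API):&lt;/p&gt;

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry request_fn on HTTP 429, honoring Retry-After when present.

    request_fn is assumed to return an object with a `status_code` attribute
    and a `headers` dict -- a stand-in for whatever HTTP client you use.
    """
    for attempt in range(max_retries + 1):
        response = request_fn()
        if response.status_code != 429:
            return response
        if attempt == max_retries:
            break
        retry_after = response.headers.get("Retry-After")
        if retry_after is not None:
            # The service tells us exactly how long to wait
            delay = float(retry_after)
        else:
            # Otherwise, back off exponentially with a little jitter
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
        time.sleep(delay)
    raise RuntimeError("Exhausted retries while throttled (HTTP 429)")
```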

&lt;p&gt;If you’re unable to work around the throttling limits, then the next step is to look at the &lt;a href="https://learn.microsoft.com/en-us/azure/architecture/patterns/deployment-stamp"&gt;Deployment Stamp pattern&lt;/a&gt; using multiple subscriptions.  You can programmatically create subscriptions using &lt;a href="https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/ready/landing-zone/design-area/subscription-vending"&gt;subscription vending&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Hopefully this article has helped you understand quota limits and throttling limits in Azure, and how to work around them.  Let me know if you have any questions and/or feedback and I can follow up with additional details.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cloud</category>
      <category>architecture</category>
    </item>
    <item>
      <title>When, How and Where to use ClusterAPI (CAPI) and ClusterAPI for Azure (CAPZ)</title>
      <dc:creator>Tommy Falgout</dc:creator>
      <pubDate>Mon, 18 Apr 2022 14:21:02 +0000</pubDate>
      <link>https://forem.com/lastcoolnameleft/when-how-and-where-to-use-clusterapi-capi-and-clusterapi-for-azure-capz-1lpc</link>
      <guid>https://forem.com/lastcoolnameleft/when-how-and-where-to-use-clusterapi-capi-and-clusterapi-for-azure-capz-1lpc</guid>
      <description>&lt;p&gt;This article explains why, when, and how to use self-managed Kubernetes clusters in Azure for testing custom scenarios.&lt;/p&gt;

&lt;p&gt;Kubernetes has gotten so large and complex that most companies prefer to use a managed service (e.g. AKS) instead of running it themselves. Using a managed Kubernetes service frees up the operations team to focus on their core competency instead of optimizing, backing up, and upgrading Kubernetes.&lt;/p&gt;

&lt;p&gt;While this reduces the operational burden, you lose the ability to modify the platform. Sometimes these are acceptable tradeoffs; sometimes you need to manage it yourself.&lt;/p&gt;

&lt;p&gt;Historically, AKS-engine was the OSS tool for creating unmanaged Kubernetes clusters on Azure, but it had some limitations. &lt;a href="https://cloudblogs.microsoft.com/opensource/2020/12/15/introducing-cluster-api-provider-azure-capz-kubernetes-cluster-management/" rel="noopener noreferrer"&gt;CAPI/CAPZ is the go-forward solution&lt;/a&gt; for creating and operating self-managed clusters declaratively.&lt;/p&gt;

&lt;p&gt;I highly recommend reading Scott Lowe’s article on &lt;a href="https://blog.scottlowe.org/2019/08/26/an-introduction-to-kubernetes-cluster-api" rel="noopener noreferrer"&gt;An introduction to CAPI&lt;/a&gt;. It covers a lot of terminology and concepts used here.&lt;/p&gt;

&lt;p&gt;One of the reasons for using CAPI/CAPZ is as a testing and development tool for Kubernetes on Azure. For example, you might need to build and test the following scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A kernel change to the worker nodes&lt;/li&gt;
&lt;li&gt;A modification to the K8S config on control plane nodes&lt;/li&gt;
&lt;li&gt;An installation of a different CNI&lt;/li&gt;
&lt;li&gt;The use of K8S to manage K8S&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This diagram represents a high-level architecture of a starter CAPI/CAPZ cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Flastcoolnameleft%2Fkubernetes-examples%2Fmaster%2Fcapi-capz%2Farchitecture.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Flastcoolnameleft%2Fkubernetes-examples%2Fmaster%2Fcapi-capz%2Farchitecture.png" alt="CAPI/CAPZ Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The rest of this article will explain how to implement the above scenarios utilizing the &lt;a href="https://cluster-api.sigs.k8s.io/user/quick-start.html" rel="noopener noreferrer"&gt;CAPI quickstart&lt;/a&gt;. Because the command arguments will change over time, this article will describe the steps and provide a link to the full details like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cluster-api.sigs.k8s.io/user/quick-start.html#generating-the-cluster-configuration" rel="noopener noreferrer"&gt;Link to CAPI Quick Start with details&lt;/a&gt;: &lt;code&gt;base command to run&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the KIND Cluster
&lt;/h2&gt;

&lt;p&gt;Similar to &lt;a href="https://en.wikipedia.org/wiki/RepRap_project" rel="noopener noreferrer"&gt;RepRap&lt;/a&gt;, CAPI uses a Kubernetes cluster to make more Kubernetes clusters. The easiest way to get one is with &lt;a href="https://kind.sigs.k8s.io/" rel="noopener noreferrer"&gt;Kubernetes IN Docker (KIND)&lt;/a&gt;. As the name implies, it’s a Kubernetes cluster which runs as a Docker container. This is our starting point: the “Bootstrap Cluster”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cluster-api.sigs.k8s.io/user/quick-start.html#install-andor-configure-a-kubernetes-cluster" rel="noopener noreferrer"&gt;Create Kind Cluster&lt;/a&gt;: &lt;code&gt;kind create cluster&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Initialize cluster for Azure
&lt;/h2&gt;

&lt;p&gt;We will use this bootstrap cluster to initialize the “Management Cluster” which contains all of the CRDs and runs the CAPI controllers. This is where we will apply all of our changes to meet our scenarios.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cluster-api.sigs.k8s.io/user/quick-start.html#initialization-for-common-providers" rel="noopener noreferrer"&gt;Initialize cluster for Azure&lt;/a&gt;: &lt;code&gt;clusterctl init --infrastructure azure&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Generate cluster configuration
&lt;/h2&gt;

&lt;p&gt;Now that our management cluster is ready, we want to define what our workload cluster will look like. Thankfully, there are different flavors we can pick from. By using the default, we will get an unmanaged K8S cluster using virtual machines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cluster-api.sigs.k8s.io/user/quick-start.html#generating-the-cluster-configuration" rel="noopener noreferrer"&gt;Generate cluster configuration&lt;/a&gt;: &lt;code&gt;clusterctl generate cluster capi-quickstart &amp;gt; capi-quickstart.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We now have a file which contains the CRDs which will define our workload cluster. We will modify capi-quickstart.yaml and edit the CRDs to implement each of our scenarios.&lt;/p&gt;

&lt;p&gt;Full documentation is available for &lt;a href="https://doc.crds.dev/github.com/kubernetes-sigs/cluster-api" rel="noopener noreferrer"&gt;CAPI (baseline) CRDs&lt;/a&gt; and &lt;a href="https://doc.crds.dev/github.com/kubernetes-sigs/cluster-api-provider-azure" rel="noopener noreferrer"&gt;CAPZ (Azure specific resources) CRDs&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario: Worker node kernel change
&lt;/h2&gt;

&lt;p&gt;If we want to modify the worker nodes, we likely want to add a &lt;code&gt;preKubeadmCommands&lt;/code&gt; and &lt;code&gt;postKubeadmCommands&lt;/code&gt; directive in the &lt;code&gt;KubeadmConfigTemplate&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;preKubeadmCommands&lt;/code&gt; allows a list of commands to run on the worker node BEFORE joining the cluster.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;postKubeadmCommands&lt;/code&gt; allows a list of commands to run on the worker node AFTER joining the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
kind: KubeadmConfigTemplate
metadata:
  name: capi-quickstart-md-0
  namespace: default
spec:
  template:
    spec:
      preKubeadmCommands:
        - wget -P /tmp https://kernel.ubuntu.com/&amp;lt;path&amp;gt;.deb
        - dpkg -i /tmp/&amp;lt;package name&amp;gt;.deb
      postKubeadmCommands:
        - reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After you’ve made these changes, you can proceed to the rest of the steps by applying the resources to your management cluster which will then create your workload cluster and deploy the CNI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario: Modify Kubernetes components
&lt;/h2&gt;

&lt;p&gt;If we want to modify the control plane, we can make changes to the &lt;code&gt;KubeadmControlPlane&lt;/code&gt;. This allows us to leverage the kubeadm API to customize various components.&lt;/p&gt;

&lt;p&gt;For example, to enable a Feature Gate on the kube-apiserver:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
kind: KubeadmControlPlane
metadata:
  name: capi-quickstart-control-plane
  namespace: default
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          feature-gates: MyFeatureGate=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above example omits some fields for brevity. Make sure that you keep any existing args and configurations that you are not modifying in-place.&lt;/p&gt;

&lt;p&gt;After you’ve made these changes, you can proceed to the rest of the steps by applying the resources to your management cluster which will then create your workload cluster and deploy the CNI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Apply the Workload Cluster
&lt;/h2&gt;

&lt;p&gt;Now that we have defined what our cluster should look like, apply the resources to the management cluster. The CAPZ operator will detect the updated resources and talk to Azure Resource Manager.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cluster-api.sigs.k8s.io/user/quick-start.html#apply-the-workload-cluster" rel="noopener noreferrer"&gt;Apply the workload cluster&lt;/a&gt;: &lt;code&gt;kubectl apply -f capi-quickstart.yaml&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitor the Cluster Creation
&lt;/h2&gt;

&lt;p&gt;After you’ve made the changes to the &lt;code&gt;capi-quickstart.yaml&lt;/code&gt; resources and applied them, you’re ready to watch the cluster come up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cluster-api.sigs.k8s.io/user/quick-start.html#accessing-the-workload-cluster" rel="noopener noreferrer"&gt;Watch the cluster creation&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;kubectl get cluster&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;clusterctl describe cluster capi-quickstart&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl get kubeadmcontrolplane&lt;/code&gt; – Verify the Control Plane is up&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that the workload cluster is up and running, it’s time to start using it!&lt;/p&gt;

&lt;h2&gt;
  
  
  Get the Kubeconfig for the Workload Cluster
&lt;/h2&gt;

&lt;p&gt;Now that we’re dealing with two clusters (management cluster in Docker and workload cluster in Azure), we now have two kubeconfig files. For ease, we will save it to the local directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cluster-api.sigs.k8s.io/user/quick-start.html#accessing-the-workload-cluster" rel="noopener noreferrer"&gt;Get the Kubeconfig for the workload cluster&lt;/a&gt;: &lt;code&gt;clusterctl get kubeconfig capi-quickstart &amp;gt; capi-quickstart.kubeconfig&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install the CNI
&lt;/h2&gt;

&lt;p&gt;By default, the workload cluster will not have a CNI and one must be installed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cluster-api.sigs.k8s.io/user/quick-start.html#deploy-a-cni-solution" rel="noopener noreferrer"&gt;Deploy the CNI&lt;/a&gt;: &lt;code&gt;kubectl --kubeconfig=./capi-quickstart.kubeconfig apply -f https://...calico.yaml&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario: Install a different CNI
&lt;/h2&gt;

&lt;p&gt;If you want to use flannel as your CNI, then you can apply the resources to your management cluster which will then create your workload cluster.&lt;/p&gt;

&lt;p&gt;However, instead of &lt;a href="https://cluster-api.sigs.k8s.io/user/quick-start.html#deploy-a-cni-solution" rel="noopener noreferrer"&gt;Deploying the CNI&lt;/a&gt;, you can follow the steps in the &lt;a href="https://capz.sigs.k8s.io/topics/flannel.html" rel="noopener noreferrer"&gt;Install Flannel walkthrough&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;

&lt;p&gt;When you’re done, you can easily clean up both the workload and management clusters.&lt;/p&gt;

&lt;p&gt;Delete the workload cluster: &lt;code&gt;kubectl delete cluster capi-quickstart&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If you want to create the workload cluster again, you can do so by re-applying &lt;code&gt;capi-quickstart.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Delete the management cluster: &lt;code&gt;kind delete cluster&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If you want to create the management cluster again, you must start from scratch. If you delete the management cluster without deleting the workload cluster, then the workload cluster and Azure resources will remain.&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;Similar to how Kubernetes allows you to orchestrate containers using a declarative syntax, CAPI/CAPZ allows you to do the same, but for Kubernetes clusters in Azure.&lt;/p&gt;

&lt;p&gt;This article covered example scenarios for when to use CAPI/CAPZ as well as a walkthrough on how to implement them.&lt;/p&gt;

&lt;p&gt;I’m especially excited for the future of CAPI/CAPZ and how it can integrate with other Cloud Native methodologies like GitOps to declaratively manage clusters.&lt;/p&gt;

&lt;p&gt;P.S. I am extremely grateful to Cecile Robert-Michon (&lt;a href="https://twitter.com/cecilerobertm" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; &amp;amp; &lt;a href="https://github.com/CecileRobertMichon" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;) for her technical guidance on this article. Without her support, I wouldn’t have gotten this far and definitely would have missed a few key scenarios. Thanks Cecile!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Getting started with Loki and AKS</title>
      <dc:creator>Tommy Falgout</dc:creator>
      <pubDate>Fri, 01 Apr 2022 17:02:55 +0000</pubDate>
      <link>https://forem.com/lastcoolnameleft/getting-started-with-loki-and-aks-522e</link>
      <guid>https://forem.com/lastcoolnameleft/getting-started-with-loki-and-aks-522e</guid>
      <description>&lt;p&gt;Searching through application logs is a critical part of any operations team.  And as the Cloud Native ecosystem grows and evolves, more modern approaches for this use case are emerging.&lt;/p&gt;

&lt;p&gt;The thing about retaining logs is that the storage requirements can get big.  REALLY big.&lt;/p&gt;

&lt;p&gt;One of the most common log search and indexing tools is &lt;a href="https://www.elastic.co/elasticsearch/" rel="noopener noreferrer"&gt;Elasticsearch&lt;/a&gt;.  Elasticsearch is exceptionally good at finding a needle in the haystack (e.g. &lt;code&gt;When did the string "Error message #123" occur in any copy of your application on March 17th&lt;/code&gt;).  It does this by &lt;a href="https://www.elastic.co/blog/what-is-an-elasticsearch-index" rel="noopener noreferrer"&gt;indexing the contents of the log message&lt;/a&gt;, which can significantly increase your storage consumption.&lt;/p&gt;

&lt;p&gt;The enthusiastic team at Grafana created &lt;a href="https://grafana.com/oss/loki/" rel="noopener noreferrer"&gt;Loki&lt;/a&gt; to address this problem.  Instead of indexing the full log message, Loki only indexes the metadata (e.g. label, namespace, etc.) of the log, significantly reducing your storage needs.  You can still search for the &lt;a href="https://grafana.com/docs/loki/latest/logql/log_queries/" rel="noopener noreferrer"&gt;content of the log messages with LogQL&lt;/a&gt;, but it's not indexed.&lt;/p&gt;
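&lt;p&gt;For example, a LogQL query filters first on the indexed labels, then greps the unindexed message text (the label values here are illustrative):&lt;/p&gt;

```
{namespace="grafana", app="loki"} |= "Error message #123"
```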

&lt;p&gt;The UI for Loki is &lt;a href="https://grafana.com/grafana/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt;, which you might already be familiar with if you're using &lt;a href="https://grafana.com/oss/prometheus/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Getting started with Loki on Azure Kubernetes Service (AKS) is pretty easy.  These instructions are inspired by the official &lt;a href="https://grafana.com/docs/loki/latest/getting-started/" rel="noopener noreferrer"&gt;Loki Getting Started&lt;/a&gt; steps with some modifications streamlined for AKS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Set some starter env-vars
AKS_RG=loki-rg
AKS_LOCATION=southcentralus
AKS_NAME=loki-aks

# Create the AKS cluster
az group create -n $AKS_RG -l $AKS_LOCATION
az aks create -n $AKS_NAME -g $AKS_RG
az aks get-credentials -n $AKS_NAME -g $AKS_RG


# Helm update and install
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Create a Helm release of Loki with Grafana + Prometheus using a PVC
# NOTE: This diverges from the Loki docs as it uses storageClassName=default instead of "standard" 
helm upgrade --install loki grafana/loki-stack --namespace grafana --set grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false,loki.persistence.enabled=true,loki.persistence.storageClassName=default,loki.persistence.size=5Gi

# The Helm installation uses a non-default password for Grafana.  This command fetches it.
# Should look like gtssNbfacGRYZFCa4f3CFmMuendaZzrf9so9VgLh
kubectl get secret loki-grafana -n grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

# Port-forward from the Grafana service (port 80) to your desktop (port 3000)
kubectl port-forward -n grafana svc/loki-grafana 3000:80

# In your browser, go to http://127.0.0.1:3000/
# User: admin
# Password: Output of the "kubectl get secret" command. 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you're ready to start exploring Loki!&lt;/p&gt;

&lt;p&gt;We'll start by using Loki to look at Loki's own logs.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Hover over the "Explore" icon (Looks like a compass)&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3awpbzc0uih6n3i2zx51.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3awpbzc0uih6n3i2zx51.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select "Loki" from the Data Sources menu&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3d0d9lgvoqnco7svdapi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3d0d9lgvoqnco7svdapi.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click "Log Browser", which will open up a panel&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under "1. Select labels to search in", click "app"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under "2. Find values for the selected labels", click "loki"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under "3. Resulting selector", click "Show logs"&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
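&lt;p&gt;Those clicks build the "Resulting selector" for you; typing it directly into the query field is equivalent:&lt;/p&gt;

```
{app="loki"}
```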

&lt;p&gt;You should now have a view of the Loki logs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkivlveljvcrr600f05n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkivlveljvcrr600f05n.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congrats!  You've now created an AKS cluster, deployed Loki and Grafana on it, exposed the Grafana endpoint to your desktop and browsed Loki logs using Loki.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Azure Private Link Service + Load Balancer + AKS Limitation</title>
      <dc:creator>Tommy Falgout</dc:creator>
      <pubDate>Mon, 21 Mar 2022 18:26:43 +0000</pubDate>
      <link>https://forem.com/lastcoolnameleft/azure-private-link-service-load-balancer-aks-limitation-44db</link>
      <guid>https://forem.com/lastcoolnameleft/azure-private-link-service-load-balancer-aks-limitation-44db</guid>
      <description>&lt;p&gt;As a Cloud Solution Architect for Microsoft, I'm privileged to work with some great companies which have unique challenges.&lt;/p&gt;

&lt;p&gt;One of our large partners was migrating their solution from AWS to Azure.  Their configuration exposes 10+ services inside Azure Kubernetes Service (AKS) to their customer inside a different Azure Tenant and Subscription through &lt;a href="https://dev.to/lastcoolnameleft/aks-private-link-service-private-endpoint-2kak"&gt;Private Link Service and Private Endpoints&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Flastcoolnameleft%2Fkubernetes-examples%2Fmaster%2Farchitectures%2Fprivate-link-endpoint%2Fprivate-link-endpoint-multiple.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Flastcoolnameleft%2Fkubernetes-examples%2Fmaster%2Farchitectures%2Fprivate-link-endpoint%2Fprivate-link-endpoint-multiple.png" alt="Multiple Private Link Endpoint"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The issue, at the time of writing, is that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A single AKS cluster can only have 1 &lt;a href="https://docs.microsoft.com/en-us/azure/aks/internal-lb" rel="noopener noreferrer"&gt;Internal Standard Load Balancer&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A single &lt;a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits#private-link-limits" rel="noopener noreferrer"&gt;Load Balancer can only have 8 Private Links&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means they could expose only 8 of their 10+ services.&lt;/p&gt;

&lt;p&gt;Unfortunately, &lt;a href="https://github.com/Azure/AKS/issues/2174" rel="noopener noreferrer"&gt;the feature to enable multiple load balancers&lt;/a&gt; is not currently available in AKS.&lt;/p&gt;

&lt;p&gt;After talking to other AKS experts, we proposed the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use 1 PLS&lt;/li&gt;
&lt;li&gt;Use 1 LB&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/aks/internal-lb#specify-an-ip-address" rel="noopener noreferrer"&gt;Specify the SAME IP ADDRESS&lt;/a&gt; as part of &lt;code&gt;spec.loadBalancerIP&lt;/code&gt; in the Service YAML and use different ports for each service&lt;/li&gt;
&lt;/ul&gt;
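&lt;p&gt;A minimal sketch of the Service side (names, IP, and ports here are hypothetical; the IP must be an unused address in the AKS subnet):&lt;/p&gt;

```yaml
# Two internal LoadBalancer Services pinned to the SAME frontend IP
apiVersion: v1
kind: Service
metadata:
  name: svc-a
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25   # same IP for every service
  selector:
    app: svc-a
  ports:
  - port: 9000
    targetPort: 9898
---
apiVersion: v1
kind: Service
metadata:
  name: svc-b
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25   # same IP, different port
  selector:
    app: svc-b
  ports:
  - port: 9001
    targetPort: 9898
```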

&lt;p&gt;This allowed them to reduce the number of Private Endpoints, reduce their operational complexity as well as use &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noopener noreferrer"&gt;Kubernetes native Port Mapping&lt;/a&gt; with minimal architectural change.&lt;/p&gt;

&lt;p&gt;We reviewed this with the partner and after some Helm chart + Terraform work, this met their needs swimmingly.&lt;/p&gt;

&lt;p&gt;Mission Accomplished.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Walkthrough of AKS + Private Link Service + Private Endpoint</title>
      <dc:creator>Tommy Falgout</dc:creator>
      <pubDate>Fri, 11 Mar 2022 19:50:56 +0000</pubDate>
      <link>https://forem.com/lastcoolnameleft/aks-private-link-service-private-endpoint-2kak</link>
      <guid>https://forem.com/lastcoolnameleft/aks-private-link-service-private-endpoint-2kak</guid>
      <description>&lt;p&gt;This walkthrough shows how to setup a Private Link Service with an AKS cluster and create a Private Endpoint in a separate Vnet.&lt;/p&gt;

&lt;p&gt;While many tutorials might give you a full ARM template, this is designed as a walkthrough done entirely from the CLI, so you can understand what's happening at every step of the process.&lt;/p&gt;

&lt;p&gt;It focuses on an "uninteresting" workload and uses &lt;a href="https://github.com/stefanprodan/podinfo" rel="noopener noreferrer"&gt;podinfo&lt;/a&gt; as the sample app.  This is because it's easy to deploy and customize with a sample Helm chart.&lt;/p&gt;

&lt;p&gt;This is inspired and leans heavily on the Azure Docs for &lt;a href="https://docs.microsoft.com/en-us/azure/private-link/create-private-link-service-cli" rel="noopener noreferrer"&gt;creating a Private Link Service&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you're interested in more samples, check out my &lt;a href="https://github.com/lastcoolnameleft/kubernetes-examples" rel="noopener noreferrer"&gt;Kubernetes Examples repo in GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Flastcoolnameleft%2Fkubernetes-examples%2Fmaster%2Fservice%2Fprivate-link-endpoint-service.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Flastcoolnameleft%2Fkubernetes-examples%2Fmaster%2Fservice%2Fprivate-link-endpoint-service.svg" alt="Private Link Endpoint Service"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/cli/azure/" rel="noopener noreferrer"&gt;Azure CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stedolan.github.io/jq/" rel="noopener noreferrer"&gt;jq&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Assumptions
&lt;/h2&gt;

&lt;p&gt;This walkthrough assumes you let Azure create the Vnet when creating the AKS cluster.  If you manually created the Vnet, the general steps are the same, except you must set the &lt;code&gt;AKS_MC_VNET&lt;/code&gt; and &lt;code&gt;AKS_MC_SUBNET&lt;/code&gt; env vars manually.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Steps
&lt;/h2&gt;

&lt;p&gt;First, create a sample AKS cluster and install Podinfo on it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Set these values
AKS_NAME=
AKS_RG=
LOCATION=

# Create the resource group and AKS cluster, then fetch credentials
az group create -n $AKS_RG -l $LOCATION
az aks create -n $AKS_NAME -g $AKS_RG
az aks get-credentials -n $AKS_NAME -g $AKS_RG

# Get the MC Resource Group
AKS_MC_RG=$(az aks show -n $AKS_NAME -g $AKS_RG | jq -r '.nodeResourceGroup')
echo $AKS_MC_RG

# Get the Vnet Name
AKS_MC_VNET=$(az network vnet list -g $AKS_MC_RG | jq -r '.[0].name')
echo $AKS_MC_VNET

AKS_MC_SUBNET=$(az network vnet subnet list -g $AKS_MC_RG --vnet-name $AKS_MC_VNET | jq -r '.[0].name')
echo $AKS_MC_SUBNET

AKS_MC_LB_INTERNAL=kubernetes-internal

# NOTE: kubernetes-internal (and its frontend config) only exists after the
# first internal LoadBalancer Service is created; if this returns nothing,
# re-run it after the helm install below.
AKS_MC_LB_INTERNAL_FE_CONFIG=$(az network lb rule list -g $AKS_MC_RG --lb-name=$AKS_MC_LB_INTERNAL | jq -r '.[0].frontendIpConfiguration.id')
echo $AKS_MC_LB_INTERNAL_FE_CONFIG

# Add the podinfo chart repo (used by the helm install below)
helm repo add podinfo https://stefanprodan.github.io/podinfo
helm repo update

# Deploy a sample app using an Internal LB
helm upgrade --install --wait podinfo-internal-lb \
    --set-string service.annotations."service\.beta\.kubernetes\.io\/azure-load-balancer-internal"=true \
    --set service.type=LoadBalancer \
    --set ui.message=podinfo-internal-lb \
    podinfo/podinfo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Install Steps - Create the Private Link Service
&lt;/h2&gt;

&lt;p&gt;These steps will be done in the MC_ resource group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Disable the private link service network policies
az network vnet subnet update \
    --name $AKS_MC_SUBNET \
    --resource-group $AKS_MC_RG \
    --vnet-name $AKS_MC_VNET \
    --disable-private-link-service-network-policies true


# Create the PLS
PLS_NAME=aks-pls
az network private-link-service create \
    --resource-group $AKS_MC_RG \
    --name $PLS_NAME \
    --vnet-name $AKS_MC_VNET \
    --subnet $AKS_MC_SUBNET \
    --lb-name $AKS_MC_LB_INTERNAL \
    --lb-frontend-ip-configs $AKS_MC_LB_INTERNAL_FE_CONFIG

# Capture the PLS resource ID; it is needed to create the Private Endpoint below
PLS_ID=$(az network private-link-service show \
    --resource-group $AKS_MC_RG \
    --name $PLS_NAME \
    --query id \
    --output tsv)
echo $PLS_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Install Steps - Create the Private Endpoint
&lt;/h2&gt;

&lt;p&gt;These steps will be done in our &lt;code&gt;private-endpoint-rg&lt;/code&gt; resource group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PE_RG=private-endpoint-rg
az group create \
    --name $PE_RG \
    --location $LOCATION

PE_VNET=pe-vnet
PE_SUBNET=pe-subnet

az network vnet create \
    --resource-group $PE_RG \
    --name $PE_VNET \
    --address-prefixes 10.0.0.0/16 \
    --subnet-name $PE_SUBNET \
    --subnet-prefixes 10.0.0.0/24

# Disable the private link service network policies
az network vnet subnet update \
    --name $PE_SUBNET \
    --resource-group $PE_RG \
    --vnet-name $PE_VNET \
    --disable-private-endpoint-network-policies true


PE_CONN_NAME=pe-conn
PE_NAME=pe
az network private-endpoint create \
    --connection-name $PE_CONN_NAME \
    --name $PE_NAME \
    --private-connection-resource-id $PLS_ID \
    --resource-group $PE_RG \
    --subnet $PE_SUBNET \
    --manual-request false \
    --vnet-name $PE_VNET

# We need the NIC ID to get the newly created Private IP
PE_NIC_ID=$(az network private-endpoint show -g $PE_RG --name $PE_NAME -o json | jq -r '.networkInterfaces[0].id')
echo $PE_NIC_ID

# Get the Private IP from the NIC
PE_IP=$(az network nic show --ids $PE_NIC_ID -o json | jq -r '.ipConfigurations[0].privateIpAddress')
echo $PE_IP

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Validation Steps - Create a VM
&lt;/h2&gt;

&lt;p&gt;Lastly, validate that this works by creating a VM in the Vnet with the Private Endpoint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;VM_NAME=ubuntu
az vm create \
    --resource-group $PE_RG \
    --name $VM_NAME \
    --image UbuntuLTS \
    --public-ip-sku Standard \
    --vnet-name $PE_VNET \
    --subnet $PE_SUBNET \
    --admin-username $USER \
    --ssh-key-values ~/.ssh/id_rsa.pub

VM_PIP=$(az vm list-ip-addresses -g $PE_RG -n $VM_NAME | jq -r '.[0].virtualMachine.network.publicIpAddresses[0].ipAddress')
echo $VM_PIP

# SSH into the host
ssh $VM_PIP

$ curl &amp;lt;Copy the value from $PE_IP&amp;gt;:9898

# The output should look like:
$ curl 10.0.0.5:9898
{
  "hostname": "podinfo-6ff68cbf88-cxcvv",
  "version": "6.0.3",
  "revision": "",
  "color": "#34577c",
  "logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif",
  "message": "podinfo-internal-lb",
  "goos": "linux",
  "goarch": "amd64",
  "runtime": "go1.16.9",
  "num_goroutine": "9",
  "num_cpu": "2"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Multiple PLS/PE
&lt;/h2&gt;

&lt;p&gt;To test a specific use case, I wanted to create multiple PLSs and PEs.  This set of instructions lets you easily loop through and create multiple instances.&lt;/p&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# podinfo requires a high numbered port, eg 9000+

SUFFIX=9000
helm upgrade --install --wait podinfo-$SUFFIX \
    --set-string service.annotations."service\.beta\.kubernetes\.io\/azure-load-balancer-internal"=true \
    --set service.type=LoadBalancer \
    --set service.httpPort=$SUFFIX \
    --set service.externalPort=$SUFFIX \
    --set ui.message=podinfo-$SUFFIX \
    podinfo/podinfo

# This might be easier to hard-code
AKS_MC_LB_INTERNAL_FE_CONFIG=$(az network lb rule list -g $AKS_MC_RG --lb-name=$AKS_MC_LB_INTERNAL -o json | jq -r ".[] | select( .backendPort == $SUFFIX) | .frontendIpConfiguration.id")
echo $AKS_MC_LB_INTERNAL_FE_CONFIG

PLS_NAME=aks-pls-$SUFFIX
PE_CONN_NAME=pe-conn-$SUFFIX
PE_NAME=pe-$SUFFIX

az network private-link-service create \
    --resource-group $AKS_MC_RG \
    --name $PLS_NAME \
    --vnet-name $AKS_MC_VNET \
    --subnet $AKS_MC_SUBNET \
    --lb-name $AKS_MC_LB_INTERNAL \
    --lb-frontend-ip-configs $AKS_MC_LB_INTERNAL_FE_CONFIG

PLS_ID=$(az network private-link-service show \
    --name $PLS_NAME \
    --resource-group $AKS_MC_RG \
    --query id \
    --output tsv)
echo $PLS_ID

az network private-endpoint create \
    --connection-name $PE_CONN_NAME \
    --name $PE_NAME \
    --private-connection-resource-id $PLS_ID \
    --resource-group $PE_RG \
    --subnet $PE_SUBNET \
    --manual-request false \
    --vnet-name $PE_VNET

PE_NIC_ID=$(az network private-endpoint show -g $PE_RG --name $PE_NAME -o json | jq -r '.networkInterfaces[0].id')
echo $PE_NIC_ID

PE_IP=$(az network nic show --ids $PE_NIC_ID -o json | jq -r '.ipConfigurations[0].privateIpAddress')
echo $PE_IP

echo "From your Private Endpoint VM run: curl $PE_IP:$SUFFIX"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>Hi, I'm Tommy Falgout</title>
      <dc:creator>Tommy Falgout</dc:creator>
      <pubDate>Sat, 14 Jan 2017 16:01:59 +0000</pubDate>
      <link>https://forem.com/lastcoolnameleft/hi-im-tommy-falgout</link>
      <guid>https://forem.com/lastcoolnameleft/hi-im-tommy-falgout</guid>
      <description>&lt;p&gt;I have been coding for 20+ years.&lt;/p&gt;

&lt;p&gt;You can find me on GitHub as &lt;a href="https://github.com/lastcoolnameleft" rel="noopener noreferrer"&gt;lastcoolnameleft&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I live in Plano, TX.&lt;/p&gt;

&lt;p&gt;I work for Microsoft&lt;/p&gt;

&lt;p&gt;Nice to meet you.&lt;/p&gt;

</description>
      <category>introduction</category>
    </item>
  </channel>
</rss>
