<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Florian Lutz</title>
    <description>The latest articles on Forem by Florian Lutz (@florianlutz).</description>
    <link>https://forem.com/florianlutz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F715712%2F1ff783d8-73e8-461e-9a9c-a4782f14130e.png</url>
      <title>Forem: Florian Lutz</title>
      <link>https://forem.com/florianlutz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/florianlutz"/>
    <language>en</language>
    <item>
      <title>Take control of your AI. In my recent blog post, I show how to run an inference model locally and fully airgapped - no cloud, no third parties, just open source - so your data never leaves your environment. Set the baseline for your local AI Workloads.</title>
      <dc:creator>Florian Lutz</dc:creator>
      <pubDate>Thu, 25 Sep 2025 14:21:26 +0000</pubDate>
      <link>https://forem.com/florianlutz/take-control-of-your-ai-in-my-recent-blog-post-i-show-how-to-run-an-inference-model-locally-and-4ef4</link>
      <guid>https://forem.com/florianlutz/take-control-of-your-ai-in-my-recent-blog-post-i-show-how-to-run-an-inference-model-locally-and-4ef4</guid>
      <description>&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/florianlutz/setting-up-an-airgapped-llm-using-ollama-2il4" class="crayons-story__hidden-navigation-link"&gt;Setting up an airgapped LLM using Ollama&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/florianlutz" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F715712%2F1ff783d8-73e8-461e-9a9c-a4782f14130e.png" alt="florianlutz profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/florianlutz" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Florian Lutz
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Florian Lutz
                
              
              &lt;div id="story-author-preview-content-2642129" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/florianlutz" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F715712%2F1ff783d8-73e8-461e-9a9c-a4782f14130e.png" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Florian Lutz&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/florianlutz/setting-up-an-airgapped-llm-using-ollama-2il4" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Jul 11 '25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/florianlutz/setting-up-an-airgapped-llm-using-ollama-2il4" id="article-link-2642129"&gt;
          Setting up an airgapped LLM using Ollama
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ai&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/airgapped"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;airgapped&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/security"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;security&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/tutorial"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;tutorial&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/florianlutz/setting-up-an-airgapped-llm-using-ollama-2il4" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;3&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/florianlutz/setting-up-an-airgapped-llm-using-ollama-2il4#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            4 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


</description>
      <category>ai</category>
      <category>airgapped</category>
      <category>security</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Setting up an airgapped LLM using Ollama</title>
      <dc:creator>Florian Lutz</dc:creator>
      <pubDate>Fri, 11 Jul 2025 11:38:23 +0000</pubDate>
      <link>https://forem.com/florianlutz/setting-up-an-airgapped-llm-using-ollama-2il4</link>
      <guid>https://forem.com/florianlutz/setting-up-an-airgapped-llm-using-ollama-2il4</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This article shows how to set up a local or even air-gapped LLM using Ollama and either Podman or Docker. This can serve as a basis for Gen AI work on restricted or secret data. I'm also using this setup for a chat bot that can search through secret documents, and for a coding assistant that I use with a VS Code extension to get development support on classified code projects.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Info: all the Podman commands used here, except for the volume import and export, work for Docker as well if you just replace "podman" with "docker".&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cache yourself the desired LLMs
&lt;/h2&gt;

&lt;p&gt;The first step in setting up an air-gapped LLM is to make the LLM itself accessible to the containers running in network isolation. For that, I'm going to export the volume from a non-air-gapped container that is already provisioned with the desired LLMs, and import it into Podman on the isolated machine.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Info: I could of course just attach the same volume to both the unrestricted and the air-gapped containers, but in my specific case I even did the export of the volume on a separate machine, just to be safe. Transferring the exported volume file is something I'm sure you can manage without explanation.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To cache the desired LLMs, I'm first going to set up an Ollama container, using the official Ollama image from Docker Hub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;podman run -d \
-v ollama:/root/.ollama \
-p 11434:11434 \
--name ollama docker.io/ollama/ollama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the container is set up, I can run this command for each of the LLMs I'd like to have inside of the air-gapped container. To find these LLM options, I just browsed through the &lt;a href="https://ollama.com/library" rel="noopener noreferrer"&gt;Ollama Library&lt;/a&gt;:&lt;br&gt;
&lt;code&gt;podman exec -it ollama ollama pull &amp;lt;your desired LLM&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Having pulled all the LLMs I've ever dreamed of (but not too many, since the export and import of the volume take quite some time due to its size), I'm now ready to export the volume and transfer the file to the air-gapped machine:&lt;br&gt;
&lt;code&gt;podman volume export ollama --output ollama.tar&lt;/code&gt;&lt;/p&gt;
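&lt;p&gt;&lt;em&gt;An optional extra step (a sketch, not part of the original walkthrough): since the archive is large and gets moved across an air gap, it can be worth checksumming it before the transfer and verifying it afterwards on the air-gapped machine. The filename matches the export step above:&lt;/em&gt;&lt;br&gt;
&lt;code&gt;sha256sum ollama.tar &amp;gt; ollama.tar.sha256&lt;/code&gt;&lt;br&gt;
&lt;code&gt;sha256sum -c ollama.tar.sha256&lt;/code&gt;&lt;/p&gt;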
&lt;h2&gt;
  
  
  Create the Volume and restore the cached LLMs
&lt;/h2&gt;

&lt;p&gt;On the air-gapped machine, I can now create a new volume using the following command:&lt;br&gt;
&lt;code&gt;podman volume create ollama&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now all that's left to do is to bring life (or data) to the newly created volume. For that I'm running:&lt;br&gt;
&lt;code&gt;podman volume import ollama ollama.tar&lt;/code&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Exporting and importing volumes works in Docker as well; however, this requires an active Docker Desktop license. I also only found a way to do this through the Docker Desktop UI, not via any CLI commands.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd86fjvkcidwa66lgxg0v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd86fjvkcidwa66lgxg0v.png" alt="Importing Volumes in Docker" width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Configure the Network to be internal
&lt;/h2&gt;

&lt;p&gt;To set up the network around the container to be air-gapped, there are different options. The easiest is to use the network option &lt;code&gt;--internal&lt;/code&gt;, which keeps the container isolated from the host network while still allowing other containers to join the same network:&lt;br&gt;
&lt;code&gt;podman network create ollama-internal-network --internal&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;However, you often still want to reach the container from outside the Podman console interface. That is true in my use case as well, since the container needs to be reachable by either the VS Code extension acting as a coding assistant, or by some sort of chatbot frontend, which itself I need to reach as well. Because of that, I create the network without the internal option and instead assign fine-tuned firewall rules to it using the kernel-level rules of &lt;em&gt;iptables&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;So first I create the network:&lt;br&gt;
&lt;code&gt;podman network create ollama-internal-network&lt;/code&gt;&lt;/p&gt;
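&lt;p&gt;&lt;em&gt;As a sketch (the frontend container and image names here are assumptions, not from the original post): any other container attached to the same user-defined network can reach the Ollama container by its container name through Podman's built-in DNS, for example a chatbot frontend:&lt;/em&gt;&lt;br&gt;
&lt;code&gt;podman run -d --network ollama-internal-network --name chatbot-frontend &amp;lt;your frontend image&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;em&gt;Inside that container, the Ollama API would then be reachable at &lt;code&gt;http://ollama-internal:11434&lt;/code&gt;, using the container name assigned later in this post.&lt;/em&gt;&lt;/p&gt;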

&lt;p&gt;Then I retrieve the address range of this specific network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;networkAddressSpace=$(podman network inspect ollama-internal-network \
--format '{{range .Subnets}}{{.Subnet}}{{end}}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then I assign the specific firewall rules using &lt;em&gt;iptables&lt;/em&gt;. In my case, I only care about the container not reaching the outside world, so I disable any egress from the network (from the address space of the network, to be precise):&lt;br&gt;
&lt;code&gt;iptables -A OUTPUT -s $networkAddressSpace -j DROP&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This now prevents my container from reaching the outside world by dropping all outgoing packets.&lt;/p&gt;
&lt;h2&gt;
  
  
  Create the internal Docker Container
&lt;/h2&gt;

&lt;p&gt;Having set up the isolated network for my container and prepared the volume with the LLMs, I can now set up the container for my use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;podman run -d -v ollama:/root/.ollama \ 
-p 11434:11434 \
--network ollama-internal-network \
--name ollama-internal ollama/ollama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the correct network assigned, the container can no longer reach the outside world, so my data should be safe.&lt;/p&gt;
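&lt;p&gt;&lt;em&gt;A quick way to double-check the isolation (a sketch, not part of the original post): a pull attempt from inside the air-gapped container should now fail with a network error instead of downloading anything:&lt;/em&gt;&lt;br&gt;
&lt;code&gt;podman exec -it ollama-internal ollama pull mistral&lt;/code&gt;&lt;/p&gt;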

&lt;h2&gt;
  
  
  Test with a cached LLM
&lt;/h2&gt;

&lt;p&gt;To test whether the volume import and the assignment to the container were successful, I send the &lt;code&gt;ollama run &amp;lt;myLLM&amp;gt;&lt;/code&gt; command to the container and check whether the LLM is actually available:&lt;br&gt;
&lt;code&gt;podman exec -it ollama-internal ollama run mistral&lt;/code&gt;&lt;/p&gt;
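&lt;p&gt;&lt;em&gt;Alternatively (a sketch, assuming the published port 11434 is reachable from the host), the Ollama HTTP API can be used to list the cached models:&lt;/em&gt;&lt;br&gt;
&lt;code&gt;curl http://localhost:11434/api/tags&lt;/code&gt;&lt;br&gt;
&lt;em&gt;This returns a JSON document listing every model restored from the imported volume.&lt;/em&gt;&lt;/p&gt;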

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With my Ollama container now set up in a secure way, I can either throw prompts containing restricted information directly at the LLMs, or I can use this as a basis to set up a secure coding assistant or even a chatbot with knowledge of secret documents and files.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>airgapped</category>
      <category>security</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Azure Machine Learning Workspace secure Networking Setup🛡️🔒🔑</title>
      <dc:creator>Florian Lutz</dc:creator>
      <pubDate>Sat, 11 Jan 2025 09:30:00 +0000</pubDate>
      <link>https://forem.com/florianlutz/how-to-setup-an-azure-machine-learning-workspace-securely-19de</link>
      <guid>https://forem.com/florianlutz/how-to-setup-an-azure-machine-learning-workspace-securely-19de</guid>
      <description>&lt;h1&gt;
  
  
  TLDR
&lt;/h1&gt;

&lt;p&gt;In this post, I'm covering how to set up an Azure ML Workspace in a secure way. The focus is set on networking and integrating the Service with other Azure Services. If you're not interested in the details of this, and just look for a blueprint or template, you can check out &lt;a href="https://github.com/fl-lutz/azure-ml-secure-blueprint?tab=readme-ov-file#azure-ml-secure-blueprint" rel="noopener noreferrer"&gt;this Github Repo&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of contents
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;Motivation&lt;/li&gt;
&lt;li&gt;Objective&lt;/li&gt;
&lt;li&gt;Architecture&lt;/li&gt;
&lt;li&gt;Deployment&lt;/li&gt;
&lt;li&gt;Prospect&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Motivation
&lt;/h1&gt;

&lt;p&gt;In a recent project of mine, I've set up an Azure Machine Learning Workspace (AMLW). Upon research, I've realized that there is no easy blueprint combining the setup of a Workspace with secured networking and integration with other Azure services. Additionally, the available documentation was not always 100% clear to me.&lt;/p&gt;

&lt;p&gt;Usually, in these cases, I'd just go ahead and set up the resources in the Azure Portal to understand how they interact and can be integrated. However, in the case of the AMLW, I was not able to achieve the target architecture with the options available in the portal.&lt;/p&gt;

&lt;p&gt;These difficulties led me to post about the secure setup of this resource here. Perhaps some large language models (LLMs) are processing this blog post and provide answers to desperate Cloud Engineers.&lt;/p&gt;

&lt;h1&gt;
  
  
  Objective
&lt;/h1&gt;

&lt;p&gt;The idea of this post is to generalize my learnings with the AMLW and create a blueprint from the secure and integrated setup for others to use. The focus will be on the following topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Networking integration with other Azure services.&lt;/li&gt;
&lt;li&gt;Configuration of a Managed Identity for the networking setup to function properly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Architecture
&lt;/h1&gt;

&lt;p&gt;The architecture of this blueprint as shown below contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure Machine Learning Workspace (the point of the post)&lt;/li&gt;
&lt;li&gt;Storage Account (essential resource to AMLW)&lt;/li&gt;
&lt;li&gt;Container Registry (essential resource to AMLW)&lt;/li&gt;
&lt;li&gt;Key Vault (essential resource to AMLW)&lt;/li&gt;
&lt;li&gt;Solution Vnet (to secure all Azure Services)&lt;/li&gt;
&lt;li&gt;OpenAI Workspace (the Azure Service to integrate with AMLW)&lt;/li&gt;
&lt;li&gt;Application Insights&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Storage Account, Container Registry, and Key Vault are essential resources required for the deployment of the AMLW. These resources are integrated into both the Solution Virtual Network (VNet) and the Workspace Managed VNet. The OpenAI Workspace is utilized in this architecture to demonstrate how the AMLW can be integrated with various other Platform as a Service (PaaS) offerings on Azure. Additionally, the Application Insights instance and the Jumphost Virtual Machine (VM) serve as supporting resources to facilitate network access and enhance observability of the solution. The Jumphost VM is accessible via Azure Bastion. Aside from the VM and the AMLW, all networking integrations are configured using Private Endpoints.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8hawzu582gtvruatzv8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8hawzu582gtvruatzv8.png" alt="Image description" width="537" height="694"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Deployment
&lt;/h1&gt;

&lt;p&gt;The deployment of the resources will be done using a Bicep resource group deployment. For that, a resource group needs to be set up.&lt;/p&gt;
&lt;h3&gt;
  
  
  Deploy Resource Group
&lt;/h3&gt;

&lt;p&gt;Setting up the Resource Group using the Azure CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$rgName = "ml-secure-blueprint"
$location = "germanywestcentral"
az group create --name $rgName --location $location
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Details of the Azure Machine Learning Workspace
&lt;/h3&gt;

&lt;p&gt;In the bicep module of the AMLW, along with standard properties such as &lt;em&gt;name&lt;/em&gt;, &lt;em&gt;location&lt;/em&gt;, &lt;em&gt;identity&lt;/em&gt;, and the &lt;em&gt;resource IDs&lt;/em&gt; for essential services like Key Vault, Storage Account, and Container Registry, the most intriguing component is the &lt;code&gt;managedNetwork&lt;/code&gt; object. This object primarily includes configuration for the AMLW's networking options. In this instance, I have set &lt;code&gt;AllowInternetOutbound&lt;/code&gt; to ensure that outbound traffic is not restricted.&lt;/p&gt;

&lt;p&gt;For integration with other Azure services, you can define custom outbound rule objects. In my case, I named the rule &lt;code&gt;allowOpenAi&lt;/code&gt;, but you can choose any name you prefer. The type must be set to &lt;code&gt;PrivateEndpoint&lt;/code&gt;, and within the &lt;code&gt;destination&lt;/code&gt; object, you can specify your target Resource ID and the subresource target, which you can find on &lt;a href="https://learn.microsoft.com/en-us/azure/private-link/private-endpoint-dns" rel="noopener noreferrer"&gt;this site&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource machineLearningWorkspace 'Microsoft.MachineLearningServices/workspaces@2024-07-01-preview' = {
  name: machineLearningWorkspaceName
  location: location
  identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '${managedIdentityId}': {}
    }
  }
  properties: {
    applicationInsights: applicationInsightsId
    storageAccount: storageAccountId
    containerRegistry: containerRegistryId
    keyVault: keyVaultId
    imageBuildCompute: 'cpu-compute'
    primaryUserAssignedIdentity: managedIdentityId
    publicNetworkAccess: 'Disabled'
    managedNetwork: {
      isolationMode: 'AllowInternetOutbound'
      outboundRules: {
        allowOpenAi: {
          type: 'PrivateEndpoint'
          destination: {
            serviceResourceId: openAiWorkspaceId
            sparkEnabled: true
            subresourceTarget: 'account'
          }
        }
      }
    }
  }
  sku: {
    name: 'Basic'
    tier: 'Basic'
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configuration of the Managed Identity
&lt;/h3&gt;

&lt;p&gt;The configuration of the Managed Identity of the AMLW to set up the network details correctly is shown in the snippet below. The important part is that the Identity has the Azure AI Enterprise Network Connection Approver Role assigned in a context where it is authorized not only to all the essential resources of the AMLW but also to all resources that you want to integrate the AMLW with. I chose to assign this role to the entire resource group because only these resources are within this resource group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@description('The managed identity name.')
param managedIdentityName string

@description('The main location.')
param location string

@description('The network connection approver role definition id.')
var networkConnectionApproverRoleDefinitionId = 'b556d68e-0be0-4f35-a333-ad7ee1ce17ea'

resource managedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2023-01-31' = {
  name: managedIdentityName
  location: location
}

resource connectionApproverAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(resourceGroup().id, networkConnectionApproverRoleDefinitionId, managedIdentity.name)
  properties: {
    principalId: managedIdentity.properties.principalId
    roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', networkConnectionApproverRoleDefinitionId)
    principalType: 'ServicePrincipal'
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deploy Bicep Script Resources
&lt;/h3&gt;

&lt;p&gt;The deployment is being created using the Azure CLI. Besides setting the password for the Jumphost VM, you can configure several parameters in the &lt;code&gt;app-parameters.json&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$vmPassword = "&amp;lt;your password here&amp;gt;"
az deployment group create `
    --resource-group $rgName `
    --template-file app-infrastructure.bicep `
    --parameters @app-parameters.json `
    --parameters "vmAdminPassword=$vmPassword" `
    --name $rgName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The parameters file might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "instance": {
            "value": "dev"
        },
        "prefix": {
            "value": "blueprint"
        },
        "location": {
            "value": "westeurope"
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deploy the ML Workspace Network
&lt;/h3&gt;

&lt;p&gt;To complete the deployment, you need to initiate the network deployment for the AMLW. This step is necessary because, as explained &lt;a href="https://docs.azure.cn/en-us/machine-learning/tutorial-create-secure-workspace?view=azureml-api-2#connect-to-studio" rel="noopener noreferrer"&gt;here&lt;/a&gt;, the managed virtual network for an AMLW is not automatically created during its initial deployment; it is provisioned only when required. To ensure that the managed network is deployed, simply execute the script shown below to force its creation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$subscriptionId = "&amp;lt;your Subscription ID&amp;gt;"
$mlWorkspaceName = "&amp;lt;name of your ML Workspace&amp;gt;"
az ml workspace provision-network `
    --subscription $subscriptionId `
    --resource-group $rgName `
    --name $mlWorkspaceName 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this script finishes, the Private Endpoints will be automatically set up for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key Vault&lt;/li&gt;
&lt;li&gt;Container Registry&lt;/li&gt;
&lt;li&gt;Storage Account&lt;/li&gt;
&lt;li&gt;OpenAI Workspace&lt;/li&gt;
&lt;/ul&gt;
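&lt;p&gt;&lt;em&gt;As a sketch (assuming the Azure CLI &lt;code&gt;ml&lt;/code&gt; extension is installed), the configured outbound rules, including the &lt;code&gt;allowOpenAi&lt;/code&gt; rule from the Bicep snippet above, can also be inspected from the CLI:&lt;/em&gt;&lt;br&gt;
&lt;code&gt;az ml workspace outbound-rule list --resource-group $rgName --workspace-name $mlWorkspaceName&lt;/code&gt;&lt;/p&gt;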

&lt;p&gt;You can verify their status in the Azure Portal. From the AMLW, the OpenAI Workspace Private Endpoint should appear like this:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6cm9g24cbt0ow8g091m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6cm9g24cbt0ow8g091m.png" alt="PE on ML Studio View" width="800" height="91"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And on the OpenAI Workspace you should find the following configuration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ihx63gsbgy8aymciaw1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ihx63gsbgy8aymciaw1.png" alt="PE on OpenAI Workspace" width="800" height="104"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Prospect
&lt;/h1&gt;

&lt;p&gt;With your baseline networking setup established, you can now enhance your project by adding additional capabilities.&lt;/p&gt;

&lt;p&gt;One example could be implementing an Ingress solution. Depending on your use case and whether you wish to provide public access, this can be achieved using Azure Front Door or Application Gateway in conjunction with API Management. If you intend to grant access to users within your organization, you may consider setting up an ExpressRoute or a Site-to-Site VPN connection to your Solution VNet.&lt;/p&gt;

&lt;p&gt;Another capability you might need is a deployment agent. For more information on this, you can refer to one of my &lt;a href="https://dev.to/florianlutz/use-container-app-jobs-for-scaling-devops-build-agents-7j6"&gt;previous posts here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;PS: I've written this post without any AI Tools besides from spell checking and improving my wording.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>ai</category>
      <category>bicep</category>
      <category>networking</category>
    </item>
    <item>
      <title>Use Container App Jobs for scaling Devops Build Agents</title>
      <dc:creator>Florian Lutz</dc:creator>
      <pubDate>Wed, 24 May 2023 14:54:00 +0000</pubDate>
      <link>https://forem.com/florianlutz/use-container-app-jobs-for-scaling-devops-build-agents-7j6</link>
      <guid>https://forem.com/florianlutz/use-container-app-jobs-for-scaling-devops-build-agents-7j6</guid>
      <description>&lt;p&gt;Setting up Build Agents in Projects, where from different Reasons (networking, capacity, cost, ...) hosted Agents are no option has been quite unpleasing for some time. But now, with the Container App Jobs, this can be done quite easily and with the scaling that we all wanted.&lt;/p&gt;

&lt;p&gt;First, a short explanation of why Container App Jobs were needed to achieve this. The standard Container Apps already supported scaling up with a KEDA rule, so provisioning enough replicas according to the build agent queue was not a problem. The issue came up as soon as the first builds finished: scaling down replicas was somewhat random. As soon as the queue was empty and not all replicas had a job, any replica could get removed, so a running container often got shut down, which made the scaling option unusable. Container App Jobs solve this issue.&lt;/p&gt;

&lt;h1&gt;
  
  
  Preparing your Container
&lt;/h1&gt;

&lt;p&gt;To prepare your container, you can follow &lt;a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops" rel="noopener noreferrer"&gt;this guide&lt;/a&gt;. This article will only cover deploying the build agents from an image that is already in a container registry.&lt;/p&gt;

&lt;h1&gt;
  
  
  Create a Build Agent Pool and a Personal Access Token
&lt;/h1&gt;

&lt;p&gt;To create a build agent pool, navigate to Organization Settings in DevOps (if you don't have the permissions to do it at the organization level, just jump into the settings of a project). There, in the category Pipelines&amp;gt;Agent Pools, click on "Add pool" and select "Self-hosted" as the pool type. You can also just add the agents to an existing agent pool of type "Self-hosted".&lt;/p&gt;

&lt;p&gt;If you now open your agent pool (this time at the organization level, that's important) you can see the agent pool ID in the URL:&lt;br&gt;
&lt;code&gt;https://dev.azure.com/&amp;lt;organization&amp;gt;/_settings/agentpools?poolId=&amp;lt;&lt;strong&gt;poolId&lt;/strong&gt;&amp;gt;&amp;amp;view=jobs&lt;/code&gt;&lt;br&gt;
Note that ID somewhere.&lt;/p&gt;

&lt;p&gt;The next step is to create a Personal Access Token, which the containers can use to authenticate against the DevOps API and register themselves as build agents. To create one, click on User Settings (top right in DevOps) and jump to Personal Access Tokens. The token should have permission to Read &amp;amp; Manage Agent Pools. Generate the PAT and store it somewhere safe (e.g. a password manager).&lt;/p&gt;
&lt;h1&gt;
  
  
  Setup a Container Apps Environment
&lt;/h1&gt;

&lt;p&gt;If you already have a Container Apps Environment, you can reuse it. A very handy fact about Container Apps Jobs is that the high-level infrastructure configuration mostly lives on the Container Apps Environment. If you have an Environment that is already integrated into your networking infrastructure, just deploy your Container Apps Jobs into it and they will automatically be part of that network.&lt;/p&gt;

&lt;p&gt;For this article, we're going to stick to a simple Environment with no specific configuration. Because Container Apps Jobs are in Public Preview and only available via CLI or ARM, I am going to stick with the Azure CLI. The commands below use Bash-style line continuations ("\"); if you run them in PowerShell, replace each "\" with a backtick ("`").&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az containerapp env create \
    --name "&amp;lt;name&amp;gt;" \
    --location "westeurope" \
    --resource-group "&amp;lt;rg-name&amp;gt;" \
    --logs-workspace-id "&amp;lt;workspace-id&amp;gt;" \
    --logs-workspace-key "&amp;lt;workspace-key&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can leave out the workspace id and key; a Log Analytics workspace will then be provisioned for you automatically. Only specify them if you already have a workspace, to avoid duplication. You can get the key with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az monitor log-analytics workspace get-shared-keys \
   --name "&amp;lt;law-name&amp;gt;" \
   --resource-group "&amp;lt;rg-name&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Deploy the Container App Job
&lt;/h1&gt;

&lt;p&gt;Now you can deploy the Container App Job like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az containerapp job create \
    --name "&amp;lt;name&amp;gt;" \
    --resource-group "&amp;lt;rg-name&amp;gt;" \
    --environment "&amp;lt;containerappenvironment-name&amp;gt;" \
    --trigger-type "Event" \
    --replica-timeout 7200 \
    --replica-retry-limit 0 \
    --replica-completion-count 1 \
    --parallelism 1 \
    --polling-interval 1 \
    --image "&amp;lt;path to your image&amp;gt;" \
    --registry-server "&amp;lt;your container registry server&amp;gt;" \
    --registry-username "&amp;lt;cr-username&amp;gt;" \
    --registry-password "&amp;lt;cr-password&amp;gt;" \
    --cpu "0.25" --memory "0.5Gi" \
    --scale-rule-name "azure-pipelines" \
    --scale-rule-type "azure-pipelines" \
    --scale-rule-metadata "poolID=&amp;lt;poolId&amp;gt;" \
                          "organizationURLFromEnv=AZP_URL" \
                          "personalAccessTokenFromEnv=AZP_TOKEN" \
                          "poolName=AZP_POOL" \
    --secrets "azp-pool=&amp;lt;yourAgentPoolName&amp;gt;" \
              "azp-token=&amp;lt;yourPatToken&amp;gt;" \
              "azp-url=&amp;lt;yourDevopsOrgUrl&amp;gt;" \
    --env-vars "AZP_POOL=secretref:azp-pool" \
               "AZP_TOKEN=secretref:azp-token" \
               "AZP_URL=secretref:azp-url"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a lot at first, but we'll go through the important parts.&lt;/p&gt;

&lt;p&gt;First of all is the "replica-timeout" (in seconds). This needs to be balanced: you don't want to overspend on stuck build agents, but your pipeline runs also need enough time to finish. Just put in a bit more than your longest expected run. One tip, since we now have this awesome option: create one pool for short-running tasks and one for the long ones. Since they really scale to zero, it might save some money.&lt;/p&gt;

&lt;p&gt;The polling interval is also interesting: here you have to find the balance between faster scaling (shorter queue times) and lower cost (fewer polls against the DevOps API).&lt;/p&gt;

&lt;p&gt;To configure the KEDA scaling, you need to create secrets for azp-pool, azp-token and azp-url, reference them with the &lt;code&gt;secretref:&lt;/code&gt; notation in your environment variables, and pass those environment variable names into the scale-rule-metadata. This kind of configuration does not look very pleasing, but for a public preview it will do.&lt;/p&gt;
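&lt;p&gt;To verify the setup, you can trigger an execution by hand and inspect it, independent of the KEDA rule. A sketch with the same placeholder names as above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# start one execution manually
az containerapp job start \
    --name "&amp;lt;name&amp;gt;" \
    --resource-group "&amp;lt;rg-name&amp;gt;"

# list past and running executions
az containerapp job execution list \
    --name "&amp;lt;name&amp;gt;" \
    --resource-group "&amp;lt;rg-name&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;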

&lt;p&gt;Now all that is left to say is: thank god for the option of running Build Agent containers that don't require you to manage a K8s cluster or cart money to Microsoft because of missing scaling capabilities.&lt;/p&gt;

&lt;p&gt;Edit:&lt;br&gt;
You can now also find a template to deploy the Container Apps Jobs Build Agents using Powershell and Bicep. Check out &lt;a href="https://github.com/fl-lutz/container_apps_jobs_build_agent/tree/main" rel="noopener noreferrer"&gt;this Link&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>containerapps</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Advantages of Azure Powershell over Azure CLI</title>
      <dc:creator>Florian Lutz</dc:creator>
      <pubDate>Wed, 21 Dec 2022 07:13:02 +0000</pubDate>
      <link>https://forem.com/florianlutz/advantages-of-azure-powershell-over-azure-cli-m9l</link>
      <guid>https://forem.com/florianlutz/advantages-of-azure-powershell-over-azure-cli-m9l</guid>
      <description>&lt;p&gt;Azure PowerShell and Azure CLI are both powerful tools that can be used to manage resources in the Azure cloud platform. While both tools have their own unique advantages, Azure PowerShell has several key benefits that make it a popular choice among Azure administrators.&lt;/p&gt;

&lt;p&gt;One of the main advantages of Azure PowerShell is its ability to provide deep integration with the Azure platform. Azure PowerShell includes a wide range of cmdlets (command-line functions) that can be used to perform almost any task in Azure. This includes everything from creating and managing virtual machines to configuring networking and security.&lt;/p&gt;

&lt;p&gt;Another advantage of Azure PowerShell is its support for automation. Azure administrators can use Azure PowerShell to write scripts that automate complex tasks, which can save time and reduce the risk of errors. Azure PowerShell scripts can also be run on a schedule, making it easy to automate routine maintenance tasks.&lt;/p&gt;

&lt;p&gt;In addition to automation, Azure PowerShell also offers advanced features such as the ability to work with Azure Resource Manager templates. These templates allow administrators to define and deploy complex Azure environments using a simple JSON-based syntax. This can be particularly useful for organizations that need to deploy and manage multiple Azure resources in a consistent and repeatable way.&lt;/p&gt;

&lt;p&gt;Finally, Azure PowerShell has a large and active community of users, which means there is a wealth of documentation, examples, and support available online. This can make it easier for administrators to learn and use Azure PowerShell, and to get help when they run into issues.&lt;/p&gt;

&lt;p&gt;While Azure CLI is also a useful tool for managing Azure resources, it does not offer the same level of integration or advanced features as Azure PowerShell. For these reasons, many Azure administrators prefer to use Azure PowerShell for their day-to-day management tasks.&lt;/p&gt;

&lt;p&gt;In conclusion, Azure PowerShell is a powerful and versatile tool that offers many advantages for managing Azure resources. Its deep integration with the Azure platform, support for automation, and advanced features make it a popular choice among Azure administrators.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>azurecli</category>
      <category>powershell</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>Pulumi - Setting up a Pulumi Stack per Environment 🏗️</title>
      <dc:creator>Florian Lutz</dc:creator>
      <pubDate>Tue, 22 Nov 2022 14:14:48 +0000</pubDate>
      <link>https://forem.com/florianlutz/how-to-pulumi-setting-up-a-pulumi-stack-per-environment-4mbe</link>
      <guid>https://forem.com/florianlutz/how-to-pulumi-setting-up-a-pulumi-stack-per-environment-4mbe</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This part of the series helps you recreate your project structure in Pulumi. For that, you need to create a Pulumi Stack for each of your project environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Stack?
&lt;/h2&gt;

&lt;p&gt;A Stack in Pulumi is an independently configurable instance of a Pulumi program. On the initial creation of a Pulumi Project, the first Stack is created for you automatically. Each Stack has its own Statefile containing the resources of the Stack and can be deployed independently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why should I have a Stack per environment?
&lt;/h2&gt;

&lt;p&gt;By having a Stack per environment, you gain separate Statefiles and separate configuration files. This enables you to use the configuration to handle differences between your environments, for example naming conventions, different resource locations, or Business Continuity settings that save cost in development environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up the Pulumi Project
&lt;/h2&gt;

&lt;p&gt;If you already have a Pulumi Project set up, just skip this chapter and check out how to add new Stacks to your project.&lt;/p&gt;

&lt;p&gt;To set up the Pulumi Project with the Azure C# configuration I am using, log into the Azure CLI, connect to the Pulumi backend, and type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pulumi new azure-csharp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As additional information, you now have to enter a Project Name, a Stack name, an Azure Location and a Passphrase. The Passphrase is used to encrypt and decrypt secrets in the Statefile.&lt;/p&gt;
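&lt;p&gt;To avoid retyping the Passphrase on every command, you can optionally export it as the environment variable Pulumi reads. A sketch; the value is of course your own:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Bash
export PULUMI_CONFIG_PASSPHRASE="&amp;lt;yourPassphrase&amp;gt;"

# PowerShell
$env:PULUMI_CONFIG_PASSPHRASE="&amp;lt;yourPassphrase&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;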

&lt;h2&gt;
  
  
  Adding new Stacks to the Pulumi project
&lt;/h2&gt;

&lt;p&gt;To add new Stacks to your Pulumi project, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pulumi stack init &amp;lt;stackname&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Repeat this command for each of your project environments. This creates a Pulumi.&amp;lt;stackname&amp;gt;.yaml file for each Stack. The .yaml file contains the configuration for a stack, and you can add stack-specific Key-Value Pairs to it. The config file has the following format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;encryptionsalt: &amp;lt;secretValue&amp;gt;
config:
  azure-native:location: &amp;lt;azureLocation&amp;gt;
  &amp;lt;pulumiProjectName&amp;gt;:&amp;lt;Key&amp;gt;: &amp;lt;Value&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
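&lt;p&gt;You normally don't edit this file by hand: selecting a stack and setting a value writes the pair into that stack's .yaml file. A quick sketch, the stack and key names are just examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pulumi stack select dev
pulumi config set location westeurope
# secret values get encrypted with the stack's passphrase
pulumi config set dbPassword "&amp;lt;secret&amp;gt;" --secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;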



&lt;h2&gt;
  
  
  Using Key-Value Pairs from configuration files
&lt;/h2&gt;

&lt;p&gt;On top of your code, define a new Pulumi configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var config = new Pulumi.Config();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This variable can later be used to Get or Require values from the configuration file. For Key-Value Pairs that must not be null, you can use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var valueOfKey = config.Require("&amp;lt;Key&amp;gt;");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are using values that can be absent on another Stack, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var valueOfKey = config.Get("&amp;lt;Key&amp;gt;");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Be sure to handle possible null values in that case.&lt;/p&gt;

&lt;p&gt;With this, you have learned how to create Stacks for your environments and also how to use the configuration files for any differences in between your resources.&lt;/p&gt;

</description>
      <category>pulumi</category>
      <category>azure</category>
      <category>dotnet</category>
      <category>iac</category>
    </item>
    <item>
      <title>Pulumi - Setting up a self-managed Backend 🚀</title>
      <dc:creator>Florian Lutz</dc:creator>
      <pubDate>Tue, 22 Nov 2022 14:13:43 +0000</pubDate>
      <link>https://forem.com/florianlutz/how-to-pulumi-setting-up-a-self-managed-backend-1o0h</link>
      <guid>https://forem.com/florianlutz/how-to-pulumi-setting-up-a-self-managed-backend-1o0h</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This post series helps you get into the world of stateful Infrastructure as Code with Pulumi. We're going to use Azure Native as the resource provider to provision our desired infrastructure. For the Azure Cloud it is also possible to use the Azure Classic package, but Azure Native usually picks up changes in the Azure Cloud more quickly.&lt;/p&gt;

&lt;p&gt;The aim of this post is to prepare yourself a self-managed backend that will bring you up to speed in your project.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to know about Pulumi
&lt;/h2&gt;

&lt;p&gt;Pulumi, as an Infrastructure as Code tool, provides stateful deployments. That means it creates a to-do list for every deployment by comparing what is deployed in the cloud with what is specified in the code. This is handled via a Statefile, in which Pulumi documents deployed and imported cloud resources. The resources are noted down as JSON objects, including some of their configuration but mainly their Resource Id.&lt;/p&gt;

&lt;p&gt;There are multiple options for handling the Statefile. The simplest is to keep the Statefile in the file system of your device. This is set up very fast and is useful for quick tests around cloud infrastructure, but it does not allow team collaboration.&lt;/p&gt;

&lt;p&gt;Alternatives are to use the Pulumi Cloud to handle your Statefile, or to create a self-managed backend for it. The latter is done by connecting Pulumi to a storage resource in your cloud and is what we want to achieve here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;For the self-managed backend, you need to set up the following things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install Pulumi&lt;/li&gt;
&lt;li&gt;Create a Storage Account&lt;/li&gt;
&lt;li&gt;Create an empty directory&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Preparing the Storage Account
&lt;/h2&gt;

&lt;p&gt;To use the Storage Account, &lt;strong&gt;you need to create a Blob Container&lt;/strong&gt; that is later used to store the Statefile(s) of your infrastructure. In my case, the Container is just called &lt;code&gt;pulumi&lt;/code&gt; (container names must be lowercase). An additional step that is optional but recommended is to encrypt the Storage Account with your own key. For that, you have to create a Key Vault and follow the steps &lt;a href="https://learn.microsoft.com/en-us/azure/storage/common/customer-managed-keys-overview" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
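&lt;p&gt;Creating the container can also be done from the CLI. A sketch; the account name is a placeholder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az storage container create \
    --name "pulumi" \
    --account-name "&amp;lt;storageAccountName&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;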

&lt;h2&gt;
  
  
  Login with Azure CLI
&lt;/h2&gt;

&lt;p&gt;The next step is to log in with the Azure CLI. This is not strictly necessary for setting up the self-managed Pulumi backend, but it will still be handy, since connecting Pulumi to the Storage Account requires the Storage Account key.&lt;br&gt;
Use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az login --tenant xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or with MFA required:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az login --tenant xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --use-device-code
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And make sure you are using the correct Subscription:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az account set --subscripion xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One hint for international readers: if you log the CLI into a sovereign cloud, e.g. the Azure China cloud, Pulumi will aim for that cloud as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup PowerShell Variables and Environment Variables
&lt;/h2&gt;

&lt;p&gt;Out of habit, I go with a mix of the Azure CLI and PowerShell for the commands; of course you can go with Bash as well. For Azure PowerShell fans, I've got bad news: Pulumi only supports the Azure CLI. But fear not, as soon as you have your CI/CD pipelines set up, you don't have to touch it anymore.&lt;/p&gt;

&lt;p&gt;For the Pulumi Login, you need the following Variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$storageAccountName=&amp;lt;storageAccountName&amp;gt;
$containerPath=&amp;lt;containerPath&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These are used by Pulumi to identify your storage account. Additionally, Pulumi needs the following environment variables set up to log into your backend:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$env:AZURE_STORAGE_ACCOUNT=$storageAccountName
$env:AZURE_STORAGE_KEY=(az storage account keys list --account-name $storageAccountName | ConvertFrom-Json)[0].value
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I am using the Azure CLI here to avoid copying the Storage Account key into my console. With the variables set up, we can now log in with Pulumi.&lt;/p&gt;
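&lt;p&gt;For Bash users, the same setup might look like this (a sketch; the &lt;code&gt;--query&lt;/code&gt; expression picks the first of the two account keys):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AZURE_STORAGE_ACCOUNT="&amp;lt;storageAccountName&amp;gt;"
export AZURE_STORAGE_KEY=$(az storage account keys list \
    --account-name "$AZURE_STORAGE_ACCOUNT" \
    --query "[0].value" --output tsv)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;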

&lt;h2&gt;
  
  
  Pulumi Login
&lt;/h2&gt;

&lt;p&gt;The Pulumi Login can be done with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pulumi login azblob://$containerPath
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Congratulations, your device is now connected to the Storage Account, and Pulumi is able to create or access any Statefiles located in the specified Container.&lt;/p&gt;

&lt;p&gt;One hint for those of you wondering why you had to log in with the Azure CLI and still provide the Storage Account key: accessing the Storage Account via RBAC is not supported for now.&lt;/p&gt;

</description>
      <category>gratitude</category>
    </item>
  </channel>
</rss>
