<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: CloudQuill</title>
    <description>The latest articles on Forem by CloudQuill (@cloudquill).</description>
    <link>https://forem.com/cloudquill</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3635949%2Fcedfdf1c-6b13-4ef7-b27d-515fc0553af9.png</url>
      <title>Forem: CloudQuill</title>
      <link>https://forem.com/cloudquill</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/cloudquill"/>
    <language>en</language>
    <item>
      <title>Modernizing Legacy Workloads: KubeVirt on AKS with Azure Arc Identity</title>
      <dc:creator>CloudQuill</dc:creator>
      <pubDate>Mon, 01 Dec 2025 09:39:40 +0000</pubDate>
      <link>https://forem.com/cloudquill/modernizing-legacy-workloads-kubevirt-on-aks-with-azure-arc-identity-2ad0</link>
      <guid>https://forem.com/cloudquill/modernizing-legacy-workloads-kubevirt-on-aks-with-azure-arc-identity-2ad0</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; A production-grade blueprint for running Virtual Machines on Azure Kubernetes Service (AKS). This project demonstrates how to unify container and VM operations while solving the "Identity Gap" using Azure Arc—enabling true Azure AD SSH authentication with zero manual key management.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://github.com/ykbytes/aks-kubevirt-arc-unilab" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;View the Complete Project on GitHub&lt;/a&gt;
&lt;/p&gt;




&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;The Problem: Operational Fragmentation&lt;/li&gt;
&lt;li&gt;What is KubeVirt?&lt;/li&gt;
&lt;li&gt;Architecture Overview&lt;/li&gt;
&lt;li&gt;The Identity Challenge: No IMDS&lt;/li&gt;
&lt;li&gt;Multi-Tenancy &amp;amp; Security&lt;/li&gt;
&lt;li&gt;Implementation Deep Dive&lt;/li&gt;
&lt;li&gt;Deployment Guide&lt;/li&gt;
&lt;li&gt;Technologies &amp;amp; Skills Demonstrated&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  The Problem: Operational Fragmentation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Reality of Enterprise IT
&lt;/h3&gt;

&lt;p&gt;Here's a truth nobody talks about at cloud conferences: most enterprises aren't running everything in containers. They're not even close.&lt;/p&gt;

&lt;p&gt;While we celebrate microservices and Kubernetes, the reality on the ground looks different. Organizations still depend heavily on Virtual Machines for their most critical workloads:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Legacy Databases&lt;/strong&gt; like Oracle and SQL Server that would require months of refactoring to containerize properly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proprietary Software&lt;/strong&gt; with licensing tied to specific OS configurations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance-bound Workloads&lt;/strong&gt; that regulators insist must run in isolated VMs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lift-and-Shift Migrations&lt;/strong&gt; that moved to the cloud but never got modernized&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn't a failure—it's pragmatism. These VMs run the systems that actually make money.&lt;/p&gt;

&lt;h3&gt;
  
  
  The "Two-Stack Problem"
&lt;/h3&gt;

&lt;p&gt;But here's where things get messy. Organizations end up managing two completely separate infrastructure stacks:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Container Stack&lt;/th&gt;
&lt;th&gt;VM Stack&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Orchestration&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes&lt;/td&gt;
&lt;td&gt;vSphere, Hyper-V, Azure VMs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CI/CD Pipeline&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;ArgoCD, Flux, Jenkins&lt;/td&gt;
&lt;td&gt;Separate scripts, manual deployment&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Monitoring&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Prometheus, Grafana&lt;/td&gt;
&lt;td&gt;vRealize, SCOM, Azure Monitor&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Networking&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;CNI (Calico, Cilium)&lt;/td&gt;
&lt;td&gt;NSX, Azure VNet&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Access Control&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes RBAC&lt;/td&gt;
&lt;td&gt;AD Groups, SSH Keys&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;The Hidden Costs of Two-Stack Operations&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Double the tooling costs&lt;/strong&gt; in licenses, training, and maintenance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context switching&lt;/strong&gt; that tanks developer productivity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security gaps&lt;/strong&gt; where the two stacks meet&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Siloed teams&lt;/strong&gt; who don't share knowledge or practices&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inconsistent policies&lt;/strong&gt; that create compliance headaches&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  The Solution: Unified Operations with KubeVirt
&lt;/h3&gt;

&lt;p&gt;What if you could run your VMs on the same platform as your containers?&lt;/p&gt;

&lt;p&gt;This is exactly what KubeVirt enables. By treating VMs as Kubernetes objects, you collapse two stacks into one:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;One Pipeline:&lt;/strong&gt; Deploy VMs with the same GitOps workflows as your microservices&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One Monitoring Stack:&lt;/strong&gt; Prometheus and Grafana for everything&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One Access Model:&lt;/strong&gt; Kubernetes RBAC governs who can create, start, and stop VMs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One Team:&lt;/strong&gt; Platform engineers manage the whole thing&lt;/li&gt;
&lt;/ul&gt;
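
&lt;p&gt;As a sketch of that unified access model, a namespaced Role along these lines puts VM operations under ordinary Kubernetes RBAC (the role name and group binding are illustrative, not taken from the project):&lt;/p&gt;

```yaml
# Illustrative only: grants CRUD plus start/stop rights on KubeVirt VMs
# within one namespace. The role name and namespace are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vm-operator
  namespace: student-labs
rules:
  - apiGroups: ["kubevirt.io"]
    resources: ["virtualmachines", "virtualmachineinstances"]
    verbs: ["get", "list", "watch", "create", "delete"]
  # Start/stop are exposed as subresources, which virtctl calls under the hood
  - apiGroups: ["subresources.kubevirt.io"]
    resources: ["virtualmachines/start", "virtualmachines/stop"]
    verbs: ["update"]
```

&lt;p&gt;Bound to a tenant's Azure AD group via a RoleBinding, this is what "one access model" looks like in practice.&lt;/p&gt;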


&lt;h2&gt;
  
  
  What is KubeVirt? VMs as Kubernetes Objects
&lt;/h2&gt;
&lt;h3&gt;
  
  
  The Core Idea
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;KubeVirt&lt;/strong&gt; is a Kubernetes add-on that lets you run traditional Virtual Machines alongside containers. It extends the Kubernetes API with VM-specific resources like &lt;code&gt;VirtualMachine&lt;/code&gt;, &lt;code&gt;VirtualMachineInstance&lt;/code&gt;, and &lt;code&gt;DataVolume&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important distinction:&lt;/strong&gt; KubeVirt doesn't emulate or containerize your VM. It runs a &lt;em&gt;real&lt;/em&gt; KVM/QEMU hypervisor inside a Kubernetes Pod. The guest OS is a full, unmodified Linux or Windows installation.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  How It Works Under the Hood
&lt;/h3&gt;



&lt;p&gt;&lt;strong&gt;Component Breakdown&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;virt-api&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Extends Kubernetes API to handle &lt;code&gt;VirtualMachine&lt;/code&gt; resources&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;virt-controller&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Manages VM lifecycle (create, start, stop, migrate)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;virt-handler&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;DaemonSet on each node; interfaces with libvirt/QEMU&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;virt-launcher&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Pod that hosts the actual VM; one per running VM&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CDI (Containerized Data Importer)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Handles VM disk image imports from HTTP, S3, or registries&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  VM Lifecycle in Kubernetes
&lt;/h3&gt;

&lt;p&gt;A KubeVirt VM follows a familiar Kubernetes pattern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubevirt.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VirtualMachine&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-ubuntu-vm&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;student-labs&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;running&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;domain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;cores&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;guest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;4Gi&lt;/span&gt;
        &lt;span class="na"&gt;devices&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;disks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rootdisk&lt;/span&gt;
              &lt;span class="na"&gt;disk&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;bus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;virtio&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rootdisk&lt;/span&gt;
          &lt;span class="na"&gt;dataVolume&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-ubuntu-vm-rootdisk&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;running: true&lt;/code&gt; field is the desired state—the controller makes sure reality matches. DataVolumes handle disk provisioning, and the VM gets scheduled just like any other Pod, respecting taints, tolerations, and affinity rules.&lt;/p&gt;
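
&lt;p&gt;The DataVolume referenced in that manifest is itself a small object that tells CDI where to fetch the disk image. A minimal sketch (the source URL and disk size here are assumptions for illustration, not taken from the project):&lt;/p&gt;

```yaml
# Hypothetical DataVolume backing the VM's root disk. CDI imports the
# image from the HTTP source into a PVC of the requested size.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: my-ubuntu-vm-rootdisk
  namespace: student-labs
spec:
  source:
    http:
      url: https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
```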




&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What We're Building
&lt;/h3&gt;

&lt;p&gt;This project implements a &lt;strong&gt;multi-tenant university lab platform&lt;/strong&gt; with three user types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Faculty&lt;/strong&gt; from the Computer Science department running research VMs with generous resources&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Students&lt;/strong&gt; running lab VMs with strict quotas to prevent abuse&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IT Administrators&lt;/strong&gt; with full platform control&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Node Pools Configuration
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pool&lt;/th&gt;
&lt;th&gt;VM Size&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Special Config&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;System&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Standard_D2s_v3&lt;/td&gt;
&lt;td&gt;Run operators, CoreDNS&lt;/td&gt;
&lt;td&gt;Tainted for critical add-ons only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;KubeVirt&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Standard_D4s_v3&lt;/td&gt;
&lt;td&gt;Run guest VMs&lt;/td&gt;
&lt;td&gt;Taint: &lt;code&gt;kubevirt.io/dedicated&lt;/code&gt;, Label: &lt;code&gt;workload=kubevirt&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;A word of caution:&lt;/strong&gt; The KubeVirt node pool &lt;em&gt;must&lt;/em&gt; use VM sizes that support nested virtualization: the Dv3, Dv4, Dv5, Ev3, Ev4, or Ev5 series. B-series and older D-series sizes won't work—I learned this the hard way.&lt;/p&gt;
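
&lt;p&gt;To actually land VMs on that tainted pool, each VirtualMachine's pod template needs a matching toleration and node selector. A rough fragment, with the taint key and label taken from the table above (the &lt;code&gt;NoSchedule&lt;/code&gt; effect is an assumption):&lt;/p&gt;

```yaml
# Fragment of a VirtualMachine spec.template.spec: pins the virt-launcher
# pod to the dedicated KubeVirt pool. The taint effect is assumed.
nodeSelector:
  workload: kubevirt
tolerations:
  - key: kubevirt.io/dedicated
    operator: Exists
    effect: NoSchedule
```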

&lt;h3&gt;
  
  
  Storage Classes
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Class&lt;/th&gt;
&lt;th&gt;SKU&lt;/th&gt;
&lt;th&gt;Reclaim Policy&lt;/th&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kv-premium-retain&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Premium_LRS&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Retain&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Production VM disks (data survives VM deletion)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kv-standard&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;StandardSSD_LRS&lt;/td&gt;
&lt;td&gt;Delete&lt;/td&gt;
&lt;td&gt;Ephemeral and test VMs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
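
&lt;p&gt;As a sketch, the retain-on-delete class might be defined like this. The provisioner is the Azure Disk CSI driver that ships with AKS; the exact parameter values are assumptions, not copied from the project:&lt;/p&gt;

```yaml
# Hypothetical definition of kv-premium-retain. Retain keeps the
# underlying Azure disk when the PVC (and its VM) is deleted.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kv-premium-retain
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```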




&lt;h2&gt;
  
  
  The Identity Challenge: Solving the IMDS Gap
&lt;/h2&gt;

&lt;p&gt;This is where things get interesting—and where I spent most of my debugging time.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problem
&lt;/h3&gt;

&lt;p&gt;Every Azure VM can reach the &lt;strong&gt;Instance Metadata Service (IMDS)&lt;/strong&gt; at &lt;code&gt;169.254.169.254&lt;/code&gt;. This service hands out managed identity tokens, instance metadata, and scheduled event notifications. Azure extensions like the AD SSH Login extension depend on it.&lt;/p&gt;

&lt;p&gt;But KubeVirt VMs are &lt;em&gt;nested&lt;/em&gt; inside an AKS node. When your guest VM tries to reach that link-local address, the request gets blocked by the pod's NAT layer.&lt;/p&gt;

&lt;p&gt;The result? Your nested VM has no Azure identity. Standard Azure extensions fail silently.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Solution: Azure Arc
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Azure Arc&lt;/strong&gt; lets you project non-Azure machines into Azure Resource Manager. That includes on-premises servers, VMs in other clouds, and—crucially for us—nested VMs that can't reach IMDS.&lt;/p&gt;

&lt;p&gt;With Arc, your KubeVirt VM gets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;An Azure Resource Identity&lt;/strong&gt; (a real resource ID in ARM)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Managed Identity Equivalent&lt;/strong&gt; for authenticating to Azure services&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extension Support&lt;/strong&gt; including the AADSSHLoginForLinux extension we need&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Registration Flow
&lt;/h3&gt;

&lt;p&gt;Here's how it comes together:&lt;/p&gt;

&lt;p&gt;The magic happens during cloud-init. The VM waits for network stability (KubeVirt NAT needs a moment), downloads the Arc agent, and registers itself using a service principal we created in Terraform. Once Arc confirms the connection, Terraform installs the SSH extension.&lt;/p&gt;

&lt;h3&gt;
  
  
  RBAC for SSH Access
&lt;/h3&gt;

&lt;p&gt;Access control uses standard Azure roles:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;th&gt;Permissions&lt;/th&gt;
&lt;th&gt;Assigned To&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Virtual Machine Administrator Login&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;SSH + sudo&lt;/td&gt;
&lt;td&gt;Faculty, IT Admins&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Virtual Machine User Login&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;SSH only (no sudo)&lt;/td&gt;
&lt;td&gt;Students&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Azure Connected Machine Onboarding&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Register new Arc machines&lt;/td&gt;
&lt;td&gt;Arc Service Principal&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Multi-Tenancy &amp;amp; Security Model
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Namespace-Based Isolation
&lt;/h3&gt;

&lt;p&gt;We use Kubernetes Namespaces as the primary isolation boundary. Each tenant gets their own namespace with dedicated quotas, network policies, and RBAC bindings.&lt;/p&gt;
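
&lt;p&gt;A default-deny ingress policy per tenant namespace is the usual starting point for that isolation. A minimal sketch:&lt;/p&gt;

```yaml
# Illustrative default-deny: blocks all ingress to pods (and therefore
# to the VMs running inside them) in student-labs unless another
# NetworkPolicy explicitly allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: student-labs
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```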

&lt;h3&gt;
  
  
  Security Controls
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Control&lt;/th&gt;
&lt;th&gt;Implementation&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ResourceQuota&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Per-namespace CPU/Memory/PVC limits&lt;/td&gt;
&lt;td&gt;Prevent resource exhaustion&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;LimitRange&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Per-VM resource caps&lt;/td&gt;
&lt;td&gt;Stop one VM from eating all the quota&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;NetworkPolicy&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Ingress/Egress rules&lt;/td&gt;
&lt;td&gt;Network isolation between tenants&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;RBAC (K8s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;RoleBindings to Azure AD groups&lt;/td&gt;
&lt;td&gt;Control who can manage VMs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;RBAC (Azure)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;VM Login roles&lt;/td&gt;
&lt;td&gt;Control who can SSH into VMs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Node Taints&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;kubevirt.io/dedicated&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Keep VMs on dedicated nodes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Example: Student Namespace Security Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ResourceQuota&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;student-lab-quota&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;student-labs&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hard&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests.cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8"&lt;/span&gt;
    &lt;span class="na"&gt;requests.memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;16Gi&lt;/span&gt;
    &lt;span class="na"&gt;limits.cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;16"&lt;/span&gt;
    &lt;span class="na"&gt;limits.memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;32Gi&lt;/span&gt;
    &lt;span class="na"&gt;persistentvolumeclaims&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5"&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LimitRange&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;student-vm-limits&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;student-labs&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Container&lt;/span&gt;
      &lt;span class="na"&gt;max&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2"&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;4Gi&lt;/span&gt;
      &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2Gi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  Implementation Deep Dive
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Terraform Structure
&lt;/h3&gt;

&lt;p&gt;The infrastructure breaks down into logical files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform/
├── main.tf              # AKS cluster and node pools
├── providers.tf         # Azure, Kubernetes, kubectl providers
├── variables.tf         # Input variables with validation
├── outputs.tf           # Connection strings and useful outputs
├── identity.tf          # Azure AD groups, RBAC assignments
├── arc.tf               # Azure Arc SP, roles, extension installer
├── platform.tf          # KubeVirt and CDI operator deployment
├── tenancy.tf           # Namespace, quota, network policy per tenant
├── storage.tf           # StorageClass definitions
├── networking.tf        # Egress network policies for operators
├── images.tf            # VM image storage (Azure Blob)
├── virtualmachines.tf   # Demo VM definition
└── templates/
    ├── cloud-init-arc.tftpl   # Cloud-init for Arc-enabled VMs
    └── cloud-init-lab.tftpl   # Cloud-init for basic VMs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Critical Piece: Cloud-Init
&lt;/h3&gt;

&lt;p&gt;The cloud-init script handles Arc registration and needs to be robust. It must deal with:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Network delays&lt;/strong&gt; while KubeVirt NAT stabilizes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DNS resolution&lt;/strong&gt; for Azure endpoints&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transient API failures&lt;/strong&gt; during registration&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Key Cloud-Init Logic&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wait_for_network&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;seq &lt;/span&gt;1 60&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
        if &lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;--connect-timeout&lt;/span&gt; 5 https://management.azure.com &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null 2&amp;gt;&amp;amp;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
            &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"[Arc] Network ready"&lt;/span&gt;
            &lt;span class="k"&gt;return &lt;/span&gt;0
        &lt;span class="k"&gt;fi
        &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"[Arc] Waiting for network... (&lt;/span&gt;&lt;span class="nv"&gt;$i&lt;/span&gt;&lt;span class="s2"&gt;/60)"&lt;/span&gt;
        &lt;span class="nb"&gt;sleep &lt;/span&gt;5
    &lt;span class="k"&gt;done
    return &lt;/span&gt;1
&lt;span class="o"&gt;}&lt;/span&gt;

register_with_arc&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;max_retries&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;retry_delay&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;30

    &lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;seq &lt;/span&gt;1 &lt;span class="nv"&gt;$max_retries&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
        if &lt;/span&gt;azcmagent connect &lt;span class="se"&gt;\&lt;/span&gt;
            &lt;span class="nt"&gt;--service-principal-id&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SP_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
            &lt;span class="nt"&gt;--service-principal-secret&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SP_SECRET&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
            &lt;span class="nt"&gt;--tenant-id&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TENANT_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
            &lt;span class="nt"&gt;--subscription-id&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SUB_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
            &lt;span class="nt"&gt;--resource-group&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$RG_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
            &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LOCATION&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
            &lt;span class="nt"&gt;--resource-name&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;hostname&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
            &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"[Arc] Registration successful"&lt;/span&gt;
            &lt;span class="k"&gt;return &lt;/span&gt;0
        &lt;span class="k"&gt;fi
        &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"[Arc] Retrying in &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;retry_delay&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;s... (&lt;/span&gt;&lt;span class="nv"&gt;$i&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$max_retries&lt;/span&gt;&lt;span class="s2"&gt;)"&lt;/span&gt;
        &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="nv"&gt;$retry_delay&lt;/span&gt;
        &lt;span class="nv"&gt;retry_delay&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt;retry_delay &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;
    &lt;span class="k"&gt;done
    return &lt;/span&gt;1
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
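
&lt;p&gt;The retry-with-exponential-backoff pattern used for Arc registration generalizes nicely. Here is a stripped-down version you can run and test locally; the function and command names are illustrative, not from the project:&lt;/p&gt;

```shell
# Generic retry helper: runs a command up to $1 times, doubling the
# delay between attempts. Names here are illustrative.
retry_with_backoff() {
    local max=$1; shift
    local delay=1
    for i in $(seq 1 "$max"); do
        if "$@"; then
            return 0
        fi
        echo "attempt $i/$max failed; retrying in ${delay}s"
        sleep "$delay"
        delay=$((delay * 2))
    done
    return 1
}

# Simulated flaky command: succeeds on the third call.
attempts=0
flaky() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

if retry_with_backoff 5 flaky; then
    echo "succeeded after $attempts attempts"
fi
```

&lt;p&gt;Running this prints two retry messages and then &lt;code&gt;succeeded after 3 attempts&lt;/code&gt;. The doubling delay is what keeps the real cloud-init script from hammering the Azure API during transient failures.&lt;/p&gt;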

&lt;h3&gt;
  
  
  Terraform Patterns Worth Noting
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pattern&lt;/th&gt;
&lt;th&gt;Implementation&lt;/th&gt;
&lt;th&gt;Why It Matters&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Trigger-based Recreation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;triggers&lt;/code&gt; in &lt;code&gt;null_resource&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Recreate VM when cloud-init changes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Dependency Management&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Explicit &lt;code&gt;depends_on&lt;/code&gt; chains&lt;/td&gt;
&lt;td&gt;Correct deployment order&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Sensitive Values&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;sensitive = true&lt;/code&gt; on SP secrets&lt;/td&gt;
&lt;td&gt;Keep secrets out of logs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Deployment Guide
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What You'll Need
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Version&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Azure CLI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;2.40+&lt;/td&gt;
&lt;td&gt;Azure authentication and management&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Terraform&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1.3+&lt;/td&gt;
&lt;td&gt;Infrastructure provisioning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;kubectl&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1.24+&lt;/td&gt;
&lt;td&gt;Kubernetes interaction&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Azure Subscription&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;Owner role required for RBAC&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Step-by-Step
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone and configure&lt;/span&gt;
git clone https://github.com/ykbytes/aks-kubevirt-arc-unilab.git
&lt;span class="nb"&gt;cd &lt;/span&gt;aks-kubevirt-arc-unilab
&lt;span class="nb"&gt;cp &lt;/span&gt;terraform.tfvars.example terraform.tfvars
&lt;span class="c"&gt;# Edit terraform.tfvars with your settings&lt;/span&gt;

&lt;span class="c"&gt;# Deploy (takes 15-20 minutes)&lt;/span&gt;
az login
terraform init
terraform plan
terraform apply

&lt;span class="c"&gt;# Get credentials and verify&lt;/span&gt;
az aks get-credentials &lt;span class="nt"&gt;--resource-group&lt;/span&gt; rg-uni-kubevirt &lt;span class="nt"&gt;--name&lt;/span&gt; aks-uni-platform
kubectl get kubevirt &lt;span class="nt"&gt;-n&lt;/span&gt; kubevirt      &lt;span class="c"&gt;# Should show: Deployed&lt;/span&gt;
kubectl get vm &lt;span class="nt"&gt;-n&lt;/span&gt; student-labs         &lt;span class="c"&gt;# Should show: lab-vm Running&lt;/span&gt;

&lt;span class="c"&gt;# Connect with Azure AD&lt;/span&gt;
az ssh vm &lt;span class="nt"&gt;--name&lt;/span&gt; lab-vm &lt;span class="nt"&gt;--resource-group&lt;/span&gt; rg-uni-kubevirt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What to Expect During Deployment
&lt;/h3&gt;

&lt;p&gt;Arc registration takes about 5-7 minutes. You'll see output like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;null_resource.arc_aad_ssh_extension[0] (local-exec): [Arc] Waiting for lab-vm to connect...
null_resource.arc_aad_ssh_extension[0] (local-exec): [Arc] Status:  (attempt 1/90)
...
null_resource.arc_aad_ssh_extension[0] (local-exec): [Arc] Status:  (attempt 27/90)
null_resource.arc_aad_ssh_extension[0] (local-exec): [Arc] Machine connected
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The empty status values in the first few minutes are normal—the VM is still booting and running cloud-init.&lt;/p&gt;
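&lt;p&gt;Under the hood, the project bakes Arc onboarding into the VM's first boot. Conceptually, the cloud-init stage ends with something like the following sketch (the &lt;code&gt;azcmagent connect&lt;/code&gt; flags are the agent's standard ones, the environment variable names are placeholders, and the exact script lives in the repo):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Install the Azure Connected Machine agent
curl -sSL https://aka.ms/azcmagent -o install_azcmagent.sh
bash install_azcmagent.sh

# Register this VM as an Arc-enabled server
# (service principal credentials are injected by Terraform via cloud-init)
azcmagent connect \
    --service-principal-id "$ARC_SP_ID" \
    --service-principal-secret "$ARC_SP_SECRET" \
    --tenant-id "$TENANT_ID" \
    --subscription-id "$SUBSCRIPTION_ID" \
    --resource-group "rg-uni-kubevirt" \
    --location "westeurope"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once &lt;code&gt;azcmagent connect&lt;/code&gt; succeeds, the machine shows up as a &lt;code&gt;Microsoft.HybridCompute/machines&lt;/code&gt; resource, which is what the Terraform wait loop above is polling for.&lt;/p&gt;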

&lt;h3&gt;
  
  
  Verification Commands
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check Arc registration&lt;/span&gt;
az connectedmachine show &lt;span class="nt"&gt;--name&lt;/span&gt; lab-vm &lt;span class="nt"&gt;--resource-group&lt;/span&gt; rg-uni-kubevirt &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"{Name:name, Status:status}"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; table

&lt;span class="c"&gt;# Check extension status&lt;/span&gt;
az connectedmachine extension list &lt;span class="nt"&gt;--machine-name&lt;/span&gt; lab-vm &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--resource-group&lt;/span&gt; rg-uni-kubevirt &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"[].{Name:name, Status:provisioningState}"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; table

&lt;span class="c"&gt;# Alternative: VM console access&lt;/span&gt;
kubectl virt console lab-vm &lt;span class="nt"&gt;-n&lt;/span&gt; student-labs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What Success Looks Like
&lt;/h3&gt;

&lt;p&gt;A successful connection looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;az ssh vm &lt;span class="nt"&gt;--name&lt;/span&gt; lab-vm &lt;span class="nt"&gt;--resource-group&lt;/span&gt; rg-uni-kubevirt

Welcome to Ubuntu 22.04.5 LTS

═══════════════════════════════════════════════════════════════
 KubeVirt Lab VM - Azure Arc Enabled
═══════════════════════════════════════════════════════════════

 Azure AD Authentication:
    az ssh vm &lt;span class="nt"&gt;--name&lt;/span&gt; lab-vm &lt;span class="nt"&gt;--resource-group&lt;/span&gt; rg-uni-kubevirt

 Required RBAC Roles:
    • Virtual Machine Administrator Login - &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;access
    • Virtual Machine User Login - &lt;span class="k"&gt;for &lt;/span&gt;standard user access

═══════════════════════════════════════════════════════════════

user@example.com@lab-vm:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;whoami
&lt;/span&gt;user@example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Notice that &lt;code&gt;whoami&lt;/code&gt; returns your Azure AD email, not a local username. No SSH keys were exchanged—Azure AD generated an ephemeral certificate automatically.&lt;/p&gt;

&lt;p&gt;This is what makes the Arc approach worthwhile: a nested VM with no direct Azure identity becomes accessible via Azure AD credentials, just like a native Azure VM.&lt;/p&gt;
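&lt;p&gt;The two RBAC roles shown in the login banner map directly onto &lt;code&gt;az role assignment create&lt;/code&gt;. A minimal sketch, scoped to the resource group here (for least privilege, scope to the individual Arc machine resource instead):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Standard user access (no sudo)
az role assignment create \
    --assignee "user@example.com" \
    --role "Virtual Machine User Login" \
    --scope "/subscriptions/&lt;subscription-id&gt;/resourceGroups/rg-uni-kubevirt"

# Administrator access (sudo)
az role assignment create \
    --assignee "admin@example.com" \
    --role "Virtual Machine Administrator Login" \
    --scope "/subscriptions/&lt;subscription-id&gt;/resourceGroups/rg-uni-kubevirt"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;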




&lt;h2&gt;
  
  
  Technologies &amp;amp; Skills Demonstrated
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Cloud &amp;amp; Infrastructure
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;th&gt;Usage&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Azure Kubernetes Service (AKS)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Managed Kubernetes with workload identity&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Azure Arc&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Hybrid identity for nested VMs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Azure Blob Storage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;VM image repository&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Azure Managed Disks&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Persistent storage for VM disks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Azure RBAC&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fine-grained SSH access control&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Kubernetes &amp;amp; Virtualization
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;th&gt;Usage&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;KubeVirt&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;VM orchestration on Kubernetes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CDI (Containerized Data Importer)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;VM disk image management&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Kubernetes RBAC&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Namespace-level access control&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;NetworkPolicies&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Tenant network isolation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ResourceQuotas &amp;amp; LimitRanges&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Multi-tenant resource governance&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
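&lt;p&gt;To make the KubeVirt and CDI rows concrete, here is a hedged sketch of how the two fit together (names and sizes are illustrative, not the project's exact spec): a CDI &lt;code&gt;DataVolume&lt;/code&gt; imports a cloud image, and a &lt;code&gt;VirtualMachine&lt;/code&gt; boots from the resulting disk.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# CDI imports the image over HTTP into a PVC
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: lab-vm-disk
  namespace: student-labs
spec:
  source:
    http:
      url: "https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img"
  pvc:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 20Gi
---
# KubeVirt boots a VM from that disk
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: lab-vm
  namespace: student-labs
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          dataVolume:
            name: lab-vm-disk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;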

&lt;h3&gt;
  
  
  DevOps &amp;amp; Automation
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;th&gt;Usage&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Terraform&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Infrastructure as Code&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cloud-Init&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;VM bootstrap automation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Azure CLI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Scripted Azure operations&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  What This Project Demonstrates
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cloud Architecture:&lt;/strong&gt; A scalable, multi-tenant platform on Azure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes Depth:&lt;/strong&gt; KubeVirt, CDI, RBAC, NetworkPolicies working together&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Engineering:&lt;/strong&gt; Zero-trust identity with Azure Arc&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code:&lt;/strong&gt; Production-quality Terraform with proper patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Problem Solving:&lt;/strong&gt; A creative solution to the IMDS identity gap&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Potential Extensions
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Extension&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GitOps Integration&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Deploy VMs via ArgoCD or Flux&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GPU Passthrough&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Enable NVIDIA GPU for AI/ML workloads&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Live Migration&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Move VMs between nodes without downtime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Backup/DR&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Integrate Velero for VM backup&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cost Management&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Azure Cost Management tags and budgets&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
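&lt;p&gt;Of these, live migration is the most mechanical to trigger: once the cluster meets the prerequisites (RWX-capable shared storage for the VM disks; on older KubeVirt releases the &lt;code&gt;LiveMigration&lt;/code&gt; feature gate must also be enabled), a migration is just another resource:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# Ask KubeVirt to move the running VMI to another node
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: lab-vm-migration
  namespace: student-labs
spec:
  vmiName: lab-vm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Apply it with &lt;code&gt;kubectl apply&lt;/code&gt; and watch the VMI's &lt;code&gt;status.migrationState&lt;/code&gt; for progress.&lt;/p&gt;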




&lt;h2&gt;
  
  
  About the Author
&lt;/h2&gt;

&lt;p&gt;I'm a Cloud Platform Engineer focused on bridging legacy infrastructure and modern cloud-native operations. This project reflects my approach to real-world problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Designing complex cloud architectures that actually work&lt;/li&gt;
&lt;li&gt;Solving identity and security challenges without overengineering&lt;/li&gt;
&lt;li&gt;Writing Terraform that other people can maintain&lt;/li&gt;
&lt;li&gt;Automating the tedious parts so humans can focus on interesting problems&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>security</category>
      <category>azure</category>
    </item>
  </channel>
</rss>
