<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Rahimah Sulayman</title>
    <description>The latest articles on Forem by Rahimah Sulayman (@rahimah_dev).</description>
    <link>https://forem.com/rahimah_dev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3673037%2Fd084c1b1-6c80-43de-8b42-c48bd0cad2f8.png</url>
      <title>Forem: Rahimah Sulayman</title>
      <link>https://forem.com/rahimah_dev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/rahimah_dev"/>
    <language>en</language>
    <item>
      <title>My Kubernetes Mastery Journey: Installing Local Kubernetes Clusters</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Fri, 17 Apr 2026 21:37:51 +0000</pubDate>
      <link>https://forem.com/rahimah_dev/my-kubernetes-mastery-journey-installing-local-kubernetes-clusters-176</link>
      <guid>https://forem.com/rahimah_dev/my-kubernetes-mastery-journey-installing-local-kubernetes-clusters-176</guid>
      <description>&lt;p&gt;Now that we have familiarized ourselves with the default &lt;strong&gt;minikube start&lt;/strong&gt; command, let's dive deeper into Minikube to understand some of its more advanced features.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;minikube start&lt;/strong&gt; command, by default, selects a &lt;code&gt;driver&lt;/code&gt;: isolation software such as a hypervisor or a container runtime, &lt;em&gt;if one (e.g. VirtualBox) or more are installed on the host workstation&lt;/em&gt;. In addition, it downloads the latest Kubernetes version components. With the selected driver software it provisions either a single &lt;strong&gt;VM&lt;/strong&gt; named &lt;code&gt;minikube&lt;/code&gt; (with a hardware profile of CPUs=2, Memory=6GB, Disk=20GB) or a container (Docker) to host the default single-node, all-in-one Kubernetes cluster. Once the node is provisioned, it bootstraps the Kubernetes control plane (with the default &lt;code&gt;kubeadm&lt;/code&gt; tool) and installs the latest version of the default container runtime, Docker, which serves as the running environment for the containerized applications we will deploy to the Kubernetes cluster. &lt;br&gt;
The &lt;strong&gt;minikube start&lt;/strong&gt; command generates a default minikube cluster with the specifications described above, and it stores these specs so that we can restart the default cluster whenever desired. The object that stores the specifications of our cluster is called a &lt;code&gt;profile&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;As Minikube matures, so do its features and capabilities. With the introduction of profiles, Minikube allows users to create custom reusable clusters that can all be managed from a single command line client.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;minikube profile&lt;/strong&gt; command allows us to view the status of all our clusters in a table-formatted output. &lt;br&gt;
Now we'll start Minikube, specifying the driver, which is Docker in this case.&lt;br&gt;
Wait for it to finish! You'll see a message like "Done! kubectl is now configured." Once you see that, you can try another Kubernetes command.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foy8cklosb5qyjul1shgh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foy8cklosb5qyjul1shgh.png" alt="mini3start" width="800" height="332"&gt;&lt;/a&gt;&lt;br&gt;
Now we'll run &lt;code&gt;minikube status&lt;/code&gt;. Once Minikube is "Running," you have a tiny one-node Kubernetes cluster alive on your machine.&lt;/p&gt;
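&lt;p&gt;As a quick sketch, the two commands used above look like this (the &lt;code&gt;--driver&lt;/code&gt; value is an assumption; substitute whichever driver is installed on your machine):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Start a cluster using the Docker driver explicitly
minikube start --driver=docker

# Check the state of the cluster components
minikube status
&lt;/code&gt;&lt;/pre&gt;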

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6m2x209ctp8i2s9rby8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6m2x209ctp8i2s9rby8b.png" alt="status" width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We'll check by running the &lt;strong&gt;kubectl get nodes&lt;/strong&gt; command; if you see a node named &lt;code&gt;minikube&lt;/code&gt; with a status of &lt;strong&gt;Ready&lt;/strong&gt;, you officially have a Kubernetes cluster running on your laptop!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqv587x2jss3skofbz1bb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqv587x2jss3skofbz1bb.png" alt="getnodes" width="749" height="117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;minikube stop&lt;/strong&gt;: With this command, we can stop Minikube. It stops all applications running in the cluster and safely shuts down the cluster and its VM, preserving our work until we decide to start the Minikube cluster once again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpvygy2lba8rqjkzg2t2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpvygy2lba8rqjkzg2t2.png" alt="stop" width="800" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;minikube status&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7f62nfjfceq7ac40u79.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7f62nfjfceq7ac40u79.png" alt="statusagain" width="796" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Assuming we have created only the default &lt;code&gt;minikube&lt;/code&gt; cluster, we could list the properties that define the default profile with:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhqleifz44l5srfyqts2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhqleifz44l5srfyqts2.png" alt="profilelist" width="800" height="149"&gt;&lt;/a&gt;&lt;br&gt;
This table presents the columns associated with the default properties, such as the profile name: &lt;strong&gt;minikube&lt;/strong&gt;, the isolation driver: &lt;strong&gt;VirtualBox&lt;/strong&gt;, the container runtime: &lt;strong&gt;Docker&lt;/strong&gt;, the Kubernetes version: &lt;strong&gt;v1.28.3&lt;/strong&gt;, and the status of the cluster: &lt;strong&gt;running or stopped&lt;/strong&gt;. The table also displays the &lt;strong&gt;number of nodes&lt;/strong&gt;: 1 by default, the &lt;strong&gt;private IP address&lt;/strong&gt; of the minikube cluster's control plane VirtualBox VM, and the &lt;strong&gt;secure port&lt;/strong&gt; that exposes the API Server to cluster control plane components, agents, and clients: 8443.&lt;/p&gt;
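&lt;p&gt;For reference, a &lt;strong&gt;minikube profile list&lt;/strong&gt; table looks roughly like this (the values here are illustrative, not taken from the screenshot above):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;|----------|------------|---------|----------------|------|---------|---------|-------|
| Profile  | VM Driver  | Runtime |       IP       | Port | Version | Status  | Nodes |
|----------|------------|---------|----------------|------|---------|---------|-------|
| minikube | virtualbox | docker  | 192.168.59.100 | 8443 | v1.28.3 | Running |     1 |
|----------|------------|---------|----------------|------|---------|---------|-------|
&lt;/code&gt;&lt;/pre&gt;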

&lt;p&gt;To create a brand-new cluster with 2 nodes named &lt;code&gt;lab-cluster&lt;/code&gt;, we use the &lt;code&gt;--nodes&lt;/code&gt; flag together with the &lt;code&gt;-p&lt;/code&gt; (profile) flag. The original single-node cluster stays stopped; the new profile starts up with the multi-node configuration. We'll run the multi-node command.&lt;/p&gt;
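&lt;p&gt;A minimal sketch of that command (the driver flag is an assumption, based on the Docker driver used earlier):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Create and start a new 2-node cluster under its own profile
minikube start -p lab-cluster --nodes 2 --driver=docker
&lt;/code&gt;&lt;/pre&gt;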

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1tx5q93zvz6088ltoqi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1tx5q93zvz6088ltoqi.png" alt="2nodes" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the cluster starts, we'll use the next three commands to see the difference: &lt;strong&gt;kubectl get nodes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgngvrsdtgf0x2xyqavs2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgngvrsdtgf0x2xyqavs2.png" alt="getnodes" width="800" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;minikube profile list&lt;/strong&gt;: Using this command, you can check the Minikube Profiles and see both the original cluster and the new 2-node cluster side-by-side.&lt;br&gt;
The minikube profile list command shows the two separate &lt;code&gt;slots&lt;/code&gt; you have created on your machine:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;lab-cluster&lt;/code&gt;: This is the active cluster. It is running on the docker driver with 2 nodes and currently has a status of &lt;strong&gt;OK&lt;/strong&gt;. The asterisk (*) in the ACTIVE_PROFILE column indicates that any &lt;code&gt;kubectl&lt;/code&gt; commands run right now will target this cluster.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;minikube&lt;/code&gt;: This is the original &lt;code&gt;single-node&lt;/code&gt; cluster. It is currently Stopped, meaning it isn't consuming any RAM or CPU, but its configuration and any data it had are safely saved.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpe8mbywvguxru1olhrnl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpe8mbywvguxru1olhrnl.png" alt="pl" width="800" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;kubectl get nodes -o wide&lt;/strong&gt;: This gives a detailed view: you can see which node is the &lt;code&gt;Control Plane&lt;/code&gt; (the brain) and which is the &lt;code&gt;Worker&lt;/code&gt; (the muscle).&lt;br&gt;
This command shows the details of the nodes inside your active &lt;code&gt;lab-cluster&lt;/code&gt; profile:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;lab-cluster&lt;/code&gt;(control-plane): This is the &lt;code&gt;brain&lt;/code&gt; of your cluster. It manages the state, schedules applications, and handles the API.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;lab-cluster-m02&lt;/code&gt;: This is your second node. In a multi-node setup, this acts as a Worker node where your actual application containers (Pods) will run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ready Status&lt;/strong&gt;: Both nodes are Ready, meaning they are healthy and communicating with each other.&lt;br&gt;
And the &lt;strong&gt;-o wide&lt;/strong&gt; flag gives you deeper technical insights:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Internal-IP&lt;/strong&gt;: Your nodes have unique internal addresses (192.168.58.2 and 192.168.58.3) to talk to each other.&lt;br&gt;
&lt;strong&gt;OS-Image&lt;/strong&gt;: They are running Debian GNU/Linux 12 inside their Docker containers.&lt;br&gt;
&lt;strong&gt;Container-Runtime&lt;/strong&gt;: They are using Docker v1.35.1 to actually spin up the containers.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgr1q9mm386dkzgzhrfz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgr1q9mm386dkzgzhrfz.png" alt="getsnodewide" width="800" height="48"&gt;&lt;/a&gt;&lt;br&gt;
The role of the second node is labeled &lt;code&gt;none&lt;/code&gt; because it wasn't specified during creation.&lt;br&gt;
Now we'll stop the &lt;code&gt;lab-cluster&lt;/code&gt; and start the &lt;code&gt;minikube&lt;/code&gt;.&lt;br&gt;
This is known as &lt;strong&gt;Switching Contexts&lt;/strong&gt; and should be mastered.&lt;br&gt;
If you want to go back to your first cluster, you don't need to delete anything. You just switch the &lt;code&gt;Active&lt;/code&gt; pointer:&lt;/p&gt;

&lt;p&gt;Stop current: &lt;strong&gt;minikube stop -p lab-cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxmvw3h6a74bygkounxi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxmvw3h6a74bygkounxi.png" alt="stoplabcl" width="800" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Switch &amp;amp; Start: &lt;strong&gt;minikube start -p minikube&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqkgbj9di9y85en119da.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqkgbj9di9y85en119da.png" alt="startmin" width="800" height="190"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Advanced Minikube features
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;minikube profile list&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxabm71iajfo6hrkiyq0y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxabm71iajfo6hrkiyq0y.png" alt="minikubeprofilelist" width="800" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In order to set the active profile to &lt;code&gt;lab-cluster&lt;/code&gt;, we'll use the command: &lt;strong&gt;minikube profile lab-cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7esor478x23cvvy0sjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7esor478x23cvvy0sjc.png" alt="settolabcluster" width="800" height="68"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then start Minikube again using &lt;strong&gt;minikube start&lt;/strong&gt;.&lt;br&gt;
When it is time to run the cluster again, simply run the &lt;strong&gt;minikube start&lt;/strong&gt; command (the driver option is not required), and it will restart the earlier bootstrapped Minikube cluster.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frdz7xt94m2w6m2tee0sa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frdz7xt94m2w6m2tee0sa.png" alt="mini3start" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, if I want the terminal output to look organized and show &lt;code&gt;worker&lt;/code&gt; as the role, I can manually assign the role using the label command. Run this in your VS Code terminal:&lt;/p&gt;
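&lt;p&gt;A sketch of that label command, assuming the worker node is named &lt;code&gt;lab-cluster-m02&lt;/code&gt; as shown earlier:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Assign the "worker" role label so it shows up under ROLES in kubectl get nodes
kubectl label node lab-cluster-m02 node-role.kubernetes.io/worker=worker
&lt;/code&gt;&lt;/pre&gt;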

&lt;p&gt;then change context to &lt;code&gt;lab-cluster&lt;/code&gt;(to make the target cluster &lt;code&gt;lab-cluster&lt;/code&gt;) and then run &lt;code&gt;get nodes&lt;/code&gt; command.&lt;br&gt;
 is now &lt;code&gt;worker!&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyomhgt8rwnllnd8gdwye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyomhgt8rwnllnd8gdwye.png" alt="getnodes" width="800" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the &lt;strong&gt;minikube profile list&lt;/strong&gt; command; the profile will now be set to &lt;code&gt;lab-cluster&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bysprkthvnfdh3juvsn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bysprkthvnfdh3juvsn.png" alt="profilelist" width="800" height="166"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run &lt;strong&gt;kubectl config view&lt;/strong&gt;; it gives detailed information about the cluster and its nodes.&lt;br&gt;
The &lt;code&gt;kubeconfig&lt;/code&gt; includes the API Server's endpoint: https://127.0.0.1:49687, and the minikube user's client authentication key and certificate data.&lt;/p&gt;

&lt;p&gt;With &lt;code&gt;kubectl&lt;/code&gt; installed, we can display information about the Minikube Kubernetes cluster with the &lt;strong&gt;kubectl cluster-info&lt;/strong&gt; command:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92y5wcga9as8ko14i7cn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92y5wcga9as8ko14i7cn.png" alt="view" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the &lt;strong&gt;kubectl cluster-info&lt;/strong&gt; command; it shows the IP address the cluster is running at.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0yqaitr2w2hnb7gf8nr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0yqaitr2w2hnb7gf8nr.png" alt="kubectl" width="800" height="66"&gt;&lt;/a&gt;&lt;br&gt;
&lt;code&gt;Kubernetes master&lt;/code&gt; is running at https://127.0.0.1:49687&lt;br&gt;
KubeDNS is running at https://127.0.0.1:49687/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy&lt;/p&gt;

&lt;h2&gt;
  
  
  The Kubernetes Dashboard
&lt;/h2&gt;

&lt;p&gt;The Kubernetes Dashboard provides a &lt;strong&gt;web-based user interface&lt;/strong&gt; for Kubernetes cluster management. &lt;code&gt;Minikube&lt;/code&gt; installs the Dashboard as an &lt;code&gt;addon&lt;/code&gt;, but it is disabled by default. Prior to using the Dashboard, we are required to enable the Dashboard &lt;code&gt;addon&lt;/code&gt;, together with the &lt;code&gt;metrics-server&lt;/code&gt; addon, a helper addon designed to collect usage metrics from the Kubernetes cluster. To access the Dashboard from &lt;code&gt;Minikube&lt;/code&gt;, we can use the &lt;strong&gt;minikube dashboard&lt;/strong&gt; command, which opens a new tab in our web browser displaying the Kubernetes Dashboard, but only after we list the available addons, enable the required ones, and verify their state:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ minikube addons list

$ minikube addons enable metrics-server

$ minikube addons enable dashboard

$ minikube addons list

$ minikube dashboard
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Run &lt;strong&gt;minikube addons list&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvrf6x4ye4st0gimo9g4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvrf6x4ye4st0gimo9g4.png" alt="addonlist" width="800" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In order to enable metrics-server addon, run &lt;strong&gt;minikube addons enable metrics-server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkwsqcsz8zbdsfbjx2po.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkwsqcsz8zbdsfbjx2po.png" alt="enableser" width="800" height="123"&gt;&lt;/a&gt;&lt;br&gt;
Verify that the metrics-server is now enabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pb671f2p7fnfvvaynoc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pb671f2p7fnfvvaynoc.png" alt="enabled" width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run &lt;strong&gt;minikube addons enable dashboard&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famg2mifzt9tredzxyxip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famg2mifzt9tredzxyxip.png" alt="dashboard" width="800" height="167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Verify that the dashboard is enabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwou7i642v774f9z3xei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwou7i642v774f9z3xei.png" alt="dashenabled" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the &lt;strong&gt;minikube dashboard&lt;/strong&gt; command, and a URL is displayed that opens a new browser tab when clicked.&lt;br&gt;
Or you can simply run &lt;strong&gt;minikube dashboard --url&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqz115e8b4mvvcop8dqzp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqz115e8b4mvvcop8dqzp.png" alt="dashhttp" width="800" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The dashboard is empty as expected.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffv336omcnq6w2wejal0i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffv336omcnq6w2wejal0i.png" alt="nothing to view" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So we'll create one pod using this command: &lt;strong&gt;kubectl run my-first-pod --image=nginx&lt;/strong&gt;&lt;/p&gt;
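&lt;p&gt;After creating the pod, we can confirm it from the CLI as well (a simple sketch; the pod name matches the command above):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Create the pod
kubectl run my-first-pod --image=nginx

# Watch it move from ContainerCreating to Running
kubectl get pods
&lt;/code&gt;&lt;/pre&gt;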

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m53swtp6vh3lvyusa9b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m53swtp6vh3lvyusa9b.png" alt="onepodcommand" width="800" height="133"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Verify that it's now displayed on the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgi7whzqb3c894shje98.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgi7whzqb3c894shje98.png" alt="1pod" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the &lt;code&gt;logs&lt;/code&gt; command for &lt;code&gt;my-first-pod&lt;/code&gt;.&lt;/p&gt;
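&lt;p&gt;A sketch of that command:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Print the container logs for the pod we just created
kubectl logs my-first-pod
&lt;/code&gt;&lt;/pre&gt;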

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffy3ndvebed76xcuew917.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffy3ndvebed76xcuew917.png" alt="logs" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Viewing logs can also be done from the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnezgc6w1hbsgq33sa0n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnezgc6w1hbsgq33sa0n.png" alt="fromdash" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;APIs with kubectl proxy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When we issue the &lt;code&gt;kubectl proxy&lt;/code&gt; command, kubectl authenticates with the API Server on the control plane node and makes services available on the default proxy port 8001.&lt;/p&gt;

&lt;p&gt;First, we issue the &lt;code&gt;kubectl&lt;/code&gt; proxy command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl proxy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Starting to serve on 127.0.0.1:8001&lt;/p&gt;

&lt;p&gt;It locks the terminal for as long as the proxy is running, unless we run it in the background (with kubectl proxy &amp;amp;).&lt;/p&gt;

&lt;p&gt;When kubectl proxy is running, we can send requests to the API over the localhost on the default proxy port 8001 (from another terminal, since the proxy locks the first terminal when running in foreground):&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ curl &lt;a href="http://localhost:8001/" rel="noopener noreferrer"&gt;http://localhost:8001/&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffh20r4x5weqli3wqkb4q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffh20r4x5weqli3wqkb4q.png" alt="curl failed" width="800" height="108"&gt;&lt;/a&gt;&lt;br&gt;
But it worked on the browser:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tmp4vdyxw0rfxkfuurk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tmp4vdyxw0rfxkfuurk.png" alt=":8001 on browser" width="800" height="662"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So we'll use another terminal, because the proxy is occupying the first one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq7salavv5qtgrk90bjg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq7salavv5qtgrk90bjg.png" alt="didnotfail" width="800" height="425"&gt;&lt;/a&gt;&lt;br&gt;
This works!&lt;/p&gt;
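&lt;p&gt;With the proxy still running, we can also explore specific API paths over localhost (a sketch; these are standard Kubernetes API endpoints):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# List the supported API versions
curl http://localhost:8001/api

# List pods in the default namespace through the proxy
curl http://localhost:8001/api/v1/namespaces/default/pods
&lt;/code&gt;&lt;/pre&gt;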

&lt;p&gt;I stopped the workloads from the dashboard and verified using the command line:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fphcc5hb5zl04bwb1i8lh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fphcc5hb5zl04bwb1i8lh.png" alt="veriyfromcmd" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Mastering &lt;code&gt;Minikube&lt;/code&gt; is about more than just starting a cluster; it’s about creating a reliable, reproducible environment that mirrors the complexities of the cloud. By moving beyond default settings and embracing multi-node profiles, you transition from a student of Kubernetes to an engineer capable of architecting resilient systems.&lt;/p&gt;

&lt;p&gt;As you continue building, remember that a well-organized local environment is the foundation of a successful deployment pipeline. Whether you are assigning worker roles in the CLI or monitoring pod health on the dashboard, these skills ensure that your infrastructure is as robust as the code running on it.&lt;/p&gt;

&lt;p&gt;Happy &lt;em&gt;Kube-ing!&lt;/em&gt; Post by rahimah_dev&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>minikube</category>
      <category>infrastructure</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Mastering Azure Monitor: Deployment and Configuration</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Thu, 16 Apr 2026 19:36:28 +0000</pubDate>
      <link>https://forem.com/rahimah_dev/mastering-azure-monitor-deployment-and-configuration-45nl</link>
      <guid>https://forem.com/rahimah_dev/mastering-azure-monitor-deployment-and-configuration-45nl</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;While working on a comprehensive deployment of Azure Monitor, I hit a common but frustrating wall: the dreaded &lt;code&gt;SubscriptionIsOverQuotaForSku&lt;/code&gt; error. Instead of stopping, I pivoted, re-engineering my deployment across Korea Central and East US to maintain uptime and visibility (since we're in a learning environment).&lt;/p&gt;

&lt;p&gt;In this post, I’ll walk you through how I deployed a hybrid environment featuring &lt;em&gt;Windows Server&lt;/em&gt; (IIS), &lt;em&gt;Linux&lt;/em&gt; (Ubuntu), and a &lt;em&gt;SQL-backed Web App&lt;/em&gt;, all while configuring the &lt;strong&gt;observability&lt;/strong&gt; layers needed to keep a modern enterprise running. &lt;/p&gt;

&lt;p&gt;Here is a scenario that would urgently require the tasks below: an insurance firm just suffered a minor &lt;em&gt;brute-force&lt;/em&gt; attack because a junior dev left a virtual machine open to the entire internet. The CTO orders an immediate &lt;em&gt;lockdown&lt;/em&gt; of all infrastructure.&lt;/p&gt;

&lt;p&gt;My Task: I changed the &lt;strong&gt;RDP&lt;/strong&gt; Source to &lt;strong&gt;My IP&lt;/strong&gt; and manually configured Inbound Security Rules for HTTP (Port 80).&lt;/p&gt;

&lt;p&gt;The necessity: This is a critical security task. By restricting RDP access to only my specific IP address, I effectively &lt;em&gt;closed the front door&lt;/em&gt; to hackers. &lt;/p&gt;
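&lt;p&gt;For reference, the same lockdown can be scripted with the Azure CLI. This is a hedged sketch: the NSG name &lt;code&gt;WS-VM1-nsg&lt;/code&gt; and rule name &lt;code&gt;RDP&lt;/code&gt; are assumptions based on the portal's auto-generated defaults and may differ in your deployment:&lt;/p&gt;

```shell
# Look up the public IP this workstation presents to the internet
MY_IP=$(curl -s https://ifconfig.me)

# Restrict the existing RDP rule to that single address
# (NSG and rule names assume the portal's auto-generated defaults)
az network nsg rule update \
  --resource-group rg-alpha \
  --nsg-name WS-VM1-nsg \
  --name RDP \
  --source-address-prefixes "$MY_IP/32"
```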

&lt;p&gt;This exercise should take approximately &lt;strong&gt;30&lt;/strong&gt; minutes to complete.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prepare your bring-your-own-subscription (BYOS)
&lt;/h2&gt;

&lt;p&gt;This set of lab exercises assumes that you have global administrator permissions to an Azure subscription.&lt;/p&gt;

&lt;p&gt;1.In the Azure Portal Search Bar, enter &lt;strong&gt;Resource Groups&lt;/strong&gt; and select &lt;strong&gt;Resource groups&lt;/strong&gt; from the list of results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf9x23gnt2qb6vs3cj6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf9x23gnt2qb6vs3cj6u.png" alt="rg" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2.On the &lt;strong&gt;Resource Groups&lt;/strong&gt; page, select &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5binfzzx6sbc9xo5ypxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5binfzzx6sbc9xo5ypxp.png" alt="create" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.On the &lt;strong&gt;Create a Resource Group&lt;/strong&gt; page, select your subscription and enter the name &lt;code&gt;rg-alpha&lt;/code&gt;. Set the region to East US, choose &lt;strong&gt;Review + Create&lt;/strong&gt;, and then choose &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcp9n4fg94cx4vms0fvpy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcp9n4fg94cx4vms0fvpy.png" alt="name" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzjjifuxd0g6bg8dke49.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzjjifuxd0g6bg8dke49.png" alt="create" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NOTE: This set of exercises assumes that you choose to deploy in the East US Region, but you can change this to another region if you choose. Just remember that each time you see East US mentioned in these instructions you will need to substitute the region you have chosen&lt;/em&gt;.&lt;/p&gt;
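&lt;p&gt;If you prefer the CLI, the same resource group can be created in one command (a sketch; &lt;code&gt;eastus&lt;/code&gt; is the CLI name for the East US region):&lt;/p&gt;

```shell
# Create the resource group used by all the following exercises
az group create --name rg-alpha --location eastus
```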

&lt;h2&gt;
  
  
  Create App Log Examiners security group
&lt;/h2&gt;

&lt;p&gt;In this exercise, you create an &lt;code&gt;Entra&lt;/code&gt; ID security group.&lt;/p&gt;

&lt;p&gt;1.In the Azure Portal Search Bar, enter &lt;strong&gt;Microsoft Entra ID&lt;/strong&gt; (formerly Azure Active Directory) and select it from the list of results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzeu8stl66nkblpmpsvef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzeu8stl66nkblpmpsvef.png" alt="Entra" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2.On the &lt;strong&gt;Default Directory&lt;/strong&gt; page, select, &lt;strong&gt;+ Add&lt;/strong&gt;, then &lt;strong&gt;Groups&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxbntzjwqfwmqn5v5mrx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxbntzjwqfwmqn5v5mrx.png" alt="grps" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.On the &lt;strong&gt;New Group&lt;/strong&gt; page, provide the values in the following table and choose &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Group type&lt;/td&gt;
&lt;td&gt;Security&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Group name&lt;/td&gt;
&lt;td&gt;App Log Examiners&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Group description&lt;/td&gt;
&lt;td&gt;App Log Examiners&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faafuialpwngvurakf7oy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faafuialpwngvurakf7oy.png" alt="grps" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy and configure WS-VM1
&lt;/h2&gt;

&lt;p&gt;In this exercise, you deploy and configure a Windows Server virtual machine.&lt;/p&gt;

&lt;p&gt;1.In the Azure Portal Search Bar, enter &lt;strong&gt;Virtual Machines&lt;/strong&gt; and select &lt;strong&gt;Virtual Machines&lt;/strong&gt; from the list of results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww0h86oi7axfzzd2f1cn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww0h86oi7axfzzd2f1cn.png" alt="vm" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2.On the &lt;strong&gt;Virtual Machines&lt;/strong&gt; page, choose &lt;strong&gt;Create&lt;/strong&gt; and select &lt;strong&gt;Azure Virtual Machine&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6gg6zr8l14tvxrbdmrv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6gg6zr8l14tvxrbdmrv.png" alt="azurevm" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.On the &lt;strong&gt;Basics&lt;/strong&gt; page of the Create A Virtual Machine wizard, select the following settings and then choose &lt;strong&gt;Review + Create&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Subscription&lt;/td&gt;
&lt;td&gt;Your subscription&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resource Group&lt;/td&gt;
&lt;td&gt;rg-alpha&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Virtual machine name&lt;/td&gt;
&lt;td&gt;WS-VM1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Region&lt;/td&gt;
&lt;td&gt;East US&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Availability options&lt;/td&gt;
&lt;td&gt;No infrastructure redundancy required&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security type&lt;/td&gt;
&lt;td&gt;Standard&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image&lt;/td&gt;
&lt;td&gt;Windows Server 2022 Datacenter: Azure Edition – x64 Gen2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VM architecture&lt;/td&gt;
&lt;td&gt;x64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Size&lt;/td&gt;
&lt;td&gt;Standard_D4s_v3 – 4 vcpus, 16 GiB memory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Administrator account&lt;/td&gt;
&lt;td&gt;prime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Password&lt;/td&gt;
&lt;td&gt;[Select a unique secure password] P@ssw0rdP@ssw0rd&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Inbound ports&lt;/td&gt;
&lt;td&gt;RDP 3389&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbnjb4e06bh0b2brx4mz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbnjb4e06bh0b2brx4mz.png" alt="create" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdk1jgzx0l1jphhat7q9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdk1jgzx0l1jphhat7q9m.png" alt="create" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8hu77vvfn7vindwxaa6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8hu77vvfn7vindwxaa6.png" alt="create" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4.Review the settings and select &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06blxoig8d13pway0q0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06blxoig8d13pway0q0r.png" alt="create" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5.Wait for the deployment to complete. Once deployment completes choose &lt;strong&gt;Go to resource&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8s5v6s48aqdykxfca3xl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8s5v6s48aqdykxfca3xl.png" alt="gtr" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6.On the &lt;strong&gt;WS-VM1 properties&lt;/strong&gt; page, choose &lt;strong&gt;Networking&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqmbgyjfv2i6hieu4jyo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqmbgyjfv2i6hieu4jyo.png" alt="ntwk" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7.On the &lt;strong&gt;Networking&lt;/strong&gt; page, select the &lt;code&gt;RDP&lt;/code&gt; rule.&lt;/p&gt;

&lt;p&gt;8.On the RDP rule pane, change the Source to &lt;strong&gt;My IP address&lt;/strong&gt; and choose &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuj2ugsdnshrvy7zskpz3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuj2ugsdnshrvy7zskpz3.png" alt="rdp" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This restricts incoming RDP connections to the IP address you’re currently using.&lt;br&gt;
9.On the &lt;strong&gt;Networking&lt;/strong&gt; page, choose &lt;strong&gt;Add inbound port rule&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9jc58rwndrnefahhrgz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9jc58rwndrnefahhrgz.png" alt="portrules" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkc93rgn5ytbgsm811xj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkc93rgn5ytbgsm811xj.png" alt="inboundrules" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;10.On the &lt;strong&gt;Add inbound security rule&lt;/strong&gt; page, configure the following settings and choose &lt;strong&gt;Add&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Source&lt;/td&gt;
&lt;td&gt;Any&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Source port ranges&lt;/td&gt;
&lt;td&gt;*&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Destination&lt;/td&gt;
&lt;td&gt;Any&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Service&lt;/td&gt;
&lt;td&gt;HTTP&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Action&lt;/td&gt;
&lt;td&gt;Allow&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Priority&lt;/td&gt;
&lt;td&gt;310&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Name&lt;/td&gt;
&lt;td&gt;AllowAnyHTTPInbound&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqn6t9v5f9a5cutskswc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqn6t9v5f9a5cutskswc.png" alt="rules" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;11.On the &lt;strong&gt;WS-VM1&lt;/strong&gt; page, choose &lt;strong&gt;Connect&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h47o8t2gznzzsh4s6o0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h47o8t2gznzzsh4s6o0.png" alt="connect" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;12.Under Native RDP, choose &lt;strong&gt;Select&lt;/strong&gt;.&lt;br&gt;
13.On the &lt;strong&gt;Native RDP&lt;/strong&gt; page, choose &lt;strong&gt;Download RDP file&lt;/strong&gt; and then open the file. Opening the RDP file opens the Remote Desktop Connection dialog box.&lt;/p&gt;

&lt;p&gt;14.On the &lt;strong&gt;Windows Security&lt;/strong&gt; dialog box, choose &lt;strong&gt;More Choices&lt;/strong&gt; and then choose Use a different account.&lt;/p&gt;

&lt;p&gt;15.Enter the username as .\prime and the password as the secure password you chose in Step 3, and choose &lt;strong&gt;OK&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehswqzxxhm70r1rtrjob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehswqzxxhm70r1rtrjob.png" alt="securitybox" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;16.When signed into the Windows Server virtual machine, right-click the &lt;strong&gt;Start&lt;/strong&gt; button and then choose &lt;strong&gt;Windows PowerShell (Admin)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;17.At the elevated PowerShell prompt, type the following command and press &lt;strong&gt;Enter&lt;/strong&gt;: &lt;code&gt;Install-WindowsFeature Web-Server -IncludeAllSubFeature -IncludeManagementTools&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfylj0v7ckfc91xbu6it.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfylj0v7ckfc91xbu6it.png" alt="Installing" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;18.When the installation completes, run the following command to change to the web server root directory: &lt;code&gt;cd c:\inetpub\wwwroot\&lt;/code&gt;&lt;br&gt;
19.Run the following command: &lt;code&gt;wget &lt;a href="https://raw.githubusercontent.com/Azure-Samples/html-docs-hello-world/master/index.html" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/Azure-Samples/html-docs-hello-world/master/index.html&lt;/a&gt; -OutFile index.html&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4vvc0a6yoqf6gi0kxyz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4vvc0a6yoqf6gi0kxyz.png" alt="complete" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy and configure LX-VM2
&lt;/h2&gt;

&lt;p&gt;In this exercise you deploy and configure a Linux virtual machine.&lt;/p&gt;

&lt;p&gt;1.In the Azure Portal Search Bar, enter &lt;strong&gt;Virtual Machines&lt;/strong&gt; and select &lt;strong&gt;Virtual Machines&lt;/strong&gt; from the list of results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbinra28zxr2wynji4qk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbinra28zxr2wynji4qk.png" alt="createvm" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2.On the &lt;strong&gt;Virtual Machines&lt;/strong&gt; page, choose &lt;strong&gt;Create&lt;/strong&gt; and select &lt;strong&gt;Azure Virtual Machine&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cyo2pct3bd9dct6g878.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cyo2pct3bd9dct6g878.png" alt="vmcreate" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.On the &lt;strong&gt;Basics&lt;/strong&gt; page of the Create A Virtual Machine wizard, select the following settings and then choose &lt;strong&gt;Review + Create&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Subscription&lt;/td&gt;
&lt;td&gt;Your subscription&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resource Group&lt;/td&gt;
&lt;td&gt;rg-alpha&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Virtual machine name&lt;/td&gt;
&lt;td&gt;Linux-VM2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Region&lt;/td&gt;
&lt;td&gt;East US&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Availability options&lt;/td&gt;
&lt;td&gt;No infrastructure redundancy required&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security type&lt;/td&gt;
&lt;td&gt;Standard&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image&lt;/td&gt;
&lt;td&gt;Ubuntu Server 20.04 LTS – x64 Gen2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VM architecture&lt;/td&gt;
&lt;td&gt;x64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Size&lt;/td&gt;
&lt;td&gt;Standard_D2s_v3 – 2 vcpus, 8 GiB memory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Authentication type&lt;/td&gt;
&lt;td&gt;Password&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Username&lt;/td&gt;
&lt;td&gt;Prime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Password&lt;/td&gt;
&lt;td&gt;[Select a unique secure password] P@ssw0rdP@ssw0rd&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Public inbound ports&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsnjl7vyffzwyit4w1py6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsnjl7vyffzwyit4w1py6.png" alt="vm" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxmklw6swvr2kunbatpi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxmklw6swvr2kunbatpi.png" alt="vm" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4.Review the information and choose &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z1ubdj80epwr3ctm9pc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z1ubdj80epwr3ctm9pc.png" alt="review" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5.After the VM deploys, open the &lt;strong&gt;VM properties&lt;/strong&gt; page and choose &lt;strong&gt;Extensions + Applications&lt;/strong&gt; under &lt;strong&gt;Settings&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxh49kqiwonnrtt6czm3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxh49kqiwonnrtt6czm3.png" alt="xtensn" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6.Choose &lt;strong&gt;Add&lt;/strong&gt; and select the &lt;strong&gt;Network Watcher Agent for Linux&lt;/strong&gt;. Choose &lt;strong&gt;Next&lt;/strong&gt; and then choose &lt;strong&gt;Review + Create&lt;/strong&gt;. Choose &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdodt1quf5191htzmhcst.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdodt1quf5191htzmhcst.png" alt="nightwatcher" width="800" height="755"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qi2ml8pfv4giwk9l7qd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qi2ml8pfv4giwk9l7qd.png" alt="create" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NOTE: The installation and configuration of the OmsAgentForLinux extension will be performed in Exercise 2 after the Log Analytics workspace is created&lt;/em&gt;.&lt;/p&gt;
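&lt;p&gt;Installing the extension from the CLI looks like this, as a sketch (the VM name follows the table above; the extension and publisher names are the documented ones for the Linux Network Watcher agent):&lt;/p&gt;

```shell
# Install the Network Watcher Agent extension on the Linux VM
az vm extension set \
  --resource-group rg-alpha \
  --vm-name Linux-VM2 \
  --name NetworkWatcherAgentLinux \
  --publisher Microsoft.Azure.NetworkWatcher
```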

&lt;h2&gt;
  
  
  Deploy a web app with an SQL Database
&lt;/h2&gt;

&lt;p&gt;1.Ensure that you’re signed into the Azure Portal.&lt;br&gt;
2.In your browser, open a new browser tab and navigate to &lt;a href="https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-sql-database" rel="noopener noreferrer"&gt;https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-sql-database&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.On the GitHub page, choose &lt;strong&gt;Deploy to Azure&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kz7z0d3p6z6zlgnqrec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kz7z0d3p6z6zlgnqrec.png" alt="github" width="800" height="404"&gt;&lt;/a&gt;&lt;br&gt;
4.A new tab opens. If necessary, re-sign into Azure with the account that has Global Administrator privileges.&lt;br&gt;
5.On the &lt;strong&gt;Basics&lt;/strong&gt; page, select &lt;strong&gt;Edit template&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0n5dkswppph6c0dhaj96.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0n5dkswppph6c0dhaj96.png" alt="edittemplate" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6. In the template editor, delete the contents of lines 158 to 174 inclusive, and delete the trailing “,” on line 157. Choose &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwp1wqeidifryfici56na.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwp1wqeidifryfici56na.png" alt="deletelines" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7. On the &lt;strong&gt;Basics&lt;/strong&gt; page, provide the following information and choose &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Subscription&lt;/td&gt;
&lt;td&gt;Your subscription&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resource Group&lt;/td&gt;
&lt;td&gt;rg-alpha&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Region&lt;/td&gt;
&lt;td&gt;East US&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sku Name&lt;/td&gt;
&lt;td&gt;F1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sku Capacity&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sql Administrator Login&lt;/td&gt;
&lt;td&gt;prime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sql Administrator Login Password&lt;/td&gt;
&lt;td&gt;P@ssw0rdP@ssw0rd (select a unique, secure password of your own)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
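&lt;p&gt;For repeatable runs, the same &lt;strong&gt;Basics&lt;/strong&gt; values can be captured in an ARM parameters file. This is only a sketch: the parameter names below are assumed to match the quickstart template, so verify them against its azuredeploy.json before use.&lt;/p&gt;

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "skuName": { "value": "F1" },
    "skuCapacity": { "value": 1 },
    "sqlAdministratorLogin": { "value": "prime" },
    "sqlAdministratorLoginPassword": { "value": "P@ssw0rdP@ssw0rd" }
  }
}
```

&lt;p&gt;Such a file can then be supplied to &lt;code&gt;az deployment group create&lt;/code&gt; via its --parameters option, although this walkthrough continues in the Portal.&lt;/p&gt;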

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqkdzcxaf3xxhov8l7yb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqkdzcxaf3xxhov8l7yb.png" alt="create" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmwpolareylqbeh4wqzd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmwpolareylqbeh4wqzd.png" alt="error" width="800" height="382"&gt;&lt;/a&gt;&lt;br&gt;
Quota limits are often region-specific: if you are hitting a zero quota in one location, another region might have availability.&lt;br&gt;
Go back to the &lt;strong&gt;Basics&lt;/strong&gt; tab and try switching the Region to a major hub like East US, West US 2, or North Europe.&lt;/p&gt;

&lt;p&gt;In my case, Korea Central has recently been a good alternative when I face subscription roadblocks in other regions.&lt;br&gt;
Hence, I changed the resource group to &lt;code&gt;rg-alpha2&lt;/code&gt; and the Region to &lt;code&gt;Korea Central&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NOTE: If you absolutely need a specific region and size, you have to ask Microsoft to "unlock" it for you by requesting a quota increase&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyierzmc315x95y5tt058.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyierzmc315x95y5tt058.png" alt="change" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8. Review the information presented and select &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekino22uv804ty0fi5wx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekino22uv804ty0fi5wx.png" alt="create" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;9. After the deployment completes, choose &lt;strong&gt;Go to resource group&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz88h2pvbej24lqn3y4vl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz88h2pvbej24lqn3y4vl.png" alt="gtd" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7e1vqrzwuqau6o74ekmb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7e1vqrzwuqau6o74ekmb.png" alt="overview" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy a Linux web app
&lt;/h2&gt;

&lt;p&gt;1. Ensure that you’re signed into the Azure Portal.&lt;br&gt;
2. In your browser, open a new tab and navigate to &lt;a href="https://learn.microsoft.com/en-us/samples/azure/azure-quickstart-templates/webapp-basic-linux/" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/samples/azure/azure-quickstart-templates/webapp-basic-linux/&lt;/a&gt;&lt;br&gt;
3. On the sample page, choose &lt;strong&gt;Deploy to Azure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3m8v9y1ohvcvkr2lv2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3m8v9y1ohvcvkr2lv2a.png" alt="deploytoazure" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4. On the &lt;strong&gt;Basics&lt;/strong&gt; page, provide the following information and choose &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Subscription&lt;/td&gt;
&lt;td&gt;Your subscription&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resource Group&lt;/td&gt;
&lt;td&gt;rg-alpha&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Region&lt;/td&gt;
&lt;td&gt;East US&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Web app Name&lt;/td&gt;
&lt;td&gt;AzureLinuxAppWXYZ (assign a random number to the final four characters of the name)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sku&lt;/td&gt;
&lt;td&gt;S1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linux Fx version&lt;/td&gt;
&lt;td&gt;php|7.4&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
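&lt;p&gt;Web app names must be globally unique across Azure, so the random suffix the table asks for matters. A quick way to generate one in the shell (the variable names are illustrative):&lt;/p&gt;

```shell
# Generate a web app name with a random four-digit suffix (names illustrative).
SUFFIX=$(( RANDOM % 9000 + 1000 ))   # always 1000-9999, i.e. four digits
APP_NAME="AzureLinuxApp${SUFFIX}"
echo "$APP_NAME"
```

&lt;p&gt;If the name still collides, rerun the snippet for a fresh suffix.&lt;/p&gt;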

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3od68147m5u002qc5urh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3od68147m5u002qc5urh.png" alt="details" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5. Review the information and choose &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj51oaqqpdnxcfz9ekn01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj51oaqqpdnxcfz9ekn01.png" alt="error" width="800" height="385"&gt;&lt;/a&gt;&lt;br&gt;
Another deployment error appeared, so I changed the resource group and Region again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam57vvfehduj826idf1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam57vvfehduj826idf1d.png" alt="change" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqe50mm8m52o9ab68rl9q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqe50mm8m52o9ab68rl9q.png" alt="create" width="800" height="497"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Deployment failed&lt;/strong&gt; because the web app name already exists. Click &lt;strong&gt;Redeploy&lt;/strong&gt; and edit the name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnpqqzd1mz1fcy641sbk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnpqqzd1mz1fcy641sbk.png" alt="failed to deploy" width="800" height="440"&gt;&lt;/a&gt;&lt;br&gt;
6. Now that the deployment is complete, select &lt;strong&gt;Go To Resource&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwf7vi997sto7d9o0rapx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwf7vi997sto7d9o0rapx.png" alt="resource" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The "Invisible Skill of Cloud Engineering&lt;/strong&gt;&lt;br&gt;
Setting up a multi-tier environment on Azure is more than a checklist of installations, it is a lesson in resiliency. This project challenged me to manage a &lt;strong&gt;hybrid&lt;/strong&gt; ecosystem—bridging Windows Server management, Linux observability, and SQL-backed web applications, all while navigating real-world infrastructure constraints like subscription quotas and regional limitations.&lt;/p&gt;

</description>
      <category>powershell</category>
      <category>devops</category>
      <category>linux</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>KUBERNETES - Deploying a Standalone Application 1</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Sun, 05 Apr 2026 16:38:43 +0000</pubDate>
      <link>https://forem.com/rahimah_dev/deploying-a-standalone-application-1-le0</link>
      <guid>https://forem.com/rahimah_dev/deploying-a-standalone-application-1-le0</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes&lt;/strong&gt; has a reputation for being a &lt;strong&gt;wall of YAML&lt;/strong&gt;, but it doesn't have to start that way. If you’re looking for a visual, hands-on way to understand how &lt;em&gt;Pods, Deployments, and Services&lt;/em&gt; actually interact, you’re in the right place. Today, we’re firing up the &lt;code&gt;Minikube Dashboard&lt;/code&gt; to deploy a &lt;strong&gt;standalone web server&lt;/strong&gt; with just a few clicks. By the end of this post, you won't just have an &lt;code&gt;Nginx server&lt;/code&gt; running, you'll understand the &lt;code&gt;labels&lt;/code&gt; and &lt;code&gt;selectors&lt;/code&gt; that hold the entire K8s ecosystem together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning Objectives&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By the end of this series, you should be able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy an application from the dashboard.&lt;/li&gt;
&lt;li&gt;Deploy an application from a &lt;code&gt;YAML&lt;/code&gt; file using &lt;code&gt;kubectl&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Expose a service using &lt;code&gt;NodePort&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Access the application from outside the &lt;code&gt;Minikube&lt;/code&gt; cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deploying an Application Using the Dashboard (1)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's learn how to deploy an &lt;code&gt;nginx webserver&lt;/code&gt; using the nginx container image from Docker Hub.&lt;/p&gt;

&lt;p&gt;Start &lt;code&gt;Minikube&lt;/code&gt; and verify that it is running. Run this command first:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ minikube start&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftinwuuw0lqdld8a9h3xu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftinwuuw0lqdld8a9h3xu.png" alt="ministart" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then verify &lt;strong&gt;Minikube&lt;/strong&gt; status:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ minikube status&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fungyk6pz99tm8sf7wn0a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fungyk6pz99tm8sf7wn0a.png" alt="ministatus" width="800" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Start the &lt;strong&gt;Minikube&lt;/strong&gt; Dashboard. To access the Kubernetes &lt;strong&gt;Web UI&lt;/strong&gt;, we need to run the following command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ minikube dashboard&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Running this command will open up a browser with the Kubernetes &lt;strong&gt;Web UI&lt;/strong&gt;, which we can use to manage containerized applications. By default, the dashboard is connected to the default Namespace. Therefore, all the operations will be performed inside the default Namespace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyk5nzsrx8crkrlc9fuu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyk5nzsrx8crkrlc9fuu.png" alt="dashboard" width="800" height="402"&gt;&lt;/a&gt;&lt;br&gt;
Deploying an Application - Accessing the Dashboard&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; If the browser does not open a new tab displaying the Dashboard as expected, check the output in your terminal: it may display a link for the Dashboard (together with some error messages). Copy and paste that link into a new tab of your browser. Depending on your terminal's features, you may be able to simply click or right-click the link to open it directly in the browser.&lt;/p&gt;

&lt;p&gt;The link may look similar to:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://127.0.0.1:40235/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/" rel="noopener noreferrer"&gt;http://127.0.0.1:40235/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Chances are that the only difference is the PORT number, which above is 40235. Your port number may be different.&lt;br&gt;
After a logout/login or a reboot of your workstation the expected behavior may be observed (where the minikube dashboard command directly opens a new tab in your browser displaying the Dashboard)&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploying an Application Using the Dashboard (2)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Deploy a webserver using the nginx image. From the dashboard, click on the &lt;code&gt;+&lt;/code&gt; symbol at the top right corner of the Dashboard. That will open the create interface as seen below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijz47xvi473p7d629so9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijz47xvi473p7d629so9.png" alt="plus" width="800" height="382"&gt;&lt;/a&gt;&lt;br&gt;
Create a New Application - Interface&lt;/p&gt;

&lt;p&gt;From there, we can create an application using valid YAML/JSON configuration data, from a definition manifest file, or manually from the &lt;strong&gt;Create from form&lt;/strong&gt; tab. Click on the &lt;strong&gt;Create from form&lt;/strong&gt; tab and provide the following application details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The application name is &lt;code&gt;web-dash&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The container image to use is &lt;code&gt;nginx&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The replica count, or the number of Pods, is 1.&lt;/li&gt;
&lt;li&gt;Service is External, Port 8080, Target port 80, Protocol TCP.&lt;/li&gt;
&lt;li&gt;Namespace is &lt;code&gt;default&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
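&lt;p&gt;For reference, the form inputs above correspond roughly to the following manifests. This is a sketch of what the Dashboard generates, not its exact output; object names and defaults may differ:&lt;/p&gt;

```yaml
# Sketch of the Deployment and Service the form inputs describe (names assumed).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-dash
  namespace: default
  labels:
    k8s-app: web-dash
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: web-dash
  template:
    metadata:
      labels:
        k8s-app: web-dash
    spec:
      containers:
      - name: web-dash
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-dash
  namespace: default
  labels:
    k8s-app: web-dash
spec:
  type: LoadBalancer   # "External" in the form
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
  selector:
    k8s-app: web-dash
```

&lt;p&gt;The same application could be created from such a file with &lt;code&gt;kubectl apply -f&lt;/code&gt;, which is exactly the YAML-based workflow listed in the learning objectives.&lt;/p&gt;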

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fou8266ys3iqv8ghpdvcc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fou8266ys3iqv8ghpdvcc.png" alt="deploying" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm27nvqn4zsquolwh03nc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm27nvqn4zsquolwh03nc.png" alt="deploying" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff72o2p5cd93i05itlapv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff72o2p5cd93i05itlapv.png" alt="deploying" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fslx65qjk3p6rsxuh23p6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fslx65qjk3p6rsxuh23p6.png" alt="deploy" width="800" height="383"&gt;&lt;/a&gt;&lt;br&gt;
Deploy a Containerized Application - Interface&lt;/p&gt;

&lt;p&gt;If we click on &lt;strong&gt;Show Advanced Options&lt;/strong&gt;, we can specify options such as Labels, Namespace, Resource Requests, etc. By default, the Label is set to the application name. In our example, the k8s-app: web-dash Label is applied to all objects created by this Deployment: &lt;code&gt;Pods&lt;/code&gt; and &lt;code&gt;Services&lt;/code&gt; (when exposed).&lt;/p&gt;

&lt;p&gt;By clicking on the Deploy button, we trigger the deployment. As expected, the Deployment web-dash will create a ReplicaSet (web-dash-74d8bd488f), which will eventually create 1 Pod replica (web-dash-74d8bd488f-dwbzz).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; If any issues are encountered with the simple nginx image name, use the fully qualified URL docker.io/library/nginx in the Container Image field (or the k8s.gcr.io/nginx URL if that works instead).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; The resource names are unique and are provided for illustrative purposes only. The resources in your clusters and dashboards will display different names, but the naming structure follows the same convention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploying an Application Using the Dashboard (3)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once we create the web-dash Deployment, we can use the resource navigation panel from the left side of the Dashboard to display details of &lt;strong&gt;Deployments&lt;/strong&gt;, &lt;strong&gt;ReplicaSets&lt;/strong&gt;, and &lt;strong&gt;Pods&lt;/strong&gt; in the default Namespace.&lt;/p&gt;

&lt;p&gt;From the Dashboard we can display an individual object’s properties by simply clicking the object’s name. From the commands menu symbol (the vertical three dots) at the far right we can easily manage its state. Try scaling the Deployment up to a higher number of &lt;strong&gt;replicas&lt;/strong&gt; and observe the additional Pods spin up, or scale it down to fewer replicas. Attempt to delete one of the individual Pods of the Deployment. What do you notice after a few seconds? We can even delete the Deployment, an action that results in all its Pod replicas being terminated. But for now, let’s keep the Deployment so we can analyze it further.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24gr1zxbm7uesquptohx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24gr1zxbm7uesquptohx.png" alt="depore" width="800" height="382"&gt;&lt;/a&gt;&lt;br&gt;
Dashboard displaying Deployments, Pods, and ReplicaSets&lt;/p&gt;

&lt;p&gt;The resources displayed by the Dashboard match one-to-one the resources displayed from the &lt;code&gt;CLI&lt;/code&gt; via &lt;code&gt;kubectl&lt;/code&gt;. List the Deployments. We can list all the Deployments in the default Namespace using the &lt;code&gt;kubectl get deployments&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get deployments&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;List the ReplicaSets. We can list all the ReplicaSets in the default Namespace using the &lt;code&gt;kubectl get replicasets&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get replicasets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;List the Pods. We can list all the Pods in the default Namespace using the &lt;code&gt;kubectl get pods&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get pods&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;List Deployment, ReplicaSet and Pod with a single command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get deploy,rs,po&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01t9030zz0kvc12nvyun.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01t9030zz0kvc12nvyun.png" alt="resources" width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exploring Labels and Selectors (1)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Earlier, we saw that labels and selectors play an important role in logically grouping a subset of objects to perform operations. Let's take a closer look at them.&lt;/p&gt;

&lt;p&gt;Display the Pod's details. We can look at an object's details using the &lt;code&gt;kubectl describe&lt;/code&gt; command. In the following example, you can see a Pod's description:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl describe pod web-dash-6bf994f6&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yow8r43jk8mq6edsqiw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yow8r43jk8mq6edsqiw.png" alt="describe" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ymyng2oawfx977p90cj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ymyng2oawfx977p90cj.png" alt="describe" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8gjor0xgwz59cqwfnlv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8gjor0xgwz59cqwfnlv.png" alt="describe" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exploring Labels and Selectors (2)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;List the Pods, along with their attached Labels. With the &lt;code&gt;-L&lt;/code&gt; option to the &lt;code&gt;kubectl get pods&lt;/code&gt; command, we add extra columns to the output, listing Pods with their attached Label keys and values. In the following example, we are listing Pods with the Label keys &lt;code&gt;k8s-app&lt;/code&gt; and &lt;code&gt;label2&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get pods -L k8s-app,label2&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All of the Pods are listed, as each Pod has the Label key k8s-app with value set to web-dash. We can see that in the K8S-APP column. As none of the Pods have the &lt;strong&gt;label2&lt;/strong&gt; Label key, no values are listed under the LABEL2 column.&lt;/p&gt;

&lt;p&gt;Select the Pods with a given Label. To use a selector with the &lt;code&gt;kubectl get pods&lt;/code&gt; command, we can use the &lt;code&gt;-l&lt;/code&gt; option. In the following example, we are selecting all the Pods that have the &lt;code&gt;k8s-app&lt;/code&gt; Label key set to the value web-dash:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get pods -l k8s-app=web-dash&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21hpsk0qd5gzp1knv75v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21hpsk0qd5gzp1knv75v.png" alt="pods" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the example above, we listed all the Pods we created, as all of them have the k8s-app Label key set to value web-dash.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Try using k8s-app=webserver as the Selector&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get pods -l k8s-app=webserver&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffjjoqoqc7o9us27hndht.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffjjoqoqc7o9us27hndht.png" alt="no resources" width="800" height="437"&gt;&lt;/a&gt;&lt;br&gt;
No resources found.&lt;br&gt;
&lt;em&gt;As expected, no Pods are listed.&lt;/em&gt;&lt;/p&gt;
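&lt;p&gt;The selector only matches when key and value both line up exactly. In manifest form, the data the &lt;code&gt;-l&lt;/code&gt; option queries lives under the Pod's metadata (a sketch, with assumed names):&lt;/p&gt;

```yaml
# Pod metadata fragment (sketch): labels are what selectors query.
metadata:
  labels:
    k8s-app: web-dash    # -l k8s-app=web-dash matches this Pod
# -l k8s-app=webserver matches nothing: no Pod carries that value.
```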

&lt;p&gt;&lt;strong&gt;Deploying an Application Using the CLI (1)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To deploy an application using the CLI, let's first delete the Deployment we created earlier.&lt;/p&gt;

&lt;p&gt;One method to delete the Deployment we created earlier is from the Dashboard, via the Deployment’s commands menu. Another method is the &lt;code&gt;kubectl delete&lt;/code&gt; command. Here, we delete the web-dash Deployment:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl delete deployments web-dash&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Deleting a Deployment also deletes the ReplicaSet and the Pods it created:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get replicasets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get pods&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhe093jyy1im391x7jn0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhe093jyy1im391x7jn0.png" alt="delete" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this first installment, we covered visual orchestration and lifecycle basics, establishing a rock-solid foundation for managing containerized applications. We moved beyond simple container execution and began exploring the automated world of Kubernetes orchestration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core Competencies Achieved&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Cluster Lifecycle Management:&lt;/strong&gt;&lt;br&gt;
Initiated and verified the local Kubernetes environment using minikube start and status. This confirmed the control plane, Kubelet, and API server were operational.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GUI-Driven Deployment:&lt;/strong&gt; Leveraged the Kubernetes Dashboard to deploy a standalone Nginx application. This demonstrated the "Create from Form" workflow, which simplifies resource definition for those new to the ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource Hierarchy Identification:&lt;/strong&gt; Observed the relationship between Deployments, ReplicaSets, and Pods. We verified how a single Deployment instruction automatically handles the creation of underlying ReplicaSets to ensure the desired state of our Pods.&lt;/p&gt;

&lt;p&gt;See you in the next part!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>minikube</category>
      <category>cloudnative</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Master the Linux Terminal for Modern Data Analytics</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Mon, 30 Mar 2026 13:56:35 +0000</pubDate>
      <link>https://forem.com/rahimah_dev/master-the-linux-terminal-for-modern-data-analytics-him</link>
      <guid>https://forem.com/rahimah_dev/master-the-linux-terminal-for-modern-data-analytics-him</guid>
      <description>&lt;h2&gt;
  
  
  INTRODUCTION
&lt;/h2&gt;

&lt;p&gt;In the high-stakes world of Data Analytics, your tools should never be your bottleneck. Most analysts can build a dashboard, but the elite 1% know how to handle data where it actually lives, that is, the &lt;strong&gt;Command Line&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Imagine a 10GB CSV file that crashes Excel on sight. While others wait for their GUI to load, &lt;em&gt;the modern analyst uses the Linux Terminal to slice, filter, and audit millions of rows in milliseconds&lt;/em&gt;. &lt;br&gt;
As a Data Analyst, I’ve realized that the CLI isn't just an &lt;strong&gt;'extra' skill&lt;/strong&gt;; it is the engine of efficiency in 2026. When prepping raw data for Power BI, mastering these 'black screen' secrets is how you move from being a passenger to being the pilot of your data infrastructure.&lt;/p&gt;

&lt;p&gt;Welcome to the world of Linux! Think of the Linux file system as an upside-down tree. Everything grows from a single point at the very top. What is the Root Directory? The root directory is the starting point of the entire Linux file system. &lt;br&gt;
&lt;em&gt;Every single file, folder, and drive on your computer is contained within it&lt;/em&gt;.&lt;br&gt;
It is represented by a single forward slash: &lt;code&gt;/&lt;/code&gt;.&lt;br&gt;
The "Parent": it has no parent directory; it is the absolute top level.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To get started, you have three ways to use these exact Linux commands on a Windows machine&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. WSL&lt;/strong&gt; &lt;br&gt;
WSL (Windows Subsystem for Linux) is a literal Linux system living inside your Windows computer. It’s what almost all developers use today.&lt;/p&gt;

&lt;p&gt;How to get it: Open your Windows Terminal and type &lt;code&gt;wsl --install&lt;/code&gt;.&lt;br&gt;
This installs a real Linux distribution that runs on your actual hard drive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Git Bash&lt;/strong&gt; &lt;br&gt;
If you install Git for Windows, it comes with &lt;code&gt;Git Bash&lt;/code&gt;. It’s a small emulator that lets you use Linux commands to navigate your Windows folders.&lt;/p&gt;

&lt;p&gt;In Git Bash, your C: drive is usually mapped to /c/.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. PowerShell&lt;/strong&gt; &lt;br&gt;
PowerShell actually has "aliases" for some Linux commands to make life easier for people moving between systems. It serves as a translator.&lt;br&gt;
&lt;strong&gt;NOTE&lt;/strong&gt;: Windows and Linux speak different "languages," though there are some similarities.&lt;br&gt;
Windows uses PowerShell or Command Prompt (CMD), where the root is usually &lt;code&gt;C:&lt;/code&gt;. Linux uses the Bash shell, where the root is &lt;code&gt;/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I'll be using Git Bash, which is actually one of the most popular ways for developers to use Linux commands on a Windows computer, and I already have it in VS Code.&lt;/p&gt;

&lt;p&gt;When you open Git Bash in VS Code, you are essentially running a "mini Linux environment" that can see your Windows files.&lt;/p&gt;

&lt;h2&gt;
  
  
  HOW TO USE IT
&lt;/h2&gt;

&lt;p&gt;Open the Terminal: In VS Code, press &lt;code&gt;Ctrl + `&lt;/code&gt; (the backtick key), or &lt;strong&gt;select View, then Terminal&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0113x46xwnkybutuwfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0113x46xwnkybutuwfd.png" alt="view" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Select Git Bash&lt;/strong&gt;: In the top-right corner of the terminal pane, click the dropdown arrow (usually says "powershell" or "cmd") and select &lt;code&gt;Git Bash&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ps98ymc1vsmvuaaswwo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ps98ymc1vsmvuaaswwo.png" alt="gitbash" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to explore these right now in your terminal, here is how you can use those commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Jump to Root: Type &lt;code&gt;cd /&lt;/code&gt; to move to the very top.&lt;/li&gt;
&lt;li&gt;See Where You Are: Type &lt;code&gt;pwd&lt;/code&gt; (Print Working Directory). It should just show &lt;code&gt;/&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Look Around: Type &lt;code&gt;ls&lt;/code&gt; to see all the folders (like bin, etc, and home) living inside the root. &lt;em&gt;These aren't your Windows C: drive folders, they are the virtual Linux-style folders Git Bash creates to make your commands work&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
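&lt;p&gt;Here is the same three-step check as one copy-pasteable sketch (any Linux-style shell, including Git Bash, should behave the same way):&lt;/p&gt;

```shell
# Jump to the very top of the file system tree.
cd /

# Print the current location; it should show just "/".
pwd

# List the top-level folders living inside the root (bin, etc, home, ...).
ls
```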

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6xqubo5bsoeye7di9r8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6xqubo5bsoeye7di9r8.png" alt="root directory" width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is the most important part for a beginner. Git Bash "mounts" your Windows drives inside the root&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;To go to your C: Drive, type: &lt;strong&gt;cd /c/&lt;/strong&gt;&lt;br&gt;
To go to your Desktop, type: &lt;strong&gt;cd /c/Users/YourUsername/Desktop&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Change directory to the desktop&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;NOTE: My computer uses the username &lt;code&gt;Admin&lt;/code&gt; inside the folders, even though my machine name is &lt;code&gt;RAHIMAH-ISAH&lt;/code&gt;. Linux is very literal about where things are stored.&lt;br&gt;
So &lt;strong&gt;cd /c/Users/Admin/Desktop&lt;/strong&gt; is the full "map" to my Desktop&lt;/p&gt;

&lt;p&gt;where:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;cd: Change Directory.&lt;/li&gt;
&lt;li&gt;/c/: This is your C: Drive.&lt;/li&gt;
&lt;li&gt;Desktop: This is your destination.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frytov9pog4efen7kxamw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frytov9pog4efen7kxamw.png" alt="desktop" width="800" height="72"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To move from my current location into the &lt;code&gt;My_Analytics&lt;/code&gt; folder on the Desktop, I'll use the cd (change directory) command:&lt;br&gt;
&lt;strong&gt;cd My_Analytics&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffv9ms20s6mxn764mbrhg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffv9ms20s6mxn764mbrhg.png" alt="My_analytics" width="800" height="141"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a "test" folder&lt;/strong&gt;&lt;br&gt;
To create a new folder (directory), use the &lt;code&gt;mkdir&lt;/code&gt; (make directory) command:&lt;br&gt;
&lt;strong&gt;mkdir test&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But first, move back up to the parent directory with &lt;code&gt;cd ..&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeku1rzkwecm96lzseo3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeku1rzkwecm96lzseo3.png" alt="testfolder" width="800" height="141"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to see if your new folder was actually created, type &lt;code&gt;ls&lt;/code&gt;. It will list everything in your current location, and you should see test appearing in the list.&lt;/p&gt;
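&lt;p&gt;The create-then-verify pattern above can be sketched as a tiny script. To keep it safe to run anywhere, this version works in a throwaway scratch directory instead of the real Desktop:&lt;/p&gt;

```shell
# Use a scratch directory so nothing on the real machine is touched.
workdir=$(mktemp -d)
cd "$workdir"

# Create the new folder, then list the contents to confirm it exists.
mkdir test
ls
```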

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgnt9ieh6s4og9n63dfz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgnt9ieh6s4og9n63dfz.png" alt="LS" width="800" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Out of curiosity (being a beginner, it's allowed!), let's check the Desktop too&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvihtadnaldazttyr3o95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvihtadnaldazttyr3o95.png" alt="Desktop" width="734" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's learn how to use these, try this "Real World" sequence in the VS Code Git Bash:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Go to your User folder&lt;/strong&gt;: &lt;strong&gt;cd ~&lt;/strong&gt; (The tilde ~ is a Linux shortcut for your home).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a project folder&lt;/strong&gt;: &lt;strong&gt;mkdir my-linux-practice2&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enter the folder&lt;/strong&gt;: &lt;strong&gt;cd my-linux-practice2&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdh4c44wk45dmfu2h0fz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdh4c44wk45dmfu2h0fz.png" alt="mkdir" width="800" height="187"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a blank file&lt;/strong&gt;: &lt;strong&gt;touch notes.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Verify it's there: &lt;strong&gt;ls&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynzgf76zlf5kjundl1za.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynzgf76zlf5kjundl1za.png" alt="blank file" width="800" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I made use of "-" instead of "_".&lt;br&gt;
Let's make the correction together.&lt;/p&gt;

&lt;p&gt;In Linux, we use the mv command (short for move) to rename files and folders.&lt;/p&gt;

&lt;p&gt;Since I'm currently on the Desktop, I can "move" the folder from the old name to the new name.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Correction Command&lt;/strong&gt;&lt;br&gt;
Type this and press Enter:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;mv linux-practice linux_practice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;How it works&lt;/em&gt;&lt;br&gt;
The mv command follows a simple logic:&lt;br&gt;
&lt;strong&gt;mv [old_name] [new_name]&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;linux-practice&lt;/em&gt;: The folder as it exists now.&lt;br&gt;
&lt;em&gt;linux_practice&lt;/em&gt;: What you want it to be named.&lt;br&gt;
I did not remember the exact folder name, so I used the &lt;strong&gt;ls&lt;/strong&gt; command.&lt;/p&gt;
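&lt;p&gt;A self-contained sketch of the rename (again in a scratch directory, so the folder names here are purely illustrative):&lt;/p&gt;

```shell
workdir=$(mktemp -d)
cd "$workdir"

# The folder with the "wrong" hyphenated name.
mkdir linux-practice

# mv [old_name] [new_name] renames it in place.
mv linux-practice linux_practice

# Only linux_practice should be listed now.
ls
```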

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F13xx41ydpc6khjzr990g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F13xx41ydpc6khjzr990g.png" alt="ls" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And then verified the name change by running the &lt;strong&gt;ls&lt;/strong&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a message&lt;/strong&gt;&lt;br&gt;
We are going to use the &lt;code&gt;echo&lt;/code&gt; command. It literally "echoes" whatever you type back to you, but we are going to use a special symbol &lt;code&gt;&amp;gt;&lt;/code&gt; to tell it to &lt;code&gt;echo&lt;/code&gt; into a file instead.&lt;/p&gt;

&lt;p&gt;Remember to change into the directory that holds the file if you aren't already there:&lt;br&gt;
&lt;strong&gt;cd /c/Users/Admin/Desktop&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;echo "Hello from the Linux terminal!" &amp;gt; notes.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Read your file&lt;/strong&gt;&lt;br&gt;
Now, let's see if the file actually contains that message.&lt;/p&gt;

&lt;p&gt;Type this and press Enter:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;cat notes.txt&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;cat (short for concatenate)&lt;/code&gt; is the standard way to quickly read the contents of a file in the terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdrguvxd9su91tf4u57v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdrguvxd9su91tf4u57v.png" alt="echo" width="800" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What just happened?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;echo "text": Prepared the message.&lt;/li&gt;
&lt;li&gt;&amp;gt;: This is called a Redirect. It took the message that would normally print on the screen and "poured" it into the file.&lt;/li&gt;
&lt;li&gt;cat: Showed you the result.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since we've already used &lt;code&gt;&amp;gt;&lt;/code&gt; to create the file, let's learn how to add a second line to it without deleting the first one.&lt;/p&gt;

&lt;p&gt;Step 1: Go back into your folder (if you left it):&lt;br&gt;
&lt;strong&gt;cd /c/Users/Admin/Desktop&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Add a new line (use the two symbols &amp;gt;&amp;gt;)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&amp;gt; = Overwrites the file (deletes old stuff).&lt;/li&gt;
&lt;li&gt;&amp;gt;&amp;gt; = Appends (adds to the bottom).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;echo "This is my second line!" &amp;gt;&amp;gt; notes.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Read it (check your spelling!)&lt;br&gt;
&lt;strong&gt;cat notes.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncuyn3bqhlcfj4ua1pqv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncuyn3bqhlcfj4ua1pqv.png" alt="secondline" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use echo with &amp;gt;&amp;gt; to add another line:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;echo "This is my third line!" &amp;gt;&amp;gt; notes.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Verify the result:&lt;br&gt;
If you run &lt;code&gt;cat notes.txt&lt;/code&gt; now, you should see:&lt;/p&gt;

&lt;p&gt;Hello from the Linux terminal!&lt;br&gt;
This is my second line!&lt;br&gt;
This is my third line!&lt;/p&gt;
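&lt;p&gt;The whole write-append-read cycle, runnable end to end in a scratch directory:&lt;/p&gt;

```shell
workdir=$(mktemp -d)
cd "$workdir"

# > creates the file (or overwrites it if it already exists).
echo "Hello from the Linux terminal!" > notes.txt

# >> appends to the bottom without touching the earlier lines.
echo "This is my second line!" >> notes.txt
echo "This is my third line!" >> notes.txt

# cat prints all three lines in order.
cat notes.txt
```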

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;:&lt;br&gt;
In Linux, &lt;em&gt;case sensitivity is everything&lt;/em&gt;. If you create a file named Notes.txt (with a capital N) and then try to read notes.txt (with a lowercase n), the terminal thinks they are two completely different files.&lt;br&gt;
When you type a command like cat and press Enter, the terminal will seem to "freeze." This is because cat without a filename waits for you to type something into it.&lt;/p&gt;

&lt;p&gt;Whenever a command gets stuck like that, press &lt;code&gt;Ctrl + C&lt;/code&gt; on your keyboard to kill the process and get your prompt back.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hidden files&lt;/strong&gt;&lt;br&gt;
In Linux, you can make a file "hidden" just by starting its name with a &lt;strong&gt;period (.)&lt;/strong&gt;. These are usually used for important system settings that you don't want to see cluttering your folders.&lt;/p&gt;

&lt;p&gt;Step 1: Create a hidden file (in my folder my_linux_practice2)&lt;br&gt;
Type this and press Enter:&lt;br&gt;
&lt;strong&gt;touch .secret_note.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Try to find it with a normal &lt;code&gt;ls&lt;/code&gt;&lt;br&gt;
Type this:&lt;br&gt;
&lt;strong&gt;ls&lt;/strong&gt;&lt;br&gt;
(Notice that it doesn't show up, even though it's there.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscxl324l18s491pttosq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscxl324l18s491pttosq.png" alt="secretfile" width="800" height="152"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Reveal the hidden files&lt;br&gt;
To see everything (including hidden files), you need to add a "flag" to your command.&lt;br&gt;
Type this:&lt;br&gt;
&lt;strong&gt;ls -a&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-a&lt;/code&gt;: stands for "all".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0r1za65jrkz7aaiaaxf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0r1za65jrkz7aaiaaxf.png" alt="toseeall" width="800" height="130"&gt;&lt;/a&gt;&lt;br&gt;
&lt;code&gt;.&lt;/code&gt;: This represents the current directory you are in.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;..&lt;/code&gt;: This represents the parent directory (one level up).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;.secret_note.txt&lt;/code&gt;: The brand new hidden file!&lt;/p&gt;

&lt;p&gt;As a Data Analyst and Cloud Engineer, I see these "dot files" (like &lt;code&gt;.git&lt;/code&gt; or &lt;code&gt;.env&lt;/code&gt;) all the time in professional work. Knowing how to find them using &lt;code&gt;ls -a&lt;/code&gt; is a critical skill.&lt;/p&gt;
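&lt;p&gt;The hidden-file experiment as one sketch (scratch directory, so the plain &lt;code&gt;ls&lt;/code&gt; really does come back empty):&lt;/p&gt;

```shell
workdir=$(mktemp -d)
cd "$workdir"

# A leading dot makes the file hidden.
touch .secret_note.txt

# A plain ls shows nothing here...
ls

# ...but ls -a ("all") reveals ., .., and the hidden file.
ls -a
```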

&lt;p&gt;To delete files in Linux, we use the &lt;code&gt;rm&lt;/code&gt; command (short for remove).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Be careful&lt;/strong&gt;: unlike Windows, there is no "Recycle Bin" in the Linux terminal. Once you delete a file with this command, it is gone for good!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Deletion Command&lt;/strong&gt;&lt;br&gt;
Type this and press Enter:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;rm .secret_note.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Verify it is gone&lt;br&gt;
Since this was a hidden file, a normal ls wouldn't have shown it anyway. To be 100% sure it’s deleted, you need to use the "all" flag again.&lt;/p&gt;

&lt;p&gt;Type this:&lt;br&gt;
&lt;strong&gt;ls -a&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flir6c4vlv9nnbsc5nwyw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flir6c4vlv9nnbsc5nwyw.png" alt="delete" width="800" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What you should see: notes.txt, plus the system markers &lt;code&gt;.&lt;/code&gt; and &lt;code&gt;..&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;What should be missing: &lt;code&gt;.secret_note.txt&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;While working, for example, with Azure and Power BI, you'll often have folders full of data files. If you ever need to delete an entire folder and everything inside it, you have to add a "recursive" flag:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;rm -r folder_name&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Warning&lt;/strong&gt;: Never type rm -rf /. This tells Linux to "Force Delete Everything starting from the Root," which would erase your entire operating system!&lt;/p&gt;
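&lt;p&gt;A safe way to practice both forms of deletion is inside a scratch directory (the file and folder names below are just examples):&lt;/p&gt;

```shell
workdir=$(mktemp -d)
cd "$workdir"

# Create a throwaway file and a folder with something inside it.
touch report.txt
mkdir old_data
touch old_data/jan.csv

# rm deletes a single file; there is no Recycle Bin, so it is gone for good.
rm report.txt

# rm -r (recursive) deletes a folder and everything inside it.
rm -r old_data

# Only the . and .. markers should remain.
ls -a
```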

&lt;p&gt;&lt;strong&gt;Copying is another fundamental skill&lt;/strong&gt;, especially when you want to create backups of your scripts or data reports before you make changes.&lt;/p&gt;

&lt;p&gt;In Linux, we use the &lt;code&gt;cp&lt;/code&gt; (copy) command.&lt;/p&gt;

&lt;p&gt;Step 1: Create a simple copy&lt;br&gt;
Let's take the existing &lt;code&gt;notes.txt&lt;/code&gt; and create a backup called &lt;code&gt;backup.txt&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Type this and press Enter:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;cp notes.txt backup.txt&lt;/strong&gt;&lt;br&gt;
Step 2: Verify the copy&lt;br&gt;
Now, let's see if you have two separate files.&lt;/p&gt;

&lt;p&gt;Type this:&lt;br&gt;
&lt;strong&gt;ls&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You should see both notes.txt and backup.txt listed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyl4xy5egnwx2udhd0nca.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyl4xy5egnwx2udhd0nca.png" alt="copy" width="800" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copy into a new folder&lt;/strong&gt;&lt;br&gt;
Now let's get a bit more organized. Let's create a "logs" folder and copy the file into it.&lt;/p&gt;

&lt;p&gt;Step 1:&lt;br&gt;
Create the folder:&lt;br&gt;
&lt;strong&gt;mkdir logs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 2:&lt;br&gt;
Copy the file into the folder:&lt;br&gt;
&lt;strong&gt;cp notes.txt logs/&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 3:&lt;br&gt;
Check inside the logs folder:&lt;br&gt;
&lt;strong&gt;ls logs&lt;/strong&gt;&lt;br&gt;
This tells ls to look specifically inside the logs directory without you having to &lt;code&gt;cd&lt;/code&gt; into it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4e0c9fu4ch8ghy969my6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4e0c9fu4ch8ghy969my6.png" alt="copytofolder" width="800" height="233"&gt;&lt;/a&gt;&lt;br&gt;
Now the logs folder is no longer empty, it contains the notes.txt file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The "Dot" Trick&lt;/strong&gt;&lt;br&gt;
If you are already inside a folder and want to copy a file from somewhere else into your current spot, you use a &lt;strong&gt;period&lt;/strong&gt; . (which means "here").&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example (Try this on your own)&lt;/strong&gt;:&lt;br&gt;
cp /c/Users/Admin/Desktop/important.txt .&lt;br&gt;
(This translates to: "Copy important.txt from the Desktop to here.")&lt;/p&gt;
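&lt;p&gt;Both copy patterns, file-to-file and file-into-folder, in one runnable sketch (scratch directory and names are illustrative):&lt;/p&gt;

```shell
workdir=$(mktemp -d)
cd "$workdir"

# A small file to work with.
echo "draft" > notes.txt

# cp [source] [destination]: copy to a new file name.
cp notes.txt backup.txt

# Copy the same file into a folder, keeping its name.
mkdir logs
cp notes.txt logs/

# Peek inside logs without cd-ing into it.
ls logs
```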

&lt;p&gt;&lt;strong&gt;Copying a folder&lt;/strong&gt;&lt;br&gt;
Since we are copying a folder (the logs folder) instead of a single file, we need to use a special flag. In Linux, if you try to copy a folder without this flag, the terminal will give you an error saying "omitting directory."&lt;/p&gt;

&lt;p&gt;To copy a directory and everything inside it, we use &lt;strong&gt;-r&lt;/strong&gt; (which stands for recursive).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Copy Directory Commands&lt;/strong&gt;&lt;br&gt;
Step 1: Copy the logs folder to a new name&lt;br&gt;
Type this and press Enter:&lt;br&gt;
&lt;strong&gt;cp -r logs logs_backup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Verify both folders exist&lt;br&gt;
Type this:&lt;br&gt;
&lt;strong&gt;ls -F&lt;/strong&gt;&lt;br&gt;
(The -F flag is a neat trick: it adds a &lt;code&gt;/&lt;/code&gt; to the end of folder names so you can easily tell them apart from files!)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktibb7g5q8tbmdn7izzn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktibb7g5q8tbmdn7izzn.png" alt="copyfolder" width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Copy a file from one folder to another&lt;br&gt;
Let's practice moving things between folders without leaving your current spot. We'll copy the file inside logs into logs_backup but give it a new name.&lt;/p&gt;

&lt;p&gt;Type this:&lt;br&gt;
&lt;strong&gt;cp logs/notes.txt logs_backup/archive_copy.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 4: Check the contents of the backup folder&lt;br&gt;
&lt;strong&gt;ls logs_backup&lt;/strong&gt;&lt;br&gt;
You should now see both &lt;code&gt;notes.txt&lt;/code&gt; and &lt;code&gt;archive_copy.txt&lt;/code&gt; inside that folder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqhrfqufxicazbs13ta3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqhrfqufxicazbs13ta3.png" alt="backup" width="800" height="121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The "Tab" Trick&lt;/strong&gt;&lt;br&gt;
To avoid typos (like lowercase vs. uppercase), try this:&lt;br&gt;
Type &lt;code&gt;cd lin&lt;/code&gt; and then press the Tab key on your keyboard. Git Bash will automatically finish the word &lt;code&gt;linux_practice&lt;/code&gt; for you! It’s like magic and prevents almost all errors. You can try it with the first few letters of your file and folder names.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating Multiple folders&lt;/strong&gt;&lt;br&gt;
Creating multiple folders at once is a huge time-saver for any &lt;strong&gt;Data Analyst&lt;/strong&gt;. Instead of typing &lt;code&gt;mkdir&lt;/code&gt; five separate times, we can do it in one single line.&lt;/p&gt;

&lt;p&gt;In Linux, there are two ways to do this: the Simple List and the Brace Expansion (the "Pro" way).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 1: The Simple List&lt;/strong&gt;&lt;br&gt;
You can simply type &lt;code&gt;mkdir&lt;/code&gt; followed by all the names you want, separated by spaces.&lt;br&gt;
&lt;strong&gt;mkdir Jan Feb Mar Apr May Jun&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F502rwly07adc0ik21mli.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F502rwly07adc0ik21mli.png" alt="multiplefolders" width="800" height="319"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Result: 6 new folders appear instantly.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 2: Brace Expansion {} (The Power Move)&lt;/strong&gt;&lt;br&gt;
This is how engineers create hundreds of folders in a second. It uses curly brackets to tell Linux: "Take this prefix and attach all these options to it."&lt;/p&gt;

&lt;p&gt;Try creating six months of data folders like this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;mkdir month_{1..6}&lt;/strong&gt;&lt;br&gt;
Result: You will get month_1, month_2, up to month_6.&lt;/p&gt;
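&lt;p&gt;You can verify the brace expansion yourself (note this is a Bash feature, so it works in Git Bash but not in a plain POSIX &lt;code&gt;sh&lt;/code&gt;):&lt;/p&gt;

```shell
workdir=$(mktemp -d)
cd "$workdir"

# Brace expansion: Bash expands month_{1..6} to month_1 ... month_6
# before mkdir ever runs.
mkdir month_{1..6}

# List just the new folders.
ls -d month_*
```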

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv97fow779heh5cc1uqiq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv97fow779heh5cc1uqiq.png" alt="method2" width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 3: Nested Folders (The -p Flag)&lt;/strong&gt;&lt;br&gt;
Sometimes you want to create a folder inside a folder that doesn't exist yet (like a file path). If you just try mkdir Project/Data, it will fail. You need the -p (parents) flag.&lt;/p&gt;

&lt;p&gt;Try this to build a full project structure in one go:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;mkdir -p Analytics_Project/{customer,date,region,price}&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdzptjnsmtobvi2kpavp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdzptjnsmtobvi2kpavp.png" alt="third" width="800" height="70"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What happened?&lt;/strong&gt; &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It created the main folder Analytics_Project.&lt;/li&gt;
&lt;li&gt;Inside it, it created 4 sub-folders: customer, date, region, and price.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To see your beautiful new structure without clicking around your Desktop, use the "Recursive List" command:&lt;br&gt;
&lt;strong&gt;ls -R Analytics_Project&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbi49zgwim5z28ovipbkf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbi49zgwim5z28ovipbkf.png" alt="list" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhv7ed30x5fi1vhzbvj5x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhv7ed30x5fi1vhzbvj5x.png" alt="checking" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating multiple folders, knowing &lt;em&gt;how to clean them up&lt;/em&gt; is just as important. In Linux, there are two main ways to delete a directory, depending on whether it has files inside it or not.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 1: The "Safety" Way (rmdir)&lt;/strong&gt;&lt;br&gt;
If a folder is completely empty, you use rmdir (remove directory). This is safe because Linux will refuse to run the command if there is even one tiny file inside, preventing accidental data loss.&lt;/p&gt;

&lt;p&gt;Try it on one of your empty month folders:&lt;br&gt;
&lt;strong&gt;rmdir Jan&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 2: The "Force" Way (rm -r)&lt;/strong&gt;&lt;br&gt;
If the folder has files, scripts, or other folders inside it, rmdir won't work. You must use the rm command with the -r (recursive) flag. This tells Linux to go "inside" the folder and delete everything first, then delete the folder itself.&lt;/p&gt;

&lt;p&gt;Try it on your Analytics_Project folder:&lt;br&gt;
&lt;strong&gt;rm -r Analytics_Project&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;A Critical Warning for Cloud Engineers and Data Analysts&lt;/strong&gt;&lt;br&gt;
In Linux, there is no "Recycle Bin." Once you run rm -r, that data is gone forever.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The "Danger" Command&lt;/strong&gt;:&lt;br&gt;
You will often see &lt;code&gt;rm -rf&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-r&lt;/code&gt;: Recursive (deletes folders).&lt;br&gt;
&lt;code&gt;-f&lt;/code&gt;: Force (doesn't ask "Are you sure?").&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Always double-check your current location with pwd before running a recursive delete&lt;/strong&gt;.&lt;/p&gt;
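&lt;p&gt;That warning can be turned into a habit. A minimal sketch of check-then-delete (the folder is recreated here so the snippet runs on its own):&lt;/p&gt;

```shell
# Recreate the folder structure from earlier so this runs standalone.
mkdir -p Analytics_Project/{customer,date,region,price}

pwd                      # 1. confirm WHERE you are
ls Analytics_Project     # 2. confirm WHAT you are about to delete
rm -r Analytics_Project  # 3. only then delete recursively
```

&lt;p&gt;If you want an extra safety net, &lt;code&gt;rm -ri&lt;/code&gt; asks for confirmation on every item before removing it.&lt;/p&gt;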

&lt;p&gt;&lt;strong&gt;The "Real-life" Challenge&lt;/strong&gt;&lt;br&gt;
We are going to simulate a real-world task: &lt;em&gt;organizing a project&lt;/em&gt;.&lt;br&gt;
Step 1: Create the Workspace&lt;br&gt;
Create a main project folder and two sub-folders in one line:&lt;br&gt;
&lt;strong&gt;mkdir -p My_Analytics/{raw_data,final_reports}&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Create a "Data" file&lt;br&gt;
Let’s create a dummy data file inside the raw_data folder:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;touch My_Analytics/raw_data/sales_2026.csv&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Copy the file (The "Backup" Move)&lt;br&gt;
Before you edit data, you should always have a copy. Use &lt;strong&gt;cp&lt;/strong&gt; (copy):&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;cp My_Analytics/raw_data/sales_2026.csv My_Analytics/raw_data/sales_backup.csv&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 4: Move the file (The "Organization" Move)&lt;br&gt;
Now, let's pretend you finished your analysis. Move the original file to the final_reports folder using mv (move):&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;mv My_Analytics/raw_data/sales_2026.csv My_Analytics/final_reports/&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Check Your Work&lt;br&gt;
Use the Tree view (or recursive list) to see your organized project:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ls -R My_Analytics&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mtl8j0z60dnb9el05dg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mtl8j0z60dnb9el05dg.png" alt="reallife" width="800" height="298"&gt;&lt;/a&gt;&lt;/p&gt;
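&lt;p&gt;The four challenge steps above can be collected into one runnable sequence:&lt;/p&gt;

```shell
mkdir -p My_Analytics/{raw_data,final_reports}    # Step 1: create the workspace
touch My_Analytics/raw_data/sales_2026.csv        # Step 2: create a dummy data file

# Step 3: back up before editing
cp My_Analytics/raw_data/sales_2026.csv My_Analytics/raw_data/sales_backup.csv

# Step 4: move the "finished" file into final_reports
mv My_Analytics/raw_data/sales_2026.csv My_Analytics/final_reports/

ls -R My_Analytics                                # check your work
```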

&lt;p&gt;Since you've already learned how to create, move, and delete folders, the next "superpower" for a Data Analyst is being able to see what is inside a file without needing to open a heavy application like Excel or Notepad.&lt;/p&gt;

&lt;p&gt;Imagine you just downloaded a massive dataset from an Azure storage bucket. You need to know if it's the right data before you start your analysis.&lt;/p&gt;

&lt;p&gt;Practice 1: Creating a "Data" File&lt;br&gt;
First, let's create a file with some actual content inside it so we have something to look at.&lt;/p&gt;

&lt;p&gt;Type this command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;echo -e "ID,Name,Sales\n1,Rahimah,500\n2,Ibrahim,750\n3,Dickson,300" &amp;gt; sales_data.csv&lt;/strong&gt;&lt;br&gt;
What this does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;echo prints text.&lt;/li&gt;
&lt;li&gt;-e allows for "new lines" (\n).&lt;/li&gt;
&lt;li&gt;&amp;gt; saves that text into a new file called sales_data.csv.&lt;/li&gt;
&lt;/ul&gt;
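&lt;p&gt;A small aside: &lt;code&gt;printf&lt;/code&gt; is a portable alternative to &lt;code&gt;echo -e&lt;/code&gt;. It interprets \n the same way in every shell, so scripts behave consistently. A sketch producing the same file:&lt;/p&gt;

```shell
# printf interprets \n without needing a flag like echo's -e.
printf 'ID,Name,Sales\n1,Rahimah,500\n2,Ibrahim,750\n3,Dickson,300\n' > sales_data.csv

# Count the lines we just wrote (header + 3 data rows).
wc -l sales_data.csv
```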

&lt;p&gt;Practice 2: The "Peeking" Commands&lt;br&gt;
Now, let's look at the data using three different tools.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The cat Command (Concatenate)
This dumps the entire file onto your screen.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;cat sales_data.csv&lt;/strong&gt;&lt;br&gt;
Use this when: The file is small (like a configuration file).&lt;/p&gt;

&lt;p&gt;2. The head Command&lt;br&gt;
This only shows the first few lines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;head -n 2 sales_data.csv&lt;/strong&gt;&lt;br&gt;
Use this when: You have a 1-million-row CSV and just want to see the column headers.&lt;/p&gt;

&lt;p&gt;3. The tail Command&lt;br&gt;
This shows the very end of the file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;tail -n 1 sales_data.csv&lt;/strong&gt;&lt;br&gt;
Use this when: You want to see the most recent entry in a log file.&lt;/p&gt;
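&lt;p&gt;The two commands also combine well: piping &lt;code&gt;head&lt;/code&gt; into &lt;code&gt;tail&lt;/code&gt; lets you view a specific slice of a file. A sketch showing rows 2-3 of our file (the file is recreated here so the snippet is self-contained):&lt;/p&gt;

```shell
printf 'ID,Name,Sales\n1,Rahimah,500\n2,Ibrahim,750\n3,Dickson,300\n' > sales_data.csv

# First take lines 1-3, then keep only the last 2 of those:
# the result is rows 2-3, skipping the header.
head -n 3 sales_data.csv | tail -n 2
```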

&lt;p&gt;Practice 3: The "Search" Power (grep)&lt;br&gt;
This is the command Data Analysts use most. It searches for a specific word inside a file. Suppose you only want to see the sales for "Ibrahim".&lt;/p&gt;

&lt;p&gt;Type this:&lt;br&gt;
&lt;strong&gt;grep "Ibrahim" sales_data.csv&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;: It will ignore everything else and only show you the row for Ibrahim.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4ilhqxm96x6gj9vp2e1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4ilhqxm96x6gj9vp2e1.png" alt="head" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;
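&lt;p&gt;A few grep flags worth memorizing once the basic search works (again, the file is recreated so the snippet runs on its own):&lt;/p&gt;

```shell
printf 'ID,Name,Sales\n1,Rahimah,500\n2,Ibrahim,750\n3,Dickson,300\n' > sales_data.csv

grep -i "ibrahim" sales_data.csv   # -i: ignore upper/lower case
grep -n "Ibrahim" sales_data.csv   # -n: prefix the matching line number
grep -c ","       sales_data.csv   # -c: count matching lines
grep -v "Ibrahim" sales_data.csv   # -v: invert (every row EXCEPT Ibrahim)
```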

&lt;p&gt;It actually created a .csv file, amazing!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F131lwcyboqut90ewqcog.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F131lwcyboqut90ewqcog.png" alt="excel" width="800" height="661"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgt7jyioilbdjrk5t9rl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgt7jyioilbdjrk5t9rl.png" alt="excel" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters for you&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Analysts love finding needles in haystacks&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;Imagine searching a 2GB file for a specific Transaction ID. In Excel, your computer freezes. In the CLI, &lt;code&gt;grep 'TXN_9984' data.csv&lt;/code&gt; finds it in milliseconds. &lt;br&gt;
When you are working as a Data Analyst, you might have thousands of logs. Instead of scrolling through them, you use &lt;code&gt;grep "Error"&lt;/code&gt; to find exactly where something went wrong in your Azure pipeline.&lt;/p&gt;

&lt;p&gt;Final Command for today: &lt;strong&gt;history&lt;/strong&gt;&lt;br&gt;
Want to see a list of everything you've done in this session?&lt;/p&gt;

&lt;p&gt;Type this in your Git Bash:&lt;br&gt;
&lt;strong&gt;history&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vmj8wzzgozhy4lo6kwo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vmj8wzzgozhy4lo6kwo.png" alt="history" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  CONCLUSION
&lt;/h2&gt;

&lt;p&gt;Mastering the Linux Terminal is more than just learning a list of commands; it is about adopting a mindset of efficiency and automation. In an era where data volumes are exploding and cloud infrastructure is the standard, the ability to navigate a server, audit a massive CSV, or automate a directory structure is what separates a traditional analyst from a modern data professional.&lt;/p&gt;

&lt;p&gt;As you move from the GUI to the CLI, you aren't just changing how you interact with your computer—you are expanding your capacity to handle "Big Data" that others simply cannot touch. Whether you are building pipelines in Azure, managing repositories on GitHub, or cleaning raw data for Power BI, the terminal is the bridge that connects your analytical skills to the global tech ecosystem.&lt;/p&gt;

&lt;p&gt;The Challenge&lt;br&gt;
Don't let these commands sit idle. Open your Git Bash today and navigate to your projects using only your keyboard. &lt;/p&gt;

</description>
      <category>dataanalytics</category>
      <category>linux</category>
      <category>bash</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Creating Azure Resources via Azure CLI: Part 3</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Thu, 19 Mar 2026 15:28:22 +0000</pubDate>
      <link>https://forem.com/rahimah_dev/creating-azure-resources-via-azure-cli-part-3-237e</link>
      <guid>https://forem.com/rahimah_dev/creating-azure-resources-via-azure-cli-part-3-237e</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In Part 3 of this series, I continue building a fully functional Azure environment using only the Azure CLI, expanding on the resources created in earlier parts. This phase focuses on working with storage, securing sensitive data, and implementing operational best practices.&lt;/p&gt;

&lt;p&gt;You’ll see how to create and interact with a storage account, upload and manage files in Blob Storage, securely handle secrets using Azure Key Vault, and explore cost management strategies. Along the way, &lt;em&gt;I also highlight real-world challenges like RBAC permission barriers and subscription limitations&lt;/em&gt;, and I'll show you how to navigate them effectively as a cloud engineer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Storage Account &amp;amp; Upload Files
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create the Storage Account&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This assigns a globally unique name and creates a storage account in Azure.&lt;/p&gt;

&lt;p&gt;This is very important because storage accounts provide the scalable backend object storage required for storing logs, backups, container apps, and static assets.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Locally Redundant Storage (LRS)&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;Reliability&lt;/strong&gt; - It keeps 3 synchronized copies of your data within a single data center.&lt;br&gt;
Protection: Guards against hardware failures (e.g., disk crash).&lt;br&gt;
Limitation: If the entire data center goes down, your data may be lost.&lt;br&gt;
Use case: Cheapest option, good for non-critical data.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Zone-Redundant Storage (ZRS)&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;Reliability&lt;/strong&gt; - It replicates your data across multiple availability zones in the same region.&lt;br&gt;
Protection: Survives data center failure (since zones are separate).&lt;br&gt;
Advantage: It has a higher availability than LRS.&lt;br&gt;
Use case: Applications that need high availability within a region.&lt;br&gt;
&lt;em&gt;Geo-Redundant Storage (GRS)&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;Reliability&lt;/strong&gt; - It copies your data to a secondary region far away from the primary region.&lt;br&gt;
Protection: Survives regional outages (e.g., natural disasters).&lt;br&gt;
Bonus: Some versions allow read access to the secondary region, that is, RA-GRS.&lt;br&gt;
Use case: Critical data requiring disaster recovery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick Comparison&lt;/strong&gt;&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Type&lt;/th&gt;&lt;th&gt;Copies Location&lt;/th&gt;&lt;th&gt;Protection Level&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;LRS&lt;/td&gt;&lt;td&gt;Single data center&lt;/td&gt;&lt;td&gt;Low&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;ZRS&lt;/td&gt;&lt;td&gt;Multiple zones (same region)&lt;/td&gt;&lt;td&gt;Medium&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;GRS&lt;/td&gt;&lt;td&gt;Different regions&lt;/td&gt;&lt;td&gt;High&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;em&gt;Run the following command block to create a storage account&lt;/em&gt;:&lt;br&gt;
&lt;strong&gt;$STORAGE_NAME="labstoragefeb26"&lt;br&gt;
az storage account create &lt;br&gt;
--name $STORAGE_NAME &lt;br&gt;
--resource-group azurecli-lab-rg&lt;br&gt;
--kind StorageV2 &lt;br&gt;
--location koreacentral &lt;br&gt;
--sku Standard_LRS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fulajmv12b9twry00oot9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fulajmv12b9twry00oot9.png" alt="storageaccount" width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsyyo46uo4lrndrb3etg0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsyyo46uo4lrndrb3etg0.png" alt="sa" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkj7bpdemcurkocels7m4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkj7bpdemcurkocels7m4.png" alt="sa" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2q6s0oqtb0o6nv0honx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2q6s0oqtb0o6nv0honx.png" alt="sa" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2462svlqhf0418neclu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2462svlqhf0418neclu.png" alt="sa" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create a Blob Container&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This creates a logical folder/bucket inside the storage account.&lt;/p&gt;

&lt;p&gt;It's needed because you cannot upload blobs directly to the storage account root; they must live inside a container.&lt;/p&gt;

&lt;p&gt;Security — containers provide access boundaries allowing RBAC segmentation.&lt;/p&gt;

&lt;p&gt;To upload a file into an Azure Blob Storage container, use the &lt;strong&gt;az storage blob upload command&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;az storage container create &lt;br&gt;
--name lab-files &lt;br&gt;
--account-name $STORAGE_NAME &lt;br&gt;
--auth-mode login&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;auth-mode login&lt;/code&gt; : This tells Azure to use your current &lt;em&gt;az login credentials&lt;/em&gt; rather than an access key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbjrjt8277c4cbpgjykk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbjrjt8277c4cbpgjykk.png" alt="blobcontainer" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Upload a file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This locally scaffolds a text file, then pushes it to Azure and stores it as a blob.&lt;br&gt;
It's needed because Azure Blob Storage is the most common storage mechanism for handling files (like images, docs, and backups).&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Operational Excellence — automated asset and artifact uploads.&lt;br&gt;
Run:&lt;br&gt;
&lt;strong&gt;az storage blob upload &lt;br&gt;
--account-name $STORAGE_NAME &lt;br&gt;
--container-name lab-files &lt;br&gt;
--name sample.txt &lt;br&gt;
--file "C:\Users\Admin\Documents\New folder\sample.txt"&lt;br&gt;
--auth-mode login&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkw09shss94ugnmtfsmn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkw09shss94ugnmtfsmn.png" alt="blobupload" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: This error shows that I hit a permissions wall.&lt;/p&gt;

&lt;p&gt;The error "You do not have the required permissions" happens because, in Azure, being the "Owner" of a subscription doesn't automatically give you the right to upload data inside a storage account when using &lt;code&gt;auth-mode login&lt;/code&gt;. You need a specific Data Plane role.&lt;br&gt;
The solution is to assign the "Storage Blob Data Contributor" Role&lt;br&gt;
You need to give yourself permission to handle the actual data inside the blobs. &lt;br&gt;
&lt;strong&gt;RBAC (Role-Based Access Control)&lt;/strong&gt;: Azure separates "Management" (creating the storage account) from "Data" (uploading files).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Storage Blob Data Contributor role is exactly what the error message is asking for so you can upload, read, and delete blobs&lt;/em&gt;.&lt;br&gt;
If you decide to go the Role Assignment route, note that propagation after running the command takes about 1–2 minutes. Role assignments in Azure can take a moment to "settle" across the global network.&lt;br&gt;
I will go with the alternative (The "Key" Method).&lt;br&gt;
I don't want to deal with roles right now, so I can bypass this by using the storage account's &lt;strong&gt;Access Key&lt;/strong&gt; instead of my login:&lt;br&gt;
Run this command first to get the key:&lt;br&gt;
&lt;strong&gt;$ACCOUNT_KEY=$(az storage account keys list --account-name $STORAGE_NAME --query "[0].value" -o tsv)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Then, Upload using the key:&lt;br&gt;
&lt;strong&gt;az storage blob upload &lt;br&gt;
--account-name $STORAGE_NAME &lt;br&gt;
--account-key $ACCOUNT_KEY &lt;br&gt;
--container-name lab-files &lt;br&gt;
--name sample.txt &lt;br&gt;
--file "C:\Users\Admin\Documents\New folder\sample.txt"&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cexcbgbd5joivit3dcb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cexcbgbd5joivit3dcb.png" alt="blobupload" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That success message is exactly what I was looking for! The &lt;strong&gt;100.0000%&lt;/strong&gt; progress bar and the JSON output confirm that your file, &lt;code&gt;sample.txt&lt;/code&gt;, has been successfully uploaded to the lab-files container in Azure.&lt;/p&gt;

&lt;p&gt;After the upload using the Access Key method, the next logical step and good practice is to confirm the file is visible in the cloud. This takes us to the next step.&lt;br&gt;
&lt;strong&gt;Step 4: List blobs in the container&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This queries the Azure storage API for the contents of the container.&lt;/p&gt;

&lt;p&gt;It's needed as a verification step to ensure your push succeeded.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Operational Excellence — automated verification.&lt;br&gt;
Run this command to list all blobs in your container:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;az storage blob list &lt;br&gt;
--account-name $STORAGE_NAME &lt;br&gt;
--account-key $ACCOUNT_KEY &lt;br&gt;
--container-name lab-files &lt;br&gt;
--output table&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftob5hzjktndpn8rn6xdp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftob5hzjktndpn8rn6xdp.png" alt="verify" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If this was just a test, you should know how to remove the file to keep your storage environment clean. To achieve this, run:&lt;br&gt;
&lt;strong&gt;az storage blob delete &lt;br&gt;
--account-name $STORAGE_NAME &lt;br&gt;
--account-key $ACCOUNT_KEY &lt;br&gt;
--container-name lab-files &lt;br&gt;
--name sample.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By using the $ACCOUNT_KEY, I &lt;em&gt;bypassed&lt;/em&gt; the complicated RBAC permissions (roles) that were blocking me earlier. &lt;br&gt;
While roles are safer for large teams, using the key is the fastest way to get things done in a personal lab environment such as this one.&lt;/p&gt;
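&lt;p&gt;For completeness, the Role Assignment route described earlier looks roughly like this. This is a sketch, not the exact commands from this lab: it assumes the $STORAGE_NAME variable and the azurecli-lab-rg resource group from the earlier steps, and recent CLI versions return your object ID under &lt;code&gt;id&lt;/code&gt; (older ones use &lt;code&gt;objectId&lt;/code&gt;):&lt;/p&gt;

```shell
# Scope the role to just this storage account, not the whole subscription.
SCOPE=$(az storage account show --name $STORAGE_NAME --resource-group azurecli-lab-rg --query id -o tsv)

# Your own Azure AD object ID ("objectId" on older CLI versions).
ME=$(az ad signed-in-user show --query id -o tsv)

# Grant yourself the data-plane role the error message asked for.
az role assignment create --role "Storage Blob Data Contributor" --assignee $ME --scope $SCOPE
```

&lt;p&gt;Remember the 1–2 minute propagation delay before retrying the upload with &lt;code&gt;--auth-mode login&lt;/code&gt;.&lt;/p&gt;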

&lt;h2&gt;
  
  
  Store Secrets in Azure Key Vault
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create a Key Vault&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This provisions an Azure Key Vault instance.&lt;/p&gt;

&lt;p&gt;It's needed because credentials, connection strings, certificates, and API keys must &lt;em&gt;never&lt;/em&gt; be hard-coded. They belong in a Key Vault.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Security — secure secret storage.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Recall&lt;/em&gt; that a Key Vault stores certificates, keys and secrets.&lt;br&gt;
 Also note that key vault names must be globally unique, just like storage accounts.&lt;br&gt;
Run this command block:&lt;br&gt;
&lt;strong&gt;$KV_NAME="labkvrahfeb26"&lt;br&gt;
az keyvault create &lt;br&gt;
  --name $KV_NAME &lt;br&gt;
  --resource-group azurecli-lab-rg &lt;br&gt;
  --location koreacentral &lt;br&gt;
  --enable-rbac-authorization false&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcw0fv44olsdnpy2qcgfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcw0fv44olsdnpy2qcgfd.png" alt="kvcreate" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y2kwhqn3xffjsbhmu2d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y2kwhqn3xffjsbhmu2d.png" alt="create" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpqdm7p4s0qr3k3poqmo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpqdm7p4s0qr3k3poqmo.png" alt="created" width="800" height="343"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 2: Store a secret&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It ingests the &lt;code&gt;db-password&lt;/code&gt; secret securely.&lt;/p&gt;

&lt;p&gt;It's needed because it provides safe retrieval instead of storing cleartext credentials locally.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Security — ensuring a robust secrets lifecycle.&lt;br&gt;
Run this command:&lt;br&gt;
&lt;strong&gt;az keyvault secret set &lt;br&gt;
  --vault-name $KV_NAME &lt;br&gt;
  --name db-password &lt;br&gt;
  --value 'SuperSecure@pass123'&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgecv67zfqgzcjxmvtvus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgecv67zfqgzcjxmvtvus.png" alt="az kv" width="800" height="386"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 3: Retrieve the secret&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It fetches the decrypted plaintext secret value securely using your currently authenticated user.&lt;/p&gt;

&lt;p&gt;It proves the CLI can securely obtain values from Key Vault.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Security — programmatic retrieval over TLS.&lt;br&gt;
Run the following command:&lt;br&gt;
&lt;strong&gt;az keyvault secret show &lt;br&gt;
  --vault-name $KV_NAME &lt;br&gt;
  --name db-password &lt;br&gt;
  --query value &lt;br&gt;
  --output tsv&lt;/strong&gt; &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9592xzocqqp49p79zmbb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9592xzocqqp49p79zmbb.png" alt="secretshow" width="800" height="164"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 4: Assign the VM a Managed Identity to access the vault&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This step configures Azure AD to grant the VM identity-based permissions to extract secrets.&lt;/p&gt;

&lt;p&gt;It's needed because it allows background services in the VM to get the secret later without logging in themselves.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Security — Zero-credential deployment utilizing Managed Identities.&lt;br&gt;
Run this command:&lt;br&gt;
&lt;strong&gt;az vm identity assign &lt;br&gt;
  --resource-group azurecli-lab-rg &lt;br&gt;
  --name lab-vm &lt;br&gt;
$PRINCIPAL_ID=$(az vm show &lt;br&gt;
  --resource-group azurecli-lab-rg &lt;br&gt;
  --name lab-vm &lt;br&gt;
  --query identity.principalId &lt;br&gt;
  --output tsv)&lt;br&gt;
az role assignment create &lt;br&gt;
  --role 'Key Vault Secrets User' &lt;br&gt;
  --assignee $PRINCIPAL_ID &lt;br&gt;
  --scope $(az keyvault show --name $KV_NAME --query id --output tsv)&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fis6qlaygpprowtsjesgg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fis6qlaygpprowtsjesgg.png" alt="managedidentity" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg0drht1svcsjumd9dy0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg0drht1svcsjumd9dy0.png" alt="mid" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitor Costs &amp;amp; Set a Budget Alert
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Get your subscription ID&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This queries the active subscription ID programmatically.&lt;/p&gt;

&lt;p&gt;Why It's Needed&lt;br&gt;
Budget alerts are scoped to a subscription, so we need the ID to explicitly target the current active account.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Cost Optimization — understanding which account you are billing against.&lt;br&gt;
Run: &lt;strong&gt;SUB_ID=$(az account show --query id --output tsv)&lt;br&gt;
echo "Subscription: $SUB_ID"&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgblk7azbsczrw7f6e92a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgblk7azbsczrw7f6e92a.png" alt="subID" width="800" height="69"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create a $10 monthly budget with an alert at 80%&lt;/strong&gt;&lt;br&gt;
This sets a strict ceiling for consumption using native Azure limits.&lt;/p&gt;

&lt;p&gt;Why It's Needed&lt;br&gt;
Setting alerts &lt;em&gt;prevents surprise billing&lt;/em&gt; caused by rogue or misconfigured resources running unmonitored.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Cost Optimization — preventative guard-rails ensuring fiscal control.&lt;br&gt;
Setting a budget is the "responsible" side of being a &lt;strong&gt;Cloud Engineer&lt;/strong&gt;. It proves you aren't just building things; you're managing the &lt;strong&gt;Cost Management&lt;/strong&gt; aspect of the cloud, which is a major focus for businesses in 2026.&lt;br&gt;
Run:&lt;br&gt;
&lt;strong&gt;az consumption budget create `&lt;br&gt;
  --budget-name lab-budget `&lt;br&gt;
  --amount 10 `&lt;br&gt;
  --category Cost `&lt;br&gt;
  --time-grain Monthly `&lt;br&gt;
  --start-date (Get-Date -Format "yyyy-MM-01") `&lt;br&gt;
  --end-date 2026-12-31 `&lt;br&gt;
  --resource-group azurecli-lab-rg `&lt;br&gt;
  --notifications '[{"enabled":true,"operator":"GreaterThan","threshold":80,"contactEmails":["you@example.com"]}]'&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw73q9okcv9cwrlxpamkg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw73q9okcv9cwrlxpamkg.png" alt="unrecognizedargument:notification" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The output shows a notification-argument error, so we'll run another command. This simpler version avoids the JSON/notification issues that broke the earlier attempt.&lt;br&gt;
&lt;strong&gt;$subId = az account show --query id -o tsv&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;az consumption budget create `&lt;br&gt;
  --budget-name "lab-budget" `&lt;br&gt;
  --amount 10 `&lt;br&gt;
  --category Cost `&lt;br&gt;
  --time-grain Monthly `&lt;br&gt;
  --start-date "2026-03-01" `&lt;br&gt;
  --end-date "2026-12-31" `&lt;br&gt;
  --subscription $subId&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h3tymvscx9l763i0amn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h3tymvscx9l763i0amn.png" alt="solution" width="800" height="195"&gt;&lt;/a&gt;&lt;br&gt;
This displays an RBACAccessDenied error, but this screenshot:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8gulealr7ek3wwz8z5g4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8gulealr7ek3wwz8z5g4.png" alt="ownership" width="800" height="354"&gt;&lt;/a&gt; &lt;br&gt;
confirms ownership of the subscription.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4mxwhqxheubylqmzxym.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4mxwhqxheubylqmzxym.png" alt="invalid budget config" width="800" height="296"&gt;&lt;/a&gt;&lt;br&gt;
The screenshot above shows an Invalid budget configuration error.&lt;/p&gt;

&lt;p&gt;The CLI keeps failing with different error types.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhxkd0pxlzf9guoi0r9f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhxkd0pxlzf9guoi0r9f.png" alt="Invalid budget config" width="800" height="398"&gt;&lt;/a&gt;&lt;br&gt;
I ran into another Invalid budget configuration error.&lt;/p&gt;

&lt;p&gt;I confirmed the subscription is active and enabled:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyouaaav0kte93acwy0o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyouaaav0kte93acwy0o.png" alt="enabled sub" width="800" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3vcvseoe1k44n4tovly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3vcvseoe1k44n4tovly.png" alt="active" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I ran another command, which finally confirmed the root cause.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfd80jrrgxh8vjcisnr8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfd80jrrgxh8vjcisnr8.png" alt="confirmation" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the output:&lt;br&gt;
&lt;strong&gt;"quotaId": "FreeTrial_2014-09-01",&lt;br&gt;
"spendingLimit": "On"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This means I am using a Free Trial subscription with the spending limit ON.&lt;br&gt;
Why the budget creation keeps failing:&lt;br&gt;
Azure does NOT allow budget creation via CLI for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Free Trial&lt;/strong&gt; subscriptions&lt;/li&gt;
&lt;li&gt;Subscriptions with spending &lt;strong&gt;limit ON&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s why I kept getting the Invalid budget configuration and the misleading&lt;br&gt;
RBACAccessDenied errors.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Important insight&lt;/em&gt;&lt;br&gt;
Since the subscription already HAS a built-in spending cap, it automatically shuts down when the credits run out.&lt;br&gt;
So Azure assumes: “You don’t need a budget — we already limit your spending.”&lt;/p&gt;

&lt;p&gt;These are the available options: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OPTION 1 — Use the Azure Portal to create the budget manually when the CLI/API is blocked (this works sometimes).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Go to:&lt;br&gt;
Cost Management → Budgets → + Add&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OPTION 2 — Upgrade the subscription &lt;strong&gt;(guaranteed solution)&lt;/strong&gt;.
Click “Upgrade” at the top of the portal; this removes the spending limit and converts the subscription to Pay-As-You-Go,
allowing budgets and alerts with full CLI support.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The ONLY blocker is the Free Trial restriction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommendation&lt;/strong&gt;: For learning (especially Azure CLI labs), upgrade the subscription; otherwise you’ll keep hitting hidden limitations like this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Check current resource group costs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This creates a Log Analytics workspace to ingest usage metrics and performance logs later on.&lt;/p&gt;

&lt;p&gt;Why It's Needed&lt;br&gt;
It provides a unified overview and is an essential monitoring dependency for true production readiness.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Operational Excellence — creating the hub for telemetry.&lt;br&gt;
Run:&lt;br&gt;
&lt;strong&gt;az monitor log-analytics workspace create `&lt;br&gt;
  --resource-group azurecli-lab-rg `&lt;br&gt;
  --workspace-name lab-logs `&lt;br&gt;
  --location koreacentral&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm3aj73pl8qvkrbz6101.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm3aj73pl8qvkrbz6101.png" alt="resourcegrpcost" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjk6dwq16abtsm2tzuin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjk6dwq16abtsm2tzuin.png" alt="rgc" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Clean Up &amp;amp; Document Your Work
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Delete the resource group (and everything in it)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It removes the resource group and cascades the deletion to every child resource attached to it (network, compute, and data alike).&lt;/p&gt;

&lt;p&gt;Why it's Needed&lt;br&gt;
Deleting the group is disciplined resource management. &lt;em&gt;Cloud instances incur hourly charges; prompt destruction preserves free-tier credits&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
&lt;strong&gt;Cost Optimization&lt;/strong&gt; — decommission what you no longer use.&lt;br&gt;
Run:&lt;br&gt;
&lt;strong&gt;az group delete --name azurecli-lab-rg --yes --no-wait&lt;br&gt;
az group list --output table&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The first line of the command deletes ALL resources in the group (VM, VNet, Storage, Key Vault), while the second line&lt;br&gt;
verifies the deletion. (Wait a few minutes, then check.)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbg1d7xlyb0yoghsydtbh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbg1d7xlyb0yoghsydtbh.png" alt="delete" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create a project folder and write a README&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It scaffolds standard markdown files documenting everything accomplished here.&lt;/p&gt;

&lt;p&gt;Why It's Needed&lt;br&gt;
It ensures recruiters see exactly what was executed instead of an empty claim regarding Cloud expertise.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Operational Excellence — automated documentation.&lt;br&gt;
Run the following commands:&lt;br&gt;
&lt;strong&gt;mkdir azure-cli-lab; cd azure-cli-lab&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;This creates a new directory (folder)&lt;/em&gt;&lt;br&gt;
(The semicolon (;) is a valid statement separator in PowerShell. It does the exact same thing: it tells the shell to finish the first task and then start the second one.)&lt;/p&gt;
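&lt;p&gt;The separator behaves the same way in a plain shell; a minimal sketch of the chained command (directory name as in the lab):&lt;/p&gt;

```shell
# ';' ends the first statement and starts the second on the same line:
# create the folder, then change into it.
mkdir -p azure-cli-lab; cd azure-cli-lab
pwd    # prints the path of the new directory
```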

&lt;p&gt;&lt;strong&gt;git init&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Initializes a new Git repository in the current directory&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4cmxpe0fx74iyigkwoc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4cmxpe0fx74iyigkwoc.png" alt="git" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To write the README content to a file, run this block:&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;@'&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Azure CLI Cloud Lab
&lt;/h1&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;A complete Azure environment using only the Azure CLI — no portal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources Created
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Resource Group (azurecli-lab-rg in East US)&lt;/li&gt;
&lt;li&gt;Virtual Network (10.0.0.0/16) with Subnet (10.0.1.0/24)&lt;/li&gt;
&lt;li&gt;NSG with SSH (22) and HTTP (80) rules&lt;/li&gt;
&lt;li&gt;Ubuntu VM (Standard_B1s) with Nginx installed&lt;/li&gt;
&lt;li&gt;Storage Account with blob container&lt;/li&gt;
&lt;li&gt;Key Vault with secret &amp;amp; managed identity&lt;/li&gt;
&lt;li&gt;Cost Budget at $10/month with 80% alert&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Commands
&lt;/h2&gt;

&lt;p&gt;az group create, az vm create, az network vnet create, az storage account create, az keyvault create&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;How to provision a full Azure environment from the CLI&lt;/li&gt;
&lt;li&gt;VNet + NSG = the network security foundation&lt;/li&gt;
&lt;li&gt;Key Vault + Managed Identity = zero-credential secret management&lt;/li&gt;
&lt;li&gt;Always delete resources after a lab to avoid charges
&lt;strong&gt;'@ | Set-Content -Path "README.md"&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flv3ihm8kbb743yw53u7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flv3ihm8kbb743yw53u7g.png" alt="readme" width="800" height="500"&gt;&lt;/a&gt;.&lt;br&gt;
&lt;em&gt;The error you see in my screenshot resulted from using a bash command instead of PowerShell.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now that I've "built" the file, let's "see" it. Run this command to read it back in your terminal:&lt;br&gt;
&lt;strong&gt;Get-Content README.md&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkal7992070rcn01cn6fl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkal7992070rcn01cn6fl.png" alt="readme" width="800" height="447"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Notice the README.md is in the azure-cli-lab folder&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Commit and push to GitHub&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This pushes your locally created lab notes to an external hosted tracking service.&lt;/p&gt;

&lt;p&gt;Why It's Needed&lt;br&gt;
A standard version-control workflow for real-world projects and portfolio sharing.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Operational Excellence — tracking version history in remote repos.&lt;br&gt;
Run the following commands:&lt;br&gt;
&lt;strong&gt;git add .&lt;/strong&gt;&lt;br&gt;
Stages all changed files for the next commit.&lt;br&gt;
If you want to be specific, you can name the file: &lt;strong&gt;git add README.md&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;git commit -m 'docs: Azure CLI cloud lab — full environment from scratch'&lt;/strong&gt;&lt;br&gt;
Creates a new commit with all staged changes and the message after -m&lt;br&gt;
&lt;strong&gt;git branch -M main&lt;/strong&gt;&lt;br&gt;
Renames the current branch to main (-M forces the rename even if main already exists).&lt;br&gt;
&lt;strong&gt;git remote add origin https://github.com/YOUR_USERNAME/azure-cli-lab.git&lt;/strong&gt;&lt;br&gt;
Connects your local repository to the remote GitHub repository (replace YOUR_USERNAME with your own username).&lt;br&gt;
&lt;strong&gt;git push -u origin main&lt;/strong&gt;&lt;br&gt;
Uploads your local commits to the remote repository.&lt;br&gt;
The goal is actually sending the box to the cloud.&lt;/p&gt;

&lt;p&gt;The -u "links" your local folder to the GitHub folder forever, so next time you only have to type &lt;code&gt;git push&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fype6osm4d1s3hzqx81kc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fype6osm4d1s3hzqx81kc.png" alt="gitopens" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4w9fwlgwc55emnqcl7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4w9fwlgwc55emnqcl7r.png" alt="git" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uwk5o5u2kvo1v7pzs5u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uwk5o5u2kvo1v7pzs5u.png" alt="verificationcode" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7sasrwyhslncrnpt0qm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7sasrwyhslncrnpt0qm.png" alt="complete" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to make sure there is a "landing pad" waiting for your code on the internet.&lt;br&gt;
Think of it like this - your terminal knows what to send, but GitHub doesn't know where to put it yet.&lt;br&gt;
&lt;strong&gt;Step 1: Create the "Landing Pad" (GitHub Website)&lt;/strong&gt;&lt;br&gt;
Before running the next command, you need to do this manually in your browser:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to github.com.&lt;/li&gt;
&lt;li&gt;Click the + icon in the top right and select &lt;strong&gt;New repository&lt;/strong&gt;.
Name it exactly &lt;strong&gt;azure-cli-lab&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: Do not check "Initialize this repository with a README" (because we already created one in your terminal).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fij1s5xq1tvepjvl39ntn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fij1s5xq1tvepjvl39ntn.png" alt="repo" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scroll to the bottom of the page and click &lt;strong&gt;Create repository&lt;/strong&gt;.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fanmogubo3r7906frnu3m.png" alt="createrepo" width="800" height="528"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2: The "Git Log" Check (Terminal)&lt;/strong&gt;&lt;br&gt;
While setting that up, let's verify that the current branch 'main' already has commits, by running this command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;git log --oneline&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80hjaa0we45cqo9gw795.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80hjaa0we45cqo9gw795.png" alt="boxispacked" width="800" height="454"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 3: Connect and Push&lt;/strong&gt;&lt;br&gt;
Once the GitHub repository is created on the website, I'll run these two final commands to finish the lab:&lt;/p&gt;

&lt;h1&gt;
  
  
  Connect your computer to the web address (Replace YOUR_USERNAME)
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;git remote add origin &lt;a href="https://github.com/rahimahisah17/azure-cli-lab.git" rel="noopener noreferrer"&gt;https://github.com/rahimahisah17/azure-cli-lab.git&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Upload the files
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;git push -u origin main&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftwnj4j1w77ueb1kz2bq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftwnj4j1w77ueb1kz2bq.png" alt="worked" width="800" height="274"&gt;&lt;/a&gt;&lt;br&gt;
The latest screenshot shows total success. I've officially pushed the code from my local machine to the cloud. &lt;strong&gt;Seeing * [new branch] main -&amp;gt; main is the final green light for any developer.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Writing objects&lt;/em&gt;: 100% (3/3) means all 3 parts of your Git snapshot &lt;strong&gt;(the files, the folder info, and the message)&lt;/strong&gt; were uploaded.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;branch main&lt;/code&gt; set up to track 'origin/main' means your computer and GitHub are now "synced." Next time you change your README, you only have to type &lt;strong&gt;git push&lt;/strong&gt;, no extra settings needed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Correction to README
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;I realized I used East US instead of Korea Central. I must also state that I failed to create the budget, and give the reason.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Update the README.md File&lt;/strong&gt;&lt;br&gt;
This command uses a "Here-String" to overwrite your existing file with the new location (Korea Central) and the note about subscription limitations.&lt;br&gt;
Run this block:&lt;br&gt;
&lt;strong&gt;@'&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Azure CLI Cloud Lab
&lt;/h1&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;A complete Azure environment using only the Azure CLI — no portal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources Created
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Resource Group (azurecli-lab-rg in Korea Central)&lt;/li&gt;
&lt;li&gt;Virtual Network (10.0.0.0/16) with Subnet (10.0.1.0/24)&lt;/li&gt;
&lt;li&gt;NSG with SSH (22) and HTTP (80) rules&lt;/li&gt;
&lt;li&gt;Ubuntu VM (Standard_B1s) with Nginx installed&lt;/li&gt;
&lt;li&gt;Storage Account with blob container&lt;/li&gt;
&lt;li&gt;Key Vault with secret &amp;amp; managed identity&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;[!IMPORTANT]&lt;br&gt;
&lt;strong&gt;Cost Budget Note:&lt;/strong&gt; The $10 monthly budget could not be created in this specific environment due to Azure subscription limitations (e.g., Free Trial or specific tenant restrictions).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Key Commands
&lt;/h2&gt;

&lt;p&gt;az group create, az vm create, az network vnet create, az storage account create, az keyvault create&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Regional differences: Migrated deployment to Korea Central.&lt;/li&gt;
&lt;li&gt;API Constraints: Budgeting tools are restricted on certain subscription types.&lt;/li&gt;
&lt;li&gt;Always delete resources after a lab to avoid charges.
&lt;strong&gt;'@ | Set-Content -Path "README.md"&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yd0ozod1npdt5nzenu4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yd0ozod1npdt5nzenu4.png" alt="stage" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Update Your Resource Group Location&lt;/strong&gt;&lt;br&gt;
Since I decided to change the location to Korea Central, I ran this to update my Azure environment to match the new documentation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;az group update --name "azurecli-lab-rg" --set location="koreacentral"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozwuexa0raoaybj1n56m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozwuexa0raoaybj1n56m.png" alt="ameend" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Amend and Force Push to GitHub&lt;/strong&gt;&lt;br&gt;
Since I already pushed a version of this project, I will "amend" the previous commit so the history stays clean and professional.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;git add README.md&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Overwrite the last commit message&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;git commit --amend -m "docs: update location to Korea Central and note budget limit"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Force push to the cloud&lt;/strong&gt;&lt;br&gt;
(This is required because we are changing history that was already uploaded.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;git push origin main --force&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4500trdsbf9gob5xqjjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4500trdsbf9gob5xqjjc.png" alt="git" width="800" height="122"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this part of the project, I successfully extended my Azure CLI lab by implementing storage, security, and operational workflows. I created a storage account and container, uploaded and verified files, and explored two authentication approaches: RBAC and access keys. I also set up Azure Key Vault to &lt;strong&gt;securely store and retrieve secrets&lt;/strong&gt;, and configured a managed identity for secure, credential-free access.&lt;/p&gt;

&lt;p&gt;While attempting to implement &lt;strong&gt;cost monitoring&lt;/strong&gt;, I encountered Azure subscription limitations that prevented budget creation via the CLI; this is an important real-world insight into how Free Trial subscriptions behave.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Overall, this phase reinforced key cloud principles of secure data handling, identity-based access, cost awareness, and environment cleanup. It also demonstrated that beyond just running commands, understanding Azure’s underlying constraints and design decisions is critical for building reliable, production-ready solutions.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>azurecli</category>
      <category>devops</category>
      <category>cloudcomputing</category>
      <category>azure</category>
    </item>
    <item>
      <title>Data-Driven Energy Insights: Analyzing National Fuel Markets with Power BI &amp; DAX</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Mon, 16 Mar 2026 17:35:13 +0000</pubDate>
      <link>https://forem.com/rahimah_dev/data-driven-energy-insights-analyzing-national-fuel-markets-with-power-bi-dax-3f1c</link>
      <guid>https://forem.com/rahimah_dev/data-driven-energy-insights-analyzing-national-fuel-markets-with-power-bi-dax-3f1c</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why Fuel Market Data Matters&lt;/strong&gt;&lt;br&gt;
In a global economy, understanding energy consumption and fuel distribution is more than just looking at numbers; it’s about identifying economic patterns and infrastructure needs. I recently completed a deep-dive analysis into the National Fuel Market in Argentina, transforming raw datasets into an interactive, actionable intelligence report.&lt;/p&gt;

&lt;p&gt;This project wasn't just about visualization; it was about building a robust data model that could handle complex regional variables and provide clear insights for stakeholders.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Technical Challenge
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;From Raw Data to Insights&lt;/strong&gt;&lt;br&gt;
Every data project starts with a hurdle. For this analysis, the focus was on ensuring data integrity across various provinces and fuel types.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Data Architecture &amp;amp; Modeling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I implemented a &lt;code&gt;Star Schema&lt;/code&gt; to ensure the report remained performant despite the dataset's size. By separating fact tables (sales and prices) from dimension tables (geography, time, and fuel categories), I ensured that the report remains scalable for future data updates. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdu89s37jzv9k6bddf8f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdu89s37jzv9k6bddf8f.png" alt="model" width="800" height="607"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Advanced Analytics with DAX&lt;/strong&gt;&lt;br&gt;
To go beyond basic arithmetic, I utilized DAX (Data Analysis Expressions) to create dynamic measures. These allowed for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Year-over-Year (YoY) Growth: Tracking how consumption shifted across different quarters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Regional Market Share: Identifying which provinces dominated specific fuel categories.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Price Volatility Tracking: Visualizing how price fluctuations impacted sales volume.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
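&lt;p&gt;&lt;em&gt;For illustration, a YoY growth measure along these lines could drive the first bullet. The table and column names (Sales[Volume], 'Date'[Date]) are hypothetical stand-ins, not the project's actual model&lt;/em&gt;:&lt;/p&gt;

```dax
-- Hypothetical model: a Sales fact table with a Volume column,
-- related to a marked 'Date' dimension table.
Total Volume = SUM ( Sales[Volume] )

Volume PY =
    CALCULATE ( [Total Volume], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )

-- DIVIDE returns BLANK instead of an error when there is no prior-year data.
YoY Growth % =
    DIVIDE ( [Total Volume] - [Volume PY], [Volume PY] )
```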

&lt;p&gt;&lt;strong&gt;3. I customized a wireframe using PowerPoint&lt;/strong&gt;.&lt;br&gt;
Into the wireframe, I inserted shapes, resized them, and added fill colors of my choice. I imported the icons from &lt;code&gt;Flaticon&lt;/code&gt;. &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpzere20nfyvpmwaihr7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpzere20nfyvpmwaihr7.png" alt="ppt" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. I used a high-end UI/UX design&lt;/strong&gt; &lt;br&gt;
This demonstrated the effective use of buttons and bookmarks to enhance interactivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Insights Discovered
&lt;/h2&gt;

&lt;p&gt;The data revealed several compelling trends that would be vital for any policy-maker or private stakeholder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3668p94ptlqhfs0co02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3668p94ptlqhfs0co02.png" alt="Images" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnt1fdhapwcp3n5zwifd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnt1fdhapwcp3n5zwifd.png" alt="Images" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Regional Concentration&lt;/strong&gt;: Highlighting specific hubs where infrastructure investment would yield the highest ROI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consumption Shifts&lt;/strong&gt;: Identifying the transition points between traditional fuels and emerging alternatives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Market Resilience&lt;/strong&gt;: How various regions reacted to pricing shifts over the analyzed period.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Interactive Experience
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Static reports only tell half the story. To truly explore the data, I’ve published the full interactive version of the dashboard&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.novypro.com/profile_about/1770707382971x906585397690459400?Popup=memberProject&amp;amp;Data=1772657215286x758082894723003800" rel="noopener noreferrer"&gt;View the &lt;strong&gt;Interactive&lt;/strong&gt; Report on NovyPro&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rahimahisah17/National-Fuel-Market-Analysis" rel="noopener noreferrer"&gt;Explore the Full Technical Repository on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Turning Data into Strategy&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;As a Data Analyst, my goal is always to bridge the gap between technical complexity and business strategy. This project reinforced the importance of clean modeling and the power of interactive storytelling in data&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>businessintelligence</category>
      <category>energysector</category>
      <category>portfolioproject</category>
      <category>dataanalysis</category>
    </item>
    <item>
      <title>Creating Azure Resources via Azure CLI: Part 2</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Thu, 12 Mar 2026 15:52:12 +0000</pubDate>
      <link>https://forem.com/rahimah_dev/creating-azure-resources-via-azure-cli-part-2-10m9</link>
      <guid>https://forem.com/rahimah_dev/creating-azure-resources-via-azure-cli-part-2-10m9</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Adaptability is the core of DevOps. After navigating subscription-level constraints in our previous VM setup, it became clear that understanding 'where' and 'how' you deploy is just as vital as the 'what.' &lt;br&gt;
In this second part of our Azure CLI series, we apply those hard-won lessons to build a faster, more agile deployment workflow. From &lt;strong&gt;optimizing&lt;/strong&gt; locations to &lt;strong&gt;selecting&lt;/strong&gt; the right VM sizes on the fly, this guide provides a professional blueprint for mastering Azure resources through the command line.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Here, I skipped Install Azure CLI and headed straight to verification&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Verify the installation&lt;/strong&gt;.&lt;br&gt;
To verify the installation, run the Azure CLI command &lt;strong&gt;az --version&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e3jqap1vmqe4a5k9h79.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e3jqap1vmqe4a5k9h79.png" alt="version" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This displays details of the Azure CLI version in use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Login&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run the Azure command &lt;strong&gt;az login&lt;/strong&gt;. This opens a browser for interactive authentication and sets your active Azure subscription context.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76i5zuys0fcdl8lg8hf4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76i5zuys0fcdl8lg8hf4.png" alt="login" width="800" height="468"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Notice&lt;/em&gt; I selected my account and then &lt;strong&gt;Continue&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwbfdpqc34gl3uedxk40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwbfdpqc34gl3uedxk40.png" alt="login" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note I did not have to sign in because the account was set in the previous exercise, using the &lt;strong&gt;az account set&lt;/strong&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;: To confirm my active subscription, I entered 1, because it is the only active subscription I have. If you have more than one subscription, enter the number that corresponds to the subscription you intend to use.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5vemrkoq38s4yercu1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5vemrkoq38s4yercu1x.png" alt="setacct" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;We can go ahead and provision the Resource Group now&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Resource Group
&lt;/h2&gt;

&lt;p&gt;In this section, we aim to create a resource group to act as the logical container for the entire lab environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: &lt;strong&gt;Set a variable for the resource group&lt;/strong&gt;. &lt;br&gt;
Store the &lt;strong&gt;resource group name&lt;/strong&gt; and &lt;strong&gt;region&lt;/strong&gt; in PowerShell variables. This is highly recommended to prevent typos throughout the rest of the lab and to make the script easily reusable.&lt;/p&gt;

&lt;p&gt;$RG="azurecli-lab-rg"&lt;br&gt;
This sets the shell variable RG so later commands can reference it with &lt;strong&gt;$RG&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;$LOCATION="koreacentral"&lt;br&gt;
This sets the shell variable LOCATION so later commands can reference it with &lt;strong&gt;$LOCATION&lt;/strong&gt;.&lt;/p&gt;
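&lt;p&gt;&lt;em&gt;If you are working in Bash rather than PowerShell, the assignments drop the leading $ (a sketch; the values match this lab)&lt;/em&gt;:&lt;/p&gt;

```shell
# Bash equivalent of the lab variables: no $ on assignment, no spaces around =.
RG="azurecli-lab-rg"
LOCATION="koreacentral"

# Later commands interpolate them the same way, e.g.:
echo "az group create --name $RG --location $LOCATION"
```

&lt;p&gt;&lt;em&gt;Either way, defining the names once means a typo can only happen in one place&lt;/em&gt;.&lt;/p&gt;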

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: &lt;strong&gt;Create the Resource Group&lt;/strong&gt;&lt;br&gt;
Create a named resource group in &lt;strong&gt;Korea Central&lt;/strong&gt; this time around. All resources in this lab will be placed here for easy cleanup.&lt;/p&gt;

&lt;p&gt;This is needed because Azure requires every resource to live inside a resource group. &lt;em&gt;They make it easy to manage, monitor, and delete everything together at the end of the lab&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Operational Excellence — grouping related resources together is a best practice for &lt;strong&gt;manageability&lt;/strong&gt; and &lt;strong&gt;cost tracking&lt;/strong&gt;.&lt;br&gt;
Run the command: &lt;strong&gt;az group create --name $RG --location $LOCATION&lt;/strong&gt;.&lt;br&gt;
&lt;em&gt;This creates a resource group called "$RG" — a logical container for all the Azure resources in this lab&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7n1rpfv3xjjqgse604a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7n1rpfv3xjjqgse604a.png" alt="created" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It was clearly created: the properties are listed, and &lt;em&gt;notice&lt;/em&gt; the &lt;strong&gt;ProvisioningState&lt;/strong&gt; reads &lt;strong&gt;"Succeeded"&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build a Virtual Network (VNet) &amp;amp; Subnet
&lt;/h2&gt;

&lt;p&gt;This is aimed at creating a secure private network for your Azure resources to communicate on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: &lt;strong&gt;Create the Virtual Network&lt;/strong&gt;&lt;br&gt;
Here, you will create a &lt;strong&gt;Virtual Network&lt;/strong&gt; with a broad 10.0.0.0/16 IP address space.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is necessary because VMs and other infrastructure need a secure, isolated private network to communicate with each other.&lt;br&gt;
Creating an isolated network boundary is the foundational step of cloud security&lt;/em&gt;.&lt;br&gt;
Run the command: &lt;strong&gt;az network vnet create --address-prefix 10.0.0.0/16 --resource-group $RG --name lab-vnet --location $LOCATION&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87x6iyqeksyn0dx9md1r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87x6iyqeksyn0dx9md1r.png" alt="vnet" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;It took just 4 seconds for this virtual network to provision. That's amazing!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: &lt;strong&gt;Create a Subnet&lt;/strong&gt;&lt;br&gt;
This would carve out a &lt;strong&gt;smaller 10.0.1.0/24 piece (subnet)&lt;/strong&gt; of the VNet specifically for your VMs.&lt;/p&gt;

&lt;p&gt;Why this is relevant&lt;br&gt;
&lt;em&gt;Segmenting networks allows you to apply different routing and firewall rules to different types of resources&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This aspect of security is known as network segmentation.&lt;br&gt;
Run the CLI command: &lt;strong&gt;az network vnet subnet create --resource-group $RG --vnet-name lab-vnet --name lab-subnet --address-prefix 10.0.1.0/24&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6oad23duj9vs5p6mh9v2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6oad23duj9vs5p6mh9v2.png" alt="subnet" width="800" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;: &lt;strong&gt;Create a Network Security Group (NSG)&lt;/strong&gt;&lt;br&gt;
A Network Security Group acts as a virtual firewall.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is important because an NSG's default rules deny inbound traffic from the internet while allowing all outbound traffic. We need explicit NSG rules to open specific ports in the firewall.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Controlling traffic flow with firewalls is a basic security requirement&lt;/strong&gt;.&lt;br&gt;
Run the command: &lt;strong&gt;az network nsg create --resource-group $RG --name lab-nsg&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpbssz5rvs8yilpb3xlt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpbssz5rvs8yilpb3xlt.png" alt="nsg" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;: &lt;strong&gt;Open ports 22 (SSH) &amp;amp; 80 (HTTP)&lt;/strong&gt;&lt;br&gt;
Let's add inbound rules allowing SSH (port 22) and HTTP (port 80) access from the internet.&lt;br&gt;
The reason for this action is that you'll need SSH to log in and configure the server, and HTTP so users can view the web page.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This explicitly defines inbound access, following the principle of least privilege&lt;/em&gt;.&lt;br&gt;
&lt;strong&gt;First&lt;/strong&gt; ensure your terminal knows what $RG is by running this line first: &lt;strong&gt;$RG="azurecli-lab-rg"&lt;/strong&gt;&lt;br&gt;
Then &lt;strong&gt;create the SSH rule&lt;/strong&gt; using this command:&lt;br&gt;
&lt;strong&gt;az network nsg rule create `&lt;br&gt;
  --resource-group $RG `&lt;br&gt;
  --nsg-name lab-nsg `&lt;br&gt;
  --name AllowSSH `&lt;br&gt;
  --priority 1000 `&lt;br&gt;
  --destination-port-ranges 22 `&lt;br&gt;
  --access Allow `&lt;br&gt;
  --protocol Tcp `&lt;br&gt;
  --direction Inbound&lt;/strong&gt;&lt;br&gt;
Wait for the first command to finish. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxe02vlm2qnweemvt8ua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxe02vlm2qnweemvt8ua.png" alt="ssh" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then &lt;strong&gt;create the HTTP rule&lt;/strong&gt; by running this separate block:&lt;br&gt;
&lt;strong&gt;az network nsg rule create `&lt;br&gt;
  --resource-group $RG `&lt;br&gt;
  --nsg-name lab-nsg `&lt;br&gt;
  --name AllowHTTP `&lt;br&gt;
  --priority 1010 `&lt;br&gt;
  --destination-port-ranges 80 `&lt;br&gt;
  --access Allow `&lt;br&gt;
  --protocol Tcp `&lt;br&gt;
  --direction Inbound&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Notice the backtick (`) serves as a line-continuation character in PowerShell because the command is long. Leave one space before the backtick, then press Enter&lt;/em&gt;.&lt;/p&gt;
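&lt;p&gt;&lt;em&gt;Since the two rules differ only in name, port, and priority, they can also be generated in a loop. This Bash sketch is a dry run: it only prints the az commands so you can review them first; remove the echo to execute them for real (which requires an authenticated az session)&lt;/em&gt;:&lt;/p&gt;

```shell
# Dry run: print the two NSG rule commands instead of executing them.
RG="azurecli-lab-rg"

for rule in "AllowSSH 22 1000" "AllowHTTP 80 1010"; do
  set -- $rule              # split "name port priority" into $1 $2 $3
  name=$1; port=$2; priority=$3
  echo az network nsg rule create \
    --resource-group "$RG" \
    --nsg-name lab-nsg \
    --name "$name" \
    --priority "$priority" \
    --destination-port-ranges "$port" \
    --access Allow --protocol Tcp --direction Inbound
done
```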

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5vgv83o5pw4xyzywljc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5vgv83o5pw4xyzywljc.png" alt="http" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5&lt;/strong&gt;: &lt;strong&gt;Attach NSG to Subnet&lt;/strong&gt;&lt;br&gt;
This enforces the firewall rules (NSG) at the subnet boundary.&lt;/p&gt;

&lt;p&gt;It's needed because applying the NSG to the subnet ensures that &lt;strong&gt;any VM&lt;/strong&gt; created in that subnet automatically inherits those exact firewall rules, thereby &lt;strong&gt;protecting the entire subnet&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Security — subnet-level application of security controls.&lt;br&gt;
Run this block of commands:&lt;br&gt;
&lt;strong&gt;az network vnet subnet update `&lt;br&gt;
  --resource-group $RG `&lt;br&gt;
  --vnet-name lab-vnet `&lt;br&gt;
  --name lab-subnet `&lt;br&gt;
  --network-security-group lab-nsg&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Remember to run the resource group variable first if you restart the terminal&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuelp57knhmu7ftlaa3s6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuelp57knhmu7ftlaa3s6.png" alt="nsg" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Provision a Linux Virtual Machine
&lt;/h2&gt;

&lt;p&gt;Here, you will create an Ubuntu VM with a public IP inside your VNet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: &lt;strong&gt;Allocate a Public IP&lt;/strong&gt;&lt;br&gt;
This allocates a static public IP address in Azure.&lt;/p&gt;

&lt;p&gt;It's needed because without a public IP, the VM can only be accessed internally through the VNet or a VPN. You need this to reach your web server from your browser.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Reliability — using a Static IP ensures the address does not change upon reboot.&lt;/em&gt;&lt;br&gt;
Run this command: &lt;br&gt;
&lt;strong&gt;az network public-ip create `&lt;br&gt;
  --resource-group $RG `&lt;br&gt;
  --name lab-public-ip `&lt;br&gt;
  --allocation-method Static `&lt;br&gt;
  --sku Basic&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwg77ck2ifoffq46lkka1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwg77ck2ifoffq46lkka1.png" alt="nsg" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Notice that, due to an error, I had to switch the SKU to "Standard". Pay attention to error messages&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkq7nhxn0sv72gijqqibc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkq7nhxn0sv72gijqqibc.png" alt="nsg" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: &lt;strong&gt;Create the VM&lt;/strong&gt;&lt;br&gt;
This creates a burstable B-series Ubuntu VM with auto-generated SSH keys and connects it to the existing subnet and firewall.&lt;/p&gt;

&lt;p&gt;It's necessary because this is the actual cloud compute instance that will run your web application code.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Performance Efficiency — selecting an appropriately sized VM for your workload (a small burstable B-series size for dev/test)&lt;/em&gt;.&lt;br&gt;
Run this command: &lt;strong&gt;az vm create `&lt;br&gt;
--resource-group azurecli-lab-rg `&lt;br&gt;
--name lab-vm `&lt;br&gt;
--image Ubuntu2204 `&lt;br&gt;
--size Standard_B2s_v2 `&lt;br&gt;
--location koreacentral `&lt;br&gt;
--admin-username azureuser `&lt;br&gt;
--generate-ssh-keys `&lt;br&gt;
--vnet-name lab-vnet `&lt;br&gt;
--subnet lab-subnet `&lt;br&gt;
--public-ip-address lab-public-ip `&lt;br&gt;
--nsg lab-nsg&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sqm5r1di3wu0e6ipxj2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sqm5r1di3wu0e6ipxj2.png" alt="vm" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Notice that this time around I did not have to change anything, because I am now aware of the limitations that come with the subscription, and the &lt;strong&gt;PowerState reads "running"&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;: &lt;strong&gt;Retrieve the public IP&lt;/strong&gt;&lt;br&gt;
This will filter the Azure API response to return just the IP address string.&lt;/p&gt;

&lt;p&gt;It's important because you'll need this IP to SSH into the machine and to test the web application.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Operational Excellence — automated retrieval of resource attributes avoids manual portal lookups&lt;/em&gt;.&lt;br&gt;
Run the command: &lt;strong&gt;az network public-ip show `&lt;br&gt;
--resource-group azurecli-lab-rg `&lt;br&gt;
--name lab-public-ip `&lt;br&gt;
--query ipAddress `&lt;br&gt;
--output tsv&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubbpgzkwd031tx03dcmb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubbpgzkwd031tx03dcmb.png" alt="Ip" width="800" height="451"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Notice the error occurred because I mistakenly omitted the "-rg" suffix&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;: &lt;strong&gt;Verify the VM is running&lt;/strong&gt;&lt;br&gt;
This queries the VM status and displays it in a clean table format.&lt;/p&gt;

&lt;p&gt;It's needed because you always verify provisioning success before attempting connections.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Operational Excellence — verification and monitoring&lt;/em&gt;.&lt;br&gt;
Run the command: &lt;strong&gt;az vm show `&lt;br&gt;
--resource-group azurecli-lab-rg `&lt;br&gt;
--name lab-vm `&lt;br&gt;
--show-details `&lt;br&gt;
--query '{Name:name, State:powerState, IP:publicIps}' `&lt;br&gt;
--output table&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsa9bj99nrxmu00sy5a1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsa9bj99nrxmu00sy5a1.png" alt="running" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5&lt;/strong&gt;: &lt;strong&gt;SSH into your VM &amp;amp; install Nginx&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This step logs into the VM over the internet via SSH, installs the Nginx package using APT, and starts the service.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Why It's Needed&lt;/em&gt;&lt;br&gt;
A fresh VM is blank. Nginx serves as the web server to test our HTTP port 80 firewall rule.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Pillar Connection&lt;/em&gt;&lt;br&gt;
Operational Excellence — bootstrap scripts or userdata are typically used to automate this step.&lt;/p&gt;
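&lt;p&gt;&lt;em&gt;The commands behind the screenshots below follow the standard Ubuntu pattern; this is a sketch, where PUBLIC_IP stands for the address returned in Step 3&lt;/em&gt;:&lt;/p&gt;

```shell
# From your workstation, using the key pair az vm create generated:
ssh azureuser@PUBLIC_IP

# Then, on the VM itself:
sudo apt update
sudo apt install -y nginx
sudo systemctl status nginx   # confirm the service shows "active (running)"
```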

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5kjebxyw491umseykci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5kjebxyw491umseykci.png" alt="nginx" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F34k8w71ec4t5tq6zzlh9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F34k8w71ec4t5tq6zzlh9.png" alt="ubuntu" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr07lgx3s199rqd9cuj8r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr07lgx3s199rqd9cuj8r.png" alt="ubuntu" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjhb1j55xbxqf4o076rk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjhb1j55xbxqf4o076rk.png" alt="ubuntu" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fic4bvpmj5wa46cl0ac10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fic4bvpmj5wa46cl0ac10.png" alt="ubuntu" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can go ahead and verify the provisioned resources in the Azure Portal. This will help you appreciate resource creation via the Azure CLI. It's very fast and efficient&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Your Turn to Command the Cloud
&lt;/h2&gt;

&lt;p&gt;Stepping out of the comfort zone of the Azure Portal and into the CLI is more than just a technical shift; it’s a mindset shift. By following this guide, you’ve moved from being a "user" of the cloud to someone who truly "architects" it.&lt;/p&gt;

&lt;p&gt;Don't be discouraged if you hit errors along the way. Every "SkuNotAvailable" or "InvalidParameter" is just a signal that you're learning how the machine actually thinks. The more you practice, the more these commands will feel like a second language.&lt;/p&gt;

&lt;p&gt;I’d love to hear from you! Let's go to the comment section👇👇&lt;/p&gt;

</description>
      <category>cloudcomputing</category>
      <category>infrastructure</category>
      <category>azurecli</category>
      <category>automation</category>
    </item>
    <item>
      <title>Microsoft Azure Management Tasks: Manage Tags and Locks</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Thu, 12 Mar 2026 10:57:23 +0000</pubDate>
      <link>https://forem.com/rahimah_dev/microsoft-azure-management-tasks-manage-tags-and-locks-475c</link>
      <guid>https://forem.com/rahimah_dev/microsoft-azure-management-tasks-manage-tags-and-locks-475c</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In a fast-paced cloud environment, visibility and protection are the difference between seamless operations and costly downtime. As a cloud-focused professional, I understand that managing resources isn't just about deployment; it's about governance. &lt;br&gt;
&lt;em&gt;This project demonstrates my ability to implement Azure Resource Governance by applying strategic metadata via tags for cost center tracking and enforcing Resource Locks to prevent accidental deletions of mission-critical infrastructure&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;If you’ve completed the previous exercises in the Microsoft Azure Management Tasks series, you’ve added a subnet to a virtual network, made changes to a virtual machine, and worked with an Azure storage account. The final set of tasks for this guided project focuses on working with &lt;strong&gt;tags&lt;/strong&gt; and &lt;strong&gt;resource locks&lt;/strong&gt; to help manage and monitor your environment. During this exercise you’ll go back into each of the areas you’ve already worked in to add tags, locks, or a combination of both.&lt;/p&gt;

&lt;p&gt;This exercise should take approximately &lt;strong&gt;5&lt;/strong&gt; minutes to complete.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;Pleased with your progress so far, the Azure admin hopes that you can wrap a few things up to help with monitoring and protecting resources. They want to know that someone can’t accidentally get rid of the virtual machine that’s running as an FTP server, and they want a quick way to see what department is using resources and the resource’s purpose.&lt;/p&gt;

&lt;h2&gt;
  
  
  Manage tags and locks on VMs
&lt;/h2&gt;

&lt;p&gt;Adding tags to resources is a quick way to be able to group and organize resources. Tags can be added at different levels, giving you the ability to organize and group resources at a level that makes sense for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add tags to a virtual machine
&lt;/h2&gt;

&lt;p&gt;You’ll start by adding a pair of tags to the virtual machine. One tag will be to identify the purpose of the virtual machine and the other will be to indicate the department the machine supports.&lt;/p&gt;

&lt;p&gt;1.Login to Microsoft Azure at &lt;a href="https://portal.azure.com" rel="noopener noreferrer"&gt;https://portal.azure.com&lt;/a&gt;&lt;br&gt;
2.From the Azure portal home page, in the search box, enter &lt;strong&gt;virtual machines&lt;/strong&gt;.&lt;br&gt;
3.Select &lt;strong&gt;virtual machines&lt;/strong&gt; under services.&lt;br&gt;
4.Select the &lt;strong&gt;guided-project-vm&lt;/strong&gt; virtual machine.&lt;br&gt;
5.From the menu pane, select &lt;strong&gt;Tags&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4j6mirngbc9o3h5a7nw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4j6mirngbc9o3h5a7nw.png" alt="tags" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6.On one line for &lt;strong&gt;Name&lt;/strong&gt; enter &lt;code&gt;Department&lt;/code&gt; and for &lt;strong&gt;Value&lt;/strong&gt; enter &lt;code&gt;Customer Service&lt;/code&gt;&lt;br&gt;
7.On the next line, for &lt;strong&gt;Name&lt;/strong&gt; enter &lt;code&gt;Purpose&lt;/code&gt; and for &lt;strong&gt;Value&lt;/strong&gt; enter &lt;code&gt;FTP Server&lt;/code&gt;.&lt;br&gt;
8.Select &lt;strong&gt;Apply&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7igsbv3w2r766pgoe64s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7igsbv3w2r766pgoe64s.png" alt="apply" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;
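&lt;p&gt;The same pair of tags can also be applied from the Azure CLI. This is a sketch of an alternative to the portal steps above, not part of the guided project; it assumes the resource group is &lt;code&gt;guided-project-rg&lt;/code&gt; as in the earlier exercises.&lt;/p&gt;

```shell
# Add the two tags to the VM. The generic --set tags.X=Y syntax merges
# new tags into the existing tag set rather than replacing it.
az vm update \
  --resource-group guided-project-rg \
  --name guided-project-vm \
  --set tags.Department="Customer Service" tags.Purpose="FTP Server"
```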

&lt;p&gt;While you’re working on the virtual machine, it’s a great time to add a resource lock.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add a resource lock to a VM
&lt;/h2&gt;

&lt;p&gt;1.If necessary, expand the &lt;strong&gt;Settings&lt;/strong&gt; submenu.&lt;br&gt;
2.Select &lt;strong&gt;Locks&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe68tbe2iht3bt88seotd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe68tbe2iht3bt88seotd.png" alt="locks" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.Select &lt;strong&gt;+ Add&lt;/strong&gt;.&lt;br&gt;
4.For the name, enter &lt;code&gt;VM-delete-lock&lt;/code&gt;.&lt;br&gt;
5.For the &lt;strong&gt;Lock type&lt;/strong&gt;, select &lt;strong&gt;Delete&lt;/strong&gt;.&lt;br&gt;
6.You may enter a note to help remind you why you created the lock.&lt;br&gt;
7.Select &lt;strong&gt;OK&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0ue2dgb0nm6aac6oakj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0ue2dgb0nm6aac6oakj.png" alt="deletelock" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s it. Now the VM is protected from deletion and has tags assigned to help track use. Time to move onto the network.&lt;/p&gt;
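&lt;p&gt;As a rough CLI equivalent of the lock you just created in the portal: the portal's &lt;strong&gt;Delete&lt;/strong&gt; lock type is called &lt;code&gt;CanNotDelete&lt;/code&gt; in the Azure CLI.&lt;/p&gt;

```shell
# Create a delete lock on the VM; --notes is the optional reminder field
# from step 6 above.
az lock create \
  --name VM-delete-lock \
  --lock-type CanNotDelete \
  --resource-group guided-project-rg \
  --resource-name guided-project-vm \
  --resource-type Microsoft.Compute/virtualMachines \
  --notes "Protects the FTP server VM from accidental deletion"
```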

&lt;p&gt;1.Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add tags to network resources
&lt;/h2&gt;

&lt;p&gt;1.From the Azure portal home page, in the search box, enter &lt;strong&gt;virtual networks&lt;/strong&gt;.&lt;br&gt;
2.Select &lt;strong&gt;virtual networks&lt;/strong&gt; under services.&lt;br&gt;
3.Select the &lt;strong&gt;guided-project-vnet&lt;/strong&gt; network.&lt;br&gt;
4.From the menu pane, select &lt;strong&gt;Tags&lt;/strong&gt;.&lt;br&gt;
&lt;strong&gt;Note&lt;/strong&gt;: Notice that now you can select an existing tag to apply or add a new tag. You can also select just an existing name or value and create something new in the other field.&lt;/p&gt;

&lt;p&gt;5.For the &lt;strong&gt;Name&lt;/strong&gt; select &lt;strong&gt;Department&lt;/strong&gt;.&lt;br&gt;
6.For the &lt;strong&gt;Value&lt;/strong&gt; enter &lt;code&gt;IT&lt;/code&gt;.&lt;br&gt;
7.Select &lt;strong&gt;Apply&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fted73iuopdze9tyodz6i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fted73iuopdze9tyodz6i.png" alt="vnet tags" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now both the VNet and VM have been organized.&lt;/p&gt;
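&lt;p&gt;This is where tags pay off: once resources are tagged, you can filter across the whole subscription by tag. A quick illustrative query with the Azure CLI (not part of the exercise):&lt;/p&gt;

```shell
# List every resource tagged Department=IT, showing only name and type.
az resource list --tag Department=IT --query "[].{name:name, type:type}" -o table

# The same filter works for any tag, e.g. the Customer Service resources:
az resource list --tag "Department=Customer Service" -o table
```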

&lt;p&gt;Congratulations! You’ve completed this exercise.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By successfully implementing tags and resource locks, I have ensured that the environment is not only organized for departmental billing and monitoring but also hardened against human error. &lt;br&gt;
&lt;em&gt;These foundational governance tasks are essential for maintaining a secure, scalable, and professional Azure footprint&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>azureadmin</category>
      <category>infrastructure</category>
      <category>resourcelocks</category>
    </item>
    <item>
      <title>Microsoft Azure Management Tasks: Control storage access</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Thu, 12 Mar 2026 10:43:14 +0000</pubDate>
      <link>https://forem.com/rahimah_dev/microsoft-azure-management-tasks-control-storage-access-566k</link>
      <guid>https://forem.com/rahimah_dev/microsoft-azure-management-tasks-control-storage-access-566k</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In modern cloud environments, storing files is only part of the job; controlling who can access them, and how, is just as critical. Organizations rely heavily on secure and scalable storage solutions to share data across teams, applications, and services. &lt;br&gt;
&lt;em&gt;In &lt;strong&gt;Microsoft Azure&lt;/strong&gt;, storage accounts, containers, and file shares provide powerful ways to manage data while maintaining strict control over access&lt;/em&gt;. &lt;br&gt;
In this exercise, you’ll explore how to create storage containers and file shares, upload files, manage access tiers, and securely control access using shared access signatures. You’ll complete several tasks related to managing a storage account and its components.&lt;/p&gt;

&lt;p&gt;This exercise should take approximately &lt;strong&gt;12&lt;/strong&gt; minutes to complete.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;The Azure admin wants you to get more familiar with storage accounts, containers, and file shares. They anticipate needing to share an increasing number of files and need someone who is skilled using these services. They’ve given you a task of creating a storage container and a file share and uploading files to both locations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a storage container
&lt;/h2&gt;

&lt;p&gt;1.Login to Microsoft Azure at &lt;a href="https://portal.azure.com" rel="noopener noreferrer"&gt;https://portal.azure.com&lt;/a&gt;&lt;br&gt;
2.From the Azure portal home page, in the search box, enter &lt;strong&gt;storage accounts&lt;/strong&gt;.&lt;br&gt;
3.Select &lt;strong&gt;storage accounts&lt;/strong&gt; under services.&lt;br&gt;
4.Select the storage account you created in the &lt;strong&gt;Prepare&lt;/strong&gt; exercise. The storage account &lt;strong&gt;name&lt;/strong&gt; is the hyperlink to the storage account. &lt;em&gt;(Note: it should be associated with the resource group &lt;strong&gt;guided-project-rg&lt;/strong&gt;).&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uo5f9osii5dpszrfcqp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uo5f9osii5dpszrfcqp.png" alt="storageacct" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5.On the storage account blade, under the &lt;strong&gt;Data storage&lt;/strong&gt; submenu, select &lt;strong&gt;Containers&lt;/strong&gt;.&lt;br&gt;
6.Select &lt;strong&gt;+ Add container&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvtf8c20i75ugfz3fmr8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvtf8c20i75ugfz3fmr8w.png" alt="container" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7.In the &lt;strong&gt;Name&lt;/strong&gt; field, enter &lt;code&gt;storage-container&lt;/code&gt;.&lt;br&gt;
8.Select &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Great! With a storage container created, you can upload a blob to the container. Locate a picture that you can upload, either on your computer or from the internet, and save it locally to make uploading easier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Upload a file to the storage container&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1.Select the storage container you just created. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9f6hjrs5yb4643ocykf3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9f6hjrs5yb4643ocykf3.png" alt="container" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2.Select &lt;strong&gt;Upload&lt;/strong&gt; and upload the file you prepared. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9icyqx2uliyi7h4ux9jl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9icyqx2uliyi7h4ux9jl.png" alt="upload" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.Once the file is ready for upload, select &lt;strong&gt;Upload&lt;/strong&gt;.&lt;/p&gt;
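&lt;p&gt;For reference, the container creation and blob upload above can also be done from the Azure CLI. This is a sketch; &lt;code&gt;&amp;lt;storage-account-name&amp;gt;&lt;/code&gt; stands in for the account you created in the Prepare exercise, and &lt;code&gt;picture.png&lt;/code&gt; is a placeholder for whatever file you chose.&lt;/p&gt;

```shell
# Create the container; --auth-mode login uses your signed-in identity
# instead of the account key.
az storage container create \
  --account-name <storage-account-name> \
  --name storage-container \
  --auth-mode login

# Upload a local file to the container as a blob.
az storage blob upload \
  --account-name <storage-account-name> \
  --container-name storage-container \
  --name picture.png \
  --file ./picture.png \
  --auth-mode login
```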

&lt;p&gt;With the file uploaded, notice that the Access tier is displayed. For something we uploaded just for testing, it doesn’t need to be assigned to the &lt;strong&gt;Hot&lt;/strong&gt; access tier. In the next few steps, you’ll change the access tier for the file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Change the access tier
&lt;/h2&gt;

&lt;p&gt;1.Select the file you just uploaded (the file name is a hyperlink).&lt;br&gt;
2.Select &lt;strong&gt;Change tier&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsyslmb6qnqz1ydqcqz7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsyslmb6qnqz1ydqcqz7.png" alt="changetier" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.Select &lt;strong&gt;Cold&lt;/strong&gt;.&lt;br&gt;
4.Select &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuiduyarba33nkfqy5iwv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuiduyarba33nkfqy5iwv.png" alt="cold" width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: You just changed the access tier for an individual blob or file. To change the default access tier for all blobs within the storage account, you could change it at the storage account level.&lt;/p&gt;

&lt;p&gt;5.Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;

&lt;p&gt;Good job! You’ve successfully uploaded a storage blob and changed the access tier from Hot to Cold. Next, you’ll work with file shares.&lt;/p&gt;
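&lt;p&gt;The same tier change can be made with one CLI command. A sketch, with the storage account and blob names as placeholders:&lt;/p&gt;

```shell
# Move the blob from the Hot to the Cold access tier.
# The Cold tier requires a recent Azure CLI version; older releases
# only offer Hot, Cool, and Archive.
az storage blob set-tier \
  --account-name <storage-account-name> \
  --container-name storage-container \
  --name picture.png \
  --tier Cold \
  --auth-mode login
```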

&lt;h2&gt;
  
  
  Create a file share
&lt;/h2&gt;

&lt;p&gt;1.From the Azure portal home page, in the search box, enter &lt;strong&gt;storage accounts&lt;/strong&gt;.&lt;br&gt;
2.Select &lt;strong&gt;storage accounts&lt;/strong&gt; under services.&lt;br&gt;
3.Select the storage account you created in the &lt;strong&gt;Prepare&lt;/strong&gt; exercise. The storage account &lt;strong&gt;name&lt;/strong&gt; is the hyperlink to the storage account. &lt;em&gt;(Note: it should be associated with the resource group &lt;strong&gt;guided-project-rg&lt;/strong&gt;.)&lt;/em&gt;&lt;br&gt;
4.On the storage account blade, under the &lt;strong&gt;Data storage&lt;/strong&gt; submenu, select &lt;strong&gt;File shares&lt;/strong&gt;.&lt;br&gt;
5.Select + &lt;strong&gt;File share&lt;/strong&gt;.&lt;br&gt;
6.On the Basics tab, in the name field enter &lt;code&gt;file-share&lt;/code&gt;.&lt;br&gt;
7.On the &lt;strong&gt;Backup&lt;/strong&gt; tab, uncheck &lt;strong&gt;Enable backup&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84hyt3mql8x09d7jatba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84hyt3mql8x09d7jatba.png" alt="uncheck" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8.Select &lt;strong&gt;Review + create&lt;/strong&gt;.&lt;br&gt;
9.Select &lt;strong&gt;Create&lt;/strong&gt;.&lt;br&gt;
10.Once the file share is created, select &lt;strong&gt;Upload&lt;/strong&gt;.&lt;br&gt;
11.Upload the same file you uploaded to the blob storage or a different file; it’s up to you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pliinaroxrd5rfhnb55.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pliinaroxrd5rfhnb55.png" alt="newupload" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;12.Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;
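&lt;p&gt;The file share steps above can be sketched in the CLI as well. &lt;code&gt;&amp;lt;storage-account-name&amp;gt;&lt;/code&gt; and the file name are placeholders; the upload command looks up the account key automatically if your identity is allowed to list keys.&lt;/p&gt;

```shell
# Create the file share through the management plane.
az storage share-rm create \
  --storage-account <storage-account-name> \
  --name file-share

# Upload the same picture to the share.
az storage file upload \
  --account-name <storage-account-name> \
  --share-name file-share \
  --source ./picture.png
```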

&lt;p&gt;The next piece of the puzzle is figuring out one way to control access to the files that have been uploaded. Azure has many ways to control access to files, including role-based access control. In this scenario, the Azure admin wants you to use shared access tokens or keys.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a shared access signature token
&lt;/h2&gt;

&lt;p&gt;1.From the Azure portal home page, in the search box, enter &lt;strong&gt;storage accounts&lt;/strong&gt;.&lt;br&gt;
2.Select &lt;strong&gt;storage accounts&lt;/strong&gt; under services.&lt;br&gt;
3.Select the storage account you created in the &lt;strong&gt;Prepare&lt;/strong&gt; exercise.&lt;br&gt;
4.On the storage account blade, select &lt;strong&gt;Storage browser&lt;/strong&gt;.&lt;br&gt;
5.Expand &lt;strong&gt;Blob containers&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Blob container is another name for the storage containers. Items uploaded to a storage container are called &lt;strong&gt;blobs&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;6.Select the storage container you created earlier, &lt;strong&gt;storage-container&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci5saoiowtmwjfx5byqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci5saoiowtmwjfx5byqs.png" alt="blob" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7.Select the ellipses (three dots) on the end of the line for the image you uploaded. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywmzx3yogwagrjpl3bcz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywmzx3yogwagrjpl3bcz.png" alt="ellipse" width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8.Select &lt;strong&gt;Generate SAS&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: When you generate a shared access signature, you set the duration. Once the duration is over, the link stops working. The &lt;strong&gt;Start&lt;/strong&gt; automatically populates with the current date and time.&lt;/p&gt;

&lt;p&gt;9.Set &lt;strong&gt;Signing method&lt;/strong&gt; to &lt;strong&gt;Account key.&lt;/strong&gt;&lt;br&gt;
10.Set &lt;strong&gt;Signing key&lt;/strong&gt; to &lt;strong&gt;Key 1&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: There are two signing keys available. You can choose either one, or create SAS tokens with different durations.&lt;/p&gt;

&lt;p&gt;11.Set &lt;strong&gt;Stored access policy&lt;/strong&gt; to &lt;strong&gt;None&lt;/strong&gt;.&lt;br&gt;
12.Set &lt;strong&gt;Permissions&lt;/strong&gt; to &lt;strong&gt;Read&lt;/strong&gt;.&lt;br&gt;
13.Enter a custom start and expiry time or leave the defaults. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F305yts7q802pwgtzq7yn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F305yts7q802pwgtzq7yn.png" alt="SAS" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;14.Set &lt;strong&gt;Allowed protocols&lt;/strong&gt; to &lt;strong&gt;HTTPS only&lt;/strong&gt;.&lt;br&gt;
15.Select &lt;strong&gt;Generate SAS token and URL&lt;/strong&gt;.&lt;br&gt;
16.Copy the &lt;strong&gt;Blob SAS URL&lt;/strong&gt; and paste it in another window or tab of your browser. It should display the image you uploaded. Keep this tab or window open.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa65x78rqckwhrmnu8gbo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa65x78rqckwhrmnu8gbo.png" alt="url" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: You can configure SAS tokens for file shares and blob containers using the same process.&lt;/p&gt;

&lt;p&gt;17.Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;
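&lt;p&gt;The SAS settings you chose in the portal (read-only, HTTPS only, signed with key 1) map onto CLI flags. A sketch; the account name, key value, and expiry timestamp are illustrative placeholders:&lt;/p&gt;

```shell
# Generate a read-only, HTTPS-only SAS for the blob, signed with key1,
# and print the full blob URL including the token.
az storage blob generate-sas \
  --account-name <storage-account-name> \
  --container-name storage-container \
  --name picture.png \
  --permissions r \
  --expiry 2026-03-13T11:00Z \
  --https-only \
  --account-key "<key1-value>" \
  --full-uri
```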

&lt;p&gt;With the SAS token created, anyone with that link can access the file for the duration that was set when you created the SAS token. However, controlling access to a resource or file is about more than just granting access. It’s also about being able to &lt;strong&gt;revoke access&lt;/strong&gt;. To revoke access with a SAS token, you need to invalidate the token. You invalidate the token by rotating the key that was used.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rotate access keys
&lt;/h2&gt;

&lt;p&gt;1.From the Azure portal home page, in the search box, enter &lt;strong&gt;storage accounts&lt;/strong&gt;.&lt;br&gt;
2.Select &lt;strong&gt;storage accounts&lt;/strong&gt; under services.&lt;br&gt;
3.Select the storage account you created in the &lt;strong&gt;Prepare&lt;/strong&gt; exercise.&lt;br&gt;
4.Expand the &lt;strong&gt;Security + networking&lt;/strong&gt; submenu.&lt;br&gt;
5.Select &lt;strong&gt;Access keys&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgf86876nas2qfhfsq60h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgf86876nas2qfhfsq60h.png" alt="access" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6.For Key 1, select &lt;strong&gt;Rotate key&lt;/strong&gt;.&lt;br&gt;
7.Read and then acknowledge the warning about regenerating the access key by selecting &lt;strong&gt;Yes&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftj3t9b5wq9zxoarroxy0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftj3t9b5wq9zxoarroxy0.png" alt="rotatekey" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8.Once you see the success message for rotating the access key, go back to the window or tab you used to check the SAS token and refresh the page. You should receive an authentication failed error.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2f3k110plt3q7b6c4iwp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2f3k110plt3q7b6c4iwp.png" alt="authentification" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;
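&lt;p&gt;Key rotation is a one-liner in the CLI. A sketch, with the storage account name as a placeholder; note that the CLI calls key 1 &lt;code&gt;primary&lt;/code&gt;:&lt;/p&gt;

```shell
# Regenerate key1 ("primary"); every SAS token signed with the old
# key1 immediately stops working.
az storage account keys renew \
  --resource-group guided-project-rg \
  --account-name <storage-account-name> \
  --key primary
```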

&lt;p&gt;Congratulations! You’ve completed this exercise. Return to Microsoft Learn to continue the guided project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaway
&lt;/h2&gt;

&lt;p&gt;By creating a storage container, uploading files, configuring file shares, and generating shared access signature tokens, you’ve learned how to manage and secure storage resources in &lt;strong&gt;Microsoft Azure&lt;/strong&gt;. You also explored how to revoke access by rotating access keys—an essential security practice. These hands-on tasks highlight the importance of balancing accessibility with strong access control when managing cloud-based storage systems.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cloudcomputing</category>
      <category>blobstorage</category>
      <category>cloudsecurity</category>
    </item>
    <item>
      <title>Microsoft Azure Management Tasks: Manage Virtual Machines</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Thu, 12 Mar 2026 10:23:33 +0000</pubDate>
      <link>https://forem.com/rahimah_dev/microsoft-azure-management-tasks-manage-virtual-machines-44a</link>
      <guid>https://forem.com/rahimah_dev/microsoft-azure-management-tasks-manage-virtual-machines-44a</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Managing cloud infrastructure efficiently is a core skill for modern cloud engineers and DevOps professionals. In &lt;strong&gt;Microsoft Azure&lt;/strong&gt;, virtual machines power everything from development environments to production servers. &lt;br&gt;
&lt;em&gt;This hands-on exercise demonstrates practical VM management tasks, including network migration, scaling compute resources, attaching storage, and automating shutdown, to ensure performance, security, and cost efficiency in real-world cloud environments&lt;/em&gt;.&lt;br&gt;
In this exercise, you’ll complete several tasks related to managing virtual machines.&lt;/p&gt;

&lt;p&gt;This exercise should take approximately &lt;strong&gt;10&lt;/strong&gt; minutes to complete.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;With the network settings updated to support segmenting the Linux virtual machine, you’re ready to manage the virtual machine itself. The first thing the Azure admin asks you to complete is moving the virtual machine to the new subnet you created in the previous exercise (Update Virtual Network).&lt;/p&gt;

&lt;h2&gt;
  
  
  Move the virtual machine network to the new subnet
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Login to Microsoft Azure at &lt;a href="https://portal.azure.com" rel="noopener noreferrer"&gt;https://portal.azure.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;From the Azure portal home page, in the search box, enter &lt;strong&gt;virtual machines&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;virtual machines&lt;/strong&gt; under services.&lt;/li&gt;
&lt;li&gt;Select the &lt;em&gt;guided-project-vm&lt;/em&gt; virtual machine. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flr8s9e7a5uoilaynf4re.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flr8s9e7a5uoilaynf4re.png" alt="vm" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5.If the virtual machine is running, select &lt;strong&gt;Stop&lt;/strong&gt;.&lt;br&gt;
&lt;strong&gt;Note&lt;/strong&gt;: In order to make some configuration changes, such as changing the subnet, the VM will need to be restarted. You can request the change without stopping the VM, but Azure will force a restart before completing the change.&lt;/p&gt;

&lt;p&gt;6.Wait for the &lt;strong&gt;Status&lt;/strong&gt; field to update and show &lt;strong&gt;Stopped (deallocated)&lt;/strong&gt;.&lt;br&gt;
7.Within the &lt;strong&gt;Networking&lt;/strong&gt; subsection of the menu, select &lt;strong&gt;Network settings&lt;/strong&gt;.&lt;br&gt;
8.Select the Network interface / IP configuration hyperlink for the VM. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4hgavej4lomwj1jkmhh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4hgavej4lomwj1jkmhh.png" alt="networking" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;9.On the &lt;strong&gt;IP Configurations&lt;/strong&gt; page, update the &lt;strong&gt;Subnet&lt;/strong&gt; to &lt;em&gt;ftpSubnet&lt;/em&gt;.&lt;br&gt;
10.Select &lt;strong&gt;Apply&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhd8i3w7g69lvbdb50ttj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhd8i3w7g69lvbdb50ttj.png" alt="Ipconfig" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;11.Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;
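&lt;p&gt;For reference, the same move can be sketched with the Azure CLI. The resource group, VM, virtual network, and subnet names below follow this exercise, but the NIC name (&lt;em&gt;guided-project-vm-nic&lt;/em&gt;) and IP configuration name (&lt;em&gt;ipconfig1&lt;/em&gt;) are assumptions; list yours first with &lt;code&gt;az vm nic list&lt;/code&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Stop (deallocate) the VM; the subnet change requires it to be stopped
az vm deallocate --resource-group guided-project-rg --name guided-project-vm

# Point the NIC's IP configuration at the new subnet
az network nic ip-config update \
    --resource-group guided-project-rg \
    --nic-name guided-project-vm-nic \
    --name ipconfig1 \
    --vnet-name guided-project-vnet \
    --subnet ftpSubnet
&lt;/code&gt;&lt;/pre&gt;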

&lt;p&gt;Good job! You’ve migrated the VM from one subnet to another. Remember, the new subnet had specific network security rules applied to help it function as an FTP server. The next task from the Azure admin relates to the computing power of the VM. The admin would like you to vertically scale the machine to increase the computing power.&lt;/p&gt;

&lt;h2&gt;
  
  
  Vertically scale the virtual machine
&lt;/h2&gt;

&lt;p&gt;1.From the Azure portal home page, in the search box, enter &lt;strong&gt;virtual machines&lt;/strong&gt;.&lt;br&gt;
2.Select &lt;strong&gt;virtual machines&lt;/strong&gt; under services.&lt;br&gt;
3.Select the &lt;em&gt;guided-project-vm&lt;/em&gt; virtual machine.&lt;br&gt;
4.Locate the &lt;strong&gt;Availability + scale&lt;/strong&gt; submenu and select &lt;strong&gt;Size&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6vzw36hgul5irrl54o0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6vzw36hgul5irrl54o0.png" alt="size" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5.Select a new VM size, for example &lt;strong&gt;D2s_v5&lt;/strong&gt;. &lt;em&gt;(Note: If you don’t see the same size as shown in this exercise, select something similar.)&lt;/em&gt;&lt;br&gt;
6.Select &lt;strong&gt;Resize&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hn9ndsm9mpds5xw2tqb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hn9ndsm9mpds5xw2tqb.png" alt="resize" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: The VM size may not update in the Azure UI until the VM is restarted.&lt;/p&gt;

&lt;p&gt;7.Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;
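&lt;p&gt;The resize can also be sketched with the Azure CLI; the size name below matches the example in this exercise, so substitute whatever size is available in your region.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# See which sizes the VM can be resized to
az vm list-vm-resize-options --resource-group guided-project-rg --name guided-project-vm --output table

# Resize the VM (it restarts if it is running)
az vm resize --resource-group guided-project-rg --name guided-project-vm --size Standard_D2s_v5
&lt;/code&gt;&lt;/pre&gt;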

&lt;p&gt;Well done. With the VM scaled up to a more robust processor, it can handle the new role it’s being assigned.&lt;/p&gt;

&lt;p&gt;However, the Azure admin now realizes that if the VM is going to serve as an FTP server, it needs more storage, and has asked you to attach a new data disk to the VM.&lt;/p&gt;

&lt;h2&gt;
  
  
  Attach data disks to a virtual machine
&lt;/h2&gt;

&lt;p&gt;1.From the Azure portal home page, in the search box, enter &lt;strong&gt;virtual machines&lt;/strong&gt;.&lt;br&gt;
2.Select &lt;strong&gt;virtual machines&lt;/strong&gt; under services.&lt;br&gt;
3.Select the &lt;em&gt;guided-project-vm&lt;/em&gt; virtual machine.&lt;br&gt;
4.Locate the &lt;strong&gt;Settings&lt;/strong&gt; submenu and select &lt;strong&gt;Disks&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pt84mjx3o2opxknpxsl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pt84mjx3o2opxknpxsl.png" alt="disc" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5.Select &lt;strong&gt;Create and attach a new disk&lt;/strong&gt;.&lt;br&gt;
6.Leave LUN as default.&lt;br&gt;
7.Enter &lt;code&gt;ftp-data-disk&lt;/code&gt; for the &lt;strong&gt;Disk name&lt;/strong&gt;.&lt;br&gt;
8.Leave the Storage type as default.&lt;br&gt;
9.Enter &lt;code&gt;20&lt;/code&gt; for the &lt;strong&gt;Size&lt;/strong&gt;.&lt;br&gt;
10.Select &lt;strong&gt;Apply&lt;/strong&gt; to create the new storage disk and attach the disk to the machine. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzz5chp1w3z32yi1t5dtj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzz5chp1w3z32yi1t5dtj.png" alt="resize" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fokae9lt1cni3txklci6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fokae9lt1cni3txklci6g.png" alt="notice" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;11.Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;
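&lt;p&gt;With the Azure CLI, creating and attaching the same 20&amp;nbsp;GB data disk is a single command (disk and VM names follow this exercise):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Create a new managed disk and attach it to the VM in one step
az vm disk attach \
    --resource-group guided-project-rg \
    --vm-name guided-project-vm \
    --name ftp-data-disk \
    --new \
    --size-gb 20
&lt;/code&gt;&lt;/pre&gt;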

&lt;p&gt;Nice! Now the &lt;strong&gt;VM&lt;/strong&gt; has enough storage to handle some uploads.&lt;/p&gt;

&lt;p&gt;The final thing the Azure admin is concerned about is the cost of running the computer 24 hours a day. The first thing they’ll do every morning is start up the FTP server. However, they’d like you to configure it to automatically shut down every day at 7 PM Coordinated Universal Time (UTC).&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure automatic shutdown on a virtual machine
&lt;/h2&gt;

&lt;p&gt;1.From the Azure portal home page, in the search box, enter &lt;strong&gt;virtual machines&lt;/strong&gt;.&lt;br&gt;
2.Select &lt;strong&gt;virtual machines&lt;/strong&gt; under services.&lt;br&gt;
3.Select the &lt;em&gt;guided-project-vm&lt;/em&gt; virtual machine.&lt;br&gt;
4.Under the &lt;strong&gt;Operations&lt;/strong&gt; submenu, select &lt;strong&gt;Auto-shutdown&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyqlodt09aspnsyiwyv2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyqlodt09aspnsyiwyv2.png" alt="auto" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5.In order to let late uploads finish, set the &lt;strong&gt;Scheduled shutdown&lt;/strong&gt; to &lt;code&gt;7:15:00 PM&lt;/code&gt;.&lt;br&gt;
6.Select &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwp82kwqamfj68zip2ve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwp82kwqamfj68zip2ve.png" alt="save" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7.Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;
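&lt;p&gt;The same schedule can be set from the Azure CLI; &lt;code&gt;--time&lt;/code&gt; takes the shutdown time as HHMM in UTC, so 7:15 PM becomes &lt;code&gt;1915&lt;/code&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Schedule a daily auto-shutdown at 19:15 UTC
az vm auto-shutdown \
    --resource-group guided-project-rg \
    --name guided-project-vm \
    --time 1915
&lt;/code&gt;&lt;/pre&gt;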

&lt;p&gt;Congratulations! You’ve successfully completed all of the management tasks the Azure admin needed a hand with for the virtual machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Effective virtual machine management goes beyond simply deploying servers; it involves optimizing performance, securing network access, scaling resources when needed, and &lt;strong&gt;controlling operational costs&lt;/strong&gt;. By completing these tasks in &lt;strong&gt;Microsoft Azure&lt;/strong&gt;, you’ve demonstrated practical cloud administration skills that are essential for maintaining reliable and efficient infrastructure.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>virtualmachine</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Microsoft Azure Management Tasks: Update the Virtual Network</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Thu, 12 Mar 2026 09:53:17 +0000</pubDate>
      <link>https://forem.com/rahimah_dev/microsoft-azure-management-tasks-update-the-virtual-network-oai</link>
      <guid>https://forem.com/rahimah_dev/microsoft-azure-management-tasks-update-the-virtual-network-oai</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In real-world cloud environments, infrastructure rarely stays static. As workloads evolve, administrators must continuously adjust networks to support new services while maintaining security and efficiency. Imagine being asked to prepare the network for a new FTP server without disrupting existing virtual machines; that’s exactly the kind of task cloud engineers handle daily in &lt;strong&gt;Microsoft Azure&lt;/strong&gt;. &lt;br&gt;
&lt;em&gt;In this exercise, you’ll update an existing virtual network by creating a dedicated subnet, configuring a network security group, and restricting traffic to only what’s necessary, ensuring the new server operates securely within the infrastructure&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This exercise should take approximately &lt;strong&gt;8&lt;/strong&gt; minutes to complete.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Ensure you complete the &lt;strong&gt;Prepare&lt;/strong&gt; exercise before starting this exercise. If you haven’t completed the &lt;strong&gt;Prepare&lt;/strong&gt; exercise, the resources needed for this exercise will not be available.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;You’re helping an Azure Admin maintain resources. While you won’t be responsible for maintaining the entire infrastructure, the Admin will ask you to help out by completing certain tasks. Currently, there’s a Linux virtual machine (VM) that’s underutilized, and a need for a new Linux machine to serve as an FTP server. &lt;br&gt;
However, the Azure admin wants to be able to track network flow and resource utilization for the needed FTP server, so has asked you to start out by provisioning a new subnet. The current subnet should be left alone, as there are future plans for using it for additional VMs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a new subnet on an existing virtual network (vNet)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Log in to Microsoft Azure at &lt;a href="https://portal.azure.com" rel="noopener noreferrer"&gt;https://portal.azure.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;From the Azure portal home page, in the search box, enter &lt;strong&gt;virtual networks&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;virtual networks&lt;/strong&gt; under services.&lt;/li&gt;
&lt;li&gt;Select the &lt;em&gt;guided-project-vnet&lt;/em&gt; virtual network. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd3i1u6ylc8fadmlarnr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd3i1u6ylc8fadmlarnr.png" alt="vnet" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5.From the &lt;em&gt;guided-project-vnet&lt;/em&gt; blade, under settings, select Subnets. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ybbie1i61igcmi3d72b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ybbie1i61igcmi3d72b.png" alt="subnet" width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6.To add a subnet, select &lt;strong&gt;+ Subnet&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zj08x45bgeoh299yogo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zj08x45bgeoh299yogo.png" alt="subnet" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7.For &lt;strong&gt;Subnet purpose&lt;/strong&gt; leave it as &lt;strong&gt;Default&lt;/strong&gt;.&lt;br&gt;
8.For &lt;strong&gt;Name&lt;/strong&gt; enter: &lt;code&gt;ftpSubnet&lt;/code&gt;.&lt;br&gt;
9.Leave the rest of the settings alone and select &lt;strong&gt;Add&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6mdxrxog1kgnawqaelg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6mdxrxog1kgnawqaelg.png" alt="add" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;10.Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;
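&lt;p&gt;The subnet can also be created with the Azure CLI. The address prefix below is an assumption for illustration; pick an unused range inside your vNet’s address space.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Add a new subnet to the existing virtual network
az network vnet subnet create \
    --resource-group guided-project-rg \
    --vnet-name guided-project-vnet \
    --name ftpSubnet \
    --address-prefixes 10.0.1.0/24
&lt;/code&gt;&lt;/pre&gt;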

&lt;p&gt;Congratulations – you’ve completed the creation of a subnet. This subnet is only going to be used for SFTP traffic. To increase security, you need to configure a &lt;strong&gt;Network security group&lt;/strong&gt; to restrict which ports are allowed on the subnet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a network security group
&lt;/h2&gt;

&lt;p&gt;1.From the Azure portal home page, in the search box, enter &lt;strong&gt;virtual networks&lt;/strong&gt;.&lt;br&gt;
2.Select &lt;strong&gt;virtual networks&lt;/strong&gt; under services.&lt;br&gt;
3.Select &lt;strong&gt;Network security groups&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzfjdike20zqb83hs3nd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzfjdike20zqb83hs3nd.png" alt="nsg" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4.Select &lt;strong&gt;+ Create&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4h3b36jwsmfgrmo01ecy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4h3b36jwsmfgrmo01ecy.png" alt="create" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5.Verify the subscription is correct.&lt;br&gt;
6.Select the &lt;em&gt;guided-project-rg&lt;/em&gt; resource group.&lt;br&gt;
7.Enter &lt;code&gt;ftpNSG&lt;/code&gt; for the network security group name.&lt;br&gt;
8.Select &lt;strong&gt;Review + create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ex00uxgoxzyzjguf61v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ex00uxgoxzyzjguf61v.png" alt="review" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;9.Once the validation is complete, select &lt;strong&gt;Create&lt;/strong&gt;.&lt;br&gt;
10.Wait for the screen to refresh and display &lt;strong&gt;Your deployment is complete&lt;/strong&gt;.&lt;br&gt;
11.Select &lt;strong&gt;Go to resource&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqcglzgjwpii5wsg7t0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqcglzgjwpii5wsg7t0h.png" alt="goto" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an inbound security rule
&lt;/h2&gt;

&lt;p&gt;1.Under settings, select &lt;strong&gt;Inbound security rules&lt;/strong&gt;.&lt;br&gt;
2.Select &lt;strong&gt;+ Add&lt;/strong&gt;.&lt;br&gt;
3.Change the &lt;strong&gt;Destination port ranges&lt;/strong&gt; from 8080 to &lt;code&gt;22&lt;/code&gt;.&lt;br&gt;
4.Select &lt;strong&gt;TCP&lt;/strong&gt; for the protocol.&lt;br&gt;
5.Set the name to &lt;code&gt;ftpInbound&lt;/code&gt;.&lt;br&gt;
6.Select &lt;strong&gt;Add&lt;/strong&gt;.&lt;br&gt;
7.Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;
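&lt;p&gt;The inbound rule can likewise be sketched with the Azure CLI. The priority value below is an assumption (any unused value from 100 to 4096 works; lower numbers are evaluated first).&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Allow inbound TCP traffic on port 22 (SFTP) through the NSG
az network nsg rule create \
    --resource-group guided-project-rg \
    --nsg-name ftpNSG \
    --name ftpInbound \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 22
&lt;/code&gt;&lt;/pre&gt;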

&lt;p&gt;Congratulations – you’ve created a new Network security group and configured rules to allow inbound FTP traffic. Now, you’ll need to associate the new network security group with the &lt;strong&gt;ftpSubnet&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Associate a network security group to a subnet
&lt;/h2&gt;

&lt;p&gt;1.From the Azure portal home page, in the search box, enter &lt;strong&gt;virtual networks&lt;/strong&gt;.&lt;br&gt;
2.Select &lt;strong&gt;virtual networks&lt;/strong&gt; under services.&lt;br&gt;
3.Select the &lt;strong&gt;guided-project-vnet&lt;/strong&gt; virtual network.&lt;br&gt;
4.Under settings, select &lt;strong&gt;Subnets&lt;/strong&gt;.&lt;br&gt;
5.Select the &lt;strong&gt;ftpSubnet&lt;/strong&gt; you created.&lt;br&gt;
6.On the Edit subnet page, under the Security section heading, update the Network security group field to &lt;strong&gt;ftpNSG&lt;/strong&gt;.&lt;br&gt;
7.Select &lt;strong&gt;Save&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrmtjkqskwperjwae7fq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrmtjkqskwperjwae7fq.png" alt="nsg" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nicely done. It looks like you’ve completed the work needed to prepare the network for shifting the current Linux VM to a new subnet that’s designed to handle incoming FTP traffic.&lt;/p&gt;

&lt;p&gt;Congratulations! You’ve completed this exercise. Next up is "Manage Virtual Machines".&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By creating a new subnet, configuring a network security group, and associating it with the subnet, you successfully prepared the network to support a secure FTP workload in &lt;strong&gt;Microsoft Azure&lt;/strong&gt;. This small but critical change demonstrates how proper network segmentation and security rules help maintain organized, scalable, and secure cloud environments.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cloudcomputing</category>
      <category>networking</category>
      <category>cloudsecurity</category>
    </item>
    <item>
      <title>Microsoft Azure Management Tasks: Prepare your Environment</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Thu, 12 Mar 2026 09:33:42 +0000</pubDate>
      <link>https://forem.com/rahimah_dev/microsoft-azure-management-tasks-prepare-your-environment-2234</link>
      <guid>https://forem.com/rahimah_dev/microsoft-azure-management-tasks-prepare-your-environment-2234</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Cloud infrastructure is only as reliable as the environment it is built on. Before deploying applications or managing workloads in &lt;strong&gt;Microsoft Azure&lt;/strong&gt;, properly preparing your environment is a critical first step. &lt;br&gt;
&lt;em&gt;Setting up essential resources such as resource groups, virtual networks, virtual machines, and storage accounts ensures that your cloud infrastructure is organized, scalable, and secure&lt;/em&gt;.&lt;br&gt;
This preparation phase lays the foundation for efficient management, easier troubleshooting, and &lt;strong&gt;controlled cloud costs&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: This guided project requires an active Azure subscription. &lt;em&gt;Where possible&lt;/em&gt;, follow the recommended naming conventions to make it easier to clean up the resources for this project at the end. Creating and using Azure resources for this project may increase your Azure costs.&lt;/p&gt;

&lt;p&gt;In the prepare exercise, you set up the environment to complete the rest of the steps in the &lt;strong&gt;Microsoft Azure Management Tasks&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This exercise should take approximately &lt;strong&gt;15&lt;/strong&gt; minutes to complete.&lt;/p&gt;

&lt;h2&gt;
  
  
  Need an Azure account?
&lt;/h2&gt;

&lt;p&gt;If you already have a Microsoft Azure account to use for this lab, skip to &lt;strong&gt;Login to Microsoft Azure&lt;/strong&gt;. If you need to create an Azure account, complete the following steps.&lt;br&gt;
    1. Go to the &lt;code&gt;Azure free account&lt;/code&gt; page.&lt;br&gt;
    2. Select &lt;strong&gt;Try Azure for free&lt;/strong&gt;&lt;br&gt;
    3. Complete the sign-up process for an Azure account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Login to Microsoft Azure
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Log in to Microsoft Azure at &lt;a href="https://portal.azure.com" rel="noopener noreferrer"&gt;https://portal.azure.com&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Create a resource group
&lt;/h2&gt;

&lt;p&gt;In order to make clean-up easy at the end, start by creating a new resource group to hold the resources for this guided project. Grouping resources this way makes it simple to manage, and eventually delete, everything when the project is over.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;From the Azure portal home page, in the search box, enter &lt;strong&gt;resource groups&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Resource groups&lt;/strong&gt; under services. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ozbudk2j4gl5hey11vk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ozbudk2j4gl5hey11vk.png" alt="rg" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: Take note of other resource groups that are already created. During clean up, you want to avoid deleting resource groups that were already here. Pay special attention to a resource group called NetworkWatcherRG. If it doesn’t already exist, the NetworkWatcherRG will be created during this guided project and should be deleted at the end. &lt;strong&gt;However&lt;/strong&gt;, if the NetworkWatcherRG already exists prior to starting this project, you should &lt;strong&gt;NOT&lt;/strong&gt; delete it at the end. It may be helpful to take a screenshot of resource groups that exist before you create the group for the guided project.&lt;/p&gt;

&lt;p&gt;3.Select &lt;strong&gt;Create&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0zjsgly7vpspb7q2z1z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0zjsgly7vpspb7q2z1z.png" alt="create" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Your subscription should already be selected. If you have multiple Azure subscriptions associated with this login, select the one you’d like to use for the guided project.&lt;/p&gt;

&lt;p&gt;4.Enter &lt;code&gt;guided-project-rg&lt;/code&gt; in the &lt;strong&gt;Resource group name&lt;/strong&gt; field.&lt;br&gt;
5.The &lt;strong&gt;Region&lt;/strong&gt; field will automatically populate. Leave the default value.&lt;br&gt;
6.Select &lt;strong&gt;Review + create&lt;/strong&gt;.&lt;br&gt;
7.Select &lt;strong&gt;Create&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsy3eu2r6w04nfjrc9ur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsy3eu2r6w04nfjrc9ur.png" alt="create" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8.Return to the home page of the Azure portal by selecting &lt;strong&gt;Home&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rppx9grufh2l5e420lu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rppx9grufh2l5e420lu.png" alt="home" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a virtual network with one subnet
&lt;/h2&gt;

&lt;p&gt;1.From the Azure portal home page, in the search box, enter &lt;strong&gt;virtual networks&lt;/strong&gt;.&lt;br&gt;
2.Select &lt;strong&gt;virtual networks&lt;/strong&gt; under services. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9m3jt3j98gc8m51ro1vi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9m3jt3j98gc8m51ro1vi.png" alt="vn" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.Select &lt;strong&gt;Create&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6kpnyikugrhminhvmq7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6kpnyikugrhminhvmq7.png" alt="create" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The subscription and resource group should automatically fill in. Verify that the information filled in matches the correct subscription and the new resource group created for the guided project (guided-project-rg if you’re following along with the naming conventions).&lt;/p&gt;

&lt;p&gt;4.Scroll down to the &lt;strong&gt;Instance details&lt;/strong&gt; section and enter &lt;code&gt;guided-project-vnet&lt;/code&gt; for the Virtual network name.&lt;br&gt;
5.Select &lt;strong&gt;Review + create&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2lt10w8kxc4ftva2uvym.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2lt10w8kxc4ftva2uvym.png" alt="review" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6.Select &lt;strong&gt;Create&lt;/strong&gt;.&lt;br&gt;
7.Wait for the screen to refresh and show &lt;strong&gt;Your deployment is complete&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf7m74c6hkf5kdv7qktx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf7m74c6hkf5kdv7qktx.png" alt="complete" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8.Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;
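&lt;p&gt;With the Azure CLI, the virtual network and its default subnet can be sketched in one command. The address prefixes below are assumptions matching the portal defaults; adjust them if your environment requires a different range.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Create the virtual network with a single default subnet
az network vnet create \
    --resource-group guided-project-rg \
    --name guided-project-vnet \
    --address-prefixes 10.0.0.0/16 \
    --subnet-name default \
    --subnet-prefixes 10.0.0.0/24
&lt;/code&gt;&lt;/pre&gt;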

&lt;h2&gt;
  
  
  Create a virtual machine
&lt;/h2&gt;

&lt;p&gt;1.From the Azure portal home page, in the search box, enter &lt;strong&gt;virtual machines&lt;/strong&gt;.&lt;br&gt;
2.Select &lt;strong&gt;virtual machines&lt;/strong&gt; under services. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjje18tb2h9hgre14ezho.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjje18tb2h9hgre14ezho.png" alt="vm" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3. Select &lt;strong&gt;Create&lt;/strong&gt; and then select &lt;strong&gt;Virtual machine&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjwawxa736hvxvxdtdgm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjwawxa736hvxvxdtdgm.png" alt="vm" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The subscription should automatically fill in. Verify that the information filled in matches the correct subscription.&lt;/p&gt;

&lt;p&gt;4. Select &lt;strong&gt;guided-project-rg&lt;/strong&gt; for the &lt;strong&gt;Resource group&lt;/strong&gt;.&lt;br&gt;
5. Enter &lt;code&gt;guided-project-vm&lt;/code&gt; for the &lt;strong&gt;Virtual machine name&lt;/strong&gt;.&lt;br&gt;
6. For the &lt;strong&gt;Image&lt;/strong&gt;, select one of the &lt;strong&gt;Ubuntu Server&lt;/strong&gt; options (for example, Ubuntu Server 24.04 LTS - x64 Gen2).&lt;br&gt;
7. Continue down the &lt;strong&gt;Basics&lt;/strong&gt; page to the &lt;strong&gt;Administrator account&lt;/strong&gt; section.&lt;br&gt;
8. Select &lt;strong&gt;Password&lt;/strong&gt; for the authentication type.&lt;br&gt;
9. Enter &lt;code&gt;guided-project-admin&lt;/code&gt; for the admin &lt;strong&gt;Username&lt;/strong&gt;.&lt;br&gt;
10. Enter a password for the admin account.&lt;br&gt;
11. Confirm the password for the admin account.&lt;br&gt;
12. Leave the remaining settings at their defaults. You can review them if you like, but you shouldn’t change any.&lt;br&gt;
13. Select &lt;strong&gt;Review + create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Once validation has passed, you’ll see an estimate of the hourly cost of running the VM.&lt;/p&gt;

&lt;p&gt;14. Select &lt;strong&gt;Create&lt;/strong&gt; to confirm the resource cost and create the virtual machine.&lt;br&gt;
15. Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;

&lt;h2&gt;Create a storage account&lt;/h2&gt;

&lt;p&gt;1. From the Azure portal home page, in the search box, enter &lt;strong&gt;storage accounts&lt;/strong&gt;.&lt;br&gt;
2. Select &lt;strong&gt;Storage accounts&lt;/strong&gt; under services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fok1wi6bgxcb17r9puu5o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fok1wi6bgxcb17r9puu5o.png" alt="sa" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3. Select &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The subscription and resource group should automatically fill in. Verify that the information filled in matches the correct subscription and the new resource group created for the guided project (guided-project-rg if you’re following along with the naming conventions).&lt;/p&gt;

&lt;p&gt;4. Scroll down to the &lt;strong&gt;Instance details&lt;/strong&gt; section and enter a name for the storage account. Storage account names must be globally unique across Azure (and consist of 3–24 lowercase letters and numbers), so you may need to try a few names before finding one that’s available.&lt;br&gt;
5. Select &lt;strong&gt;Review + create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3omiy6fr2bymj9fysqws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3omiy6fr2bymj9fysqws.png" alt="reviewncreate" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6. Select &lt;strong&gt;Create&lt;/strong&gt;.&lt;br&gt;
7. Wait for the screen to refresh and show &lt;strong&gt;Your deployment is complete&lt;/strong&gt;.&lt;br&gt;
8. Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;

&lt;p&gt;Congratulations! You’ve completed the &lt;strong&gt;Prepare&lt;/strong&gt; exercise. Next, you'll update the virtual network.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In conclusion&lt;/strong&gt;, preparing your Azure environment is more than a setup task; it is a strategic step toward effective cloud management. By organizing resources properly and establishing core infrastructure early, you create a stable foundation for deploying, scaling, and maintaining workloads in &lt;strong&gt;Microsoft Azure&lt;/strong&gt; with confidence and efficiency.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cloudcomputing</category>
      <category>virtualmachine</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
