<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Vakul Keshav</title>
    <description>The latest articles on Forem by Vakul Keshav (@vakul_keshav_46acdf8d9aaf).</description>
    <link>https://forem.com/vakul_keshav_46acdf8d9aaf</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3395744%2F9306f5b9-e75f-4ae8-abf0-d836ea6617fe.png</url>
      <title>Forem: Vakul Keshav</title>
      <link>https://forem.com/vakul_keshav_46acdf8d9aaf</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/vakul_keshav_46acdf8d9aaf"/>
    <language>en</language>
    <item>
      <title>Building My Homelab: The Easiest Way to SSH Remotely</title>
      <dc:creator>Vakul Keshav</dc:creator>
      <pubDate>Fri, 19 Sep 2025 10:19:59 +0000</pubDate>
      <link>https://forem.com/vakul_keshav_46acdf8d9aaf/building-my-homelab-the-easiest-way-to-ssh-remotely-p2d</link>
      <guid>https://forem.com/vakul_keshav_46acdf8d9aaf/building-my-homelab-the-easiest-way-to-ssh-remotely-p2d</guid>
      <description>&lt;p&gt;When I first started building my homelab, one of the things I wanted was a way to access it from anywhere. The obvious choice that people talk about is port forwarding. But here’s the catch: I don’t even have a router, I rely on a mobile hotspot for internet. And mobile hotspots don’t give you a real public IP address, so there’s no way to forward ports.&lt;/p&gt;

&lt;p&gt;Even if I did have a router, I quickly realized that port forwarding isn’t exactly the safest option. It literally opens up a port on your machine to the whole world, and that means anyone out there could try to knock on it. For a beginner like me, that sounded more like trouble than learning.&lt;/p&gt;

&lt;p&gt;That’s when I came across Tailscale. What clicked for me is how simple it makes the whole thing: no worrying about public IPs, no messing with routers, and no exposing ports. It just quietly sets up a secure, private connection between my devices, almost like magic. Suddenly, SSH into my homelab from anywhere went from “complicated and risky” to “easy and safe.”&lt;/p&gt;

&lt;p&gt;In this blog, I will show how I set up SSH into my homelab using Tailscale.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tailscale setup for the Windows laptop
&lt;/h2&gt;

&lt;p&gt;The first step is to create a Tailscale account and download the client on your Windows laptop (this Windows laptop is going to be the host through which I will access my Linux laptop). You can download Tailscale from &lt;a href="https://tailscale.com/download/windows" rel="noopener noreferrer"&gt;this&lt;/a&gt; link.&lt;/p&gt;

&lt;p&gt;When you connect to Tailscale, it will open the window below.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4h6nah58lt485wjmkhlk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4h6nah58lt485wjmkhlk.png" alt=" " width="795" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Click the Connect button shown in the image above. In your admin panel you will see a newly connected machine, like in the image below; it requires approval because I have configured manual approval for new machines. This gives me the flexibility to configure the connection and assign the IP I want. If you want to enable this setting, go to Settings -&amp;gt; Device management -&amp;gt; enable manual approval.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Back in the admin panel, click on the three dots next to the machine and you will see the option to change its IP; configure it as shown below if you like. If you want to read more about the IP addresses and the CGNAT range that Tailscale uses, refer to &lt;a href="https://tailscale.com/kb/1015/100.x-addresses" rel="noopener noreferrer"&gt;this&lt;/a&gt; documentation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7sqsl5yf4kg9u6ucdl8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7sqsl5yf4kg9u6ucdl8.png" alt=" " width="523" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What I like to do is keep the last two octets of my tailnet IP the same as the machine's private IPv4 address; you can verify both using the image below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozefr5t7yud14lmatlvv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozefr5t7yud14lmatlvv.png" alt=" " width="705" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After updating the IP, you need to approve the connection: click on the three dots, then Approve.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tailscale setup for the Linux homelab
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Go to the download page and download Tailscale for Linux; you can use the link provided above, or run the script below.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://tailscale.com/install.sh | sh

sudo apt install tailscale
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To connect Tailscale to your network, run the &lt;code&gt;sudo tailscale up&lt;/code&gt; command and visit the login link it prints.&lt;/li&gt;
&lt;li&gt;If the above command fails because the Tailscale service is not running, start it with &lt;code&gt;sudo systemctl start tailscaled&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;When you log in, it will again show the connect page; connect to the same tailnet as your Windows machine (sign in with the same email).&lt;/li&gt;
&lt;li&gt;In your console you will now see two machines. You can modify the IP of the new machine and then approve it; your setup will look like the image below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh294n4l8v6o1adraqqvk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh294n4l8v6o1adraqqvk.png" alt=" " width="800" height="115"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To verify that the Linux machine is connected, run &lt;code&gt;ip a&lt;/code&gt;; there will be a tailscale interface with the IP you just set.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Enabling SSH to the homelab
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt; In a terminal window on the homelab, run the tailscale set command to advertise SSH for that machine:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tailscale set --ssh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Open the &lt;a href="https://login.tailscale.com/admin/acls/file" rel="noopener noreferrer"&gt;Access Controls&lt;/a&gt; page of the Tailscale admin console and add the following lines to your tailnet policy file to allow network connectivity to the homelab machine:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"grants": [
   {
      "src": ["yoursigninemail@gmail.com"],
      "dst": ["100.78.10.1"],
      "ip": ["22"]
   }
]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In the same tab, add the following rules to the SSH section of your tailnet policy file to allow SSH access to the homelab machine:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"ssh": [
           { "action": "accept",
             "src": ["yoursigninemail@gmail.com"],
             "dst": ["autogroup:self"],
             "users": ["root","autogroup:nonroot", "&amp;lt;your-local-username&amp;gt;"]
           }
       ],
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To see your local username (in my case local-user), run the following command in the homelab terminal: &lt;code&gt;whoami&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
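To avoid retyping the user and IP every time, you can also add a host alias on the Windows side. This is a hypothetical &lt;code&gt;~/.ssh/config&lt;/code&gt; entry (works in Git Bash and with Windows OpenSSH), assuming the username and tailnet IP used in this post:

```
# Hypothetical ~/.ssh/config entry; adjust the user and IP to your setup
Host homelab
    HostName 100.64.65.66
    User local-user
```

With this in place, `ssh homelab` is enough.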

&lt;h2&gt;
  
  
  Access the homelab from the Windows machine
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Now you can access your homelab from your Windows terminal using the following command: &lt;code&gt;ssh local-user@100.64.65.66&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;I am connecting from Git Bash on Windows, and you can see the final result below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkm6xf074ta0pdnjweu6x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkm6xf074ta0pdnjweu6x.png" alt=" " width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>networking</category>
      <category>security</category>
      <category>homelab</category>
    </item>
    <item>
      <title>Kubernetes Basics: A Beginner’s Guide to Container Orchestration</title>
      <dc:creator>Vakul Keshav</dc:creator>
      <pubDate>Thu, 14 Aug 2025 14:21:33 +0000</pubDate>
      <link>https://forem.com/vakul_keshav_46acdf8d9aaf/kubernetes-basics-a-beginners-guide-to-container-orchestration-12k8</link>
      <guid>https://forem.com/vakul_keshav_46acdf8d9aaf/kubernetes-basics-a-beginners-guide-to-container-orchestration-12k8</guid>
      <description>&lt;h2&gt;
  
  
  Prerequisite
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Docker or another container runtime&lt;/li&gt;
&lt;li&gt;Basic Linux knowledge and common commands&lt;/li&gt;
&lt;li&gt;Networking basics: IP addresses, ports, protocols, DNS&lt;/li&gt;
&lt;li&gt;A version control tool such as Git&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Modern applications don’t live on a single server anymore. They run across many environments that could be your laptop, cloud servers, or even multiple clouds. Containers make this possible by packaging apps with everything they need to run.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; when you have hundreds or thousands of containers, how do you start them, keep them running, scale them, and make sure they can talk to each other?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;That’s where Kubernetes (K8s) comes in. Think of it as an air traffic controller for containers as it decides where containers should run, monitors their health, and makes sure your applications are always available.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Kubernetes?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;K8s solves deployment problems that used to be real headaches:

&lt;ul&gt;
&lt;li&gt;Portability → Run anywhere: local, cloud, or hybrid.&lt;/li&gt;
&lt;li&gt;Self-healing → If a container crashes, Kubernetes restarts it automatically.&lt;/li&gt;
&lt;li&gt;Scalability → Automatically adds more containers when load increases.&lt;/li&gt;
&lt;li&gt;Observability → Built-in monitoring and logging tools&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Before Kubernetes, deploying an app meant manually setting up servers, load balancers, and networking. Today, Kubernetes handles all that automatically.&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Before vs After Kubernetes
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Before:&lt;/strong&gt;&lt;br&gt;
Imagine you had to deploy an app on three servers. You’d SSH into each, start the app, set up load balancing, and hope nothing crashed. If a server went down, you’d have to fix it manually. So much to do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After:&lt;/strong&gt;&lt;br&gt;
You tell Kubernetes how many copies of your app you want, and it does the rest from scheduling pods to handling traffic, and even replacing failed instances automatically.&lt;/p&gt;
&lt;h2&gt;
  
  
  Kubernetes Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyhakenmr8gth1vuzw5q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyhakenmr8gth1vuzw5q.png" alt="Diagram showing Kubernetes architecture with control plane and worker nodes" width="800" height="554"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Kubernetes Cluster
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;In the last part we discussed that before K8s we had to manually set everything up across all the machines. With K8s, those machines come under its management and form a group called a &lt;strong&gt;Kubernetes cluster.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;These machines, whether physical or virtual, are called &lt;strong&gt;nodes.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Kubernetes manages them as one unified system, so you don’t have to think about each server individually.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It has two main parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Control Plane:&lt;/strong&gt; The brain of Kubernetes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Worker Nodes:&lt;/strong&gt; The hands that do the actual work of running your applications.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Control Plane (The Brain)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The control plane makes global decisions about the cluster like scheduling, scaling, and responding to failures. Below are the key components of the control plane:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;API Server:&lt;/strong&gt; Acts as the interface for managing the cluster and communicates with all components.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scheduler:&lt;/strong&gt; Decides which node should run each workload.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Controller Manager:&lt;/strong&gt; Handles background tasks such as maintaining node health, scaling and other cluster-wide operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;etcd:&lt;/strong&gt; A key-value store that stores all cluster data, including configuration and state.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Worker Nodes (The Hands)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Worker nodes are where your applications actually run.&lt;/li&gt;
&lt;li&gt;Each worker node has:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubelet:&lt;/strong&gt; Communicates with the control plane and ensures that the containers for a pod are running on that node.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Container Runtime:&lt;/strong&gt; The engine that runs containers (e.g., containerd, CRI-O). Note that Docker itself is not on this list.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kube-proxy:&lt;/strong&gt; Manages networking rules for communication between pods and services.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;If the control plane is the brain, worker nodes are the muscle that executes its instructions.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Pods: The Smallest Deployable Unit
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes does not run containers directly; it runs pods.
A pod is:

&lt;ul&gt;
&lt;li&gt;A wrapper around one or more containers. (like in the image)&lt;/li&gt;
&lt;li&gt;Shared environment: network namespace, storage volumes.&lt;/li&gt;
&lt;li&gt;Temporary: If a pod fails, Kubernetes can replace it automatically.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A pod might run your main application container alongside another container (for very specific use cases) that handles logging.&lt;/li&gt;
&lt;/ul&gt;
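As a sketch, a two-container pod like the logging example above could be declared as follows. The names and images here are hypothetical, and the sidecar would normally read the app's logs from a shared volume:

```yaml
# Hypothetical pod with a main app container and a logging sidecar
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logger
spec:
  volumes:
  - name: logs
    emptyDir: {}            # scratch space shared by both containers
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-sidecar
    image: busybox
    command: ["sh", "-c", "tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
```

Both containers share the pod's network namespace and the `logs` volume, which is what makes the sidecar pattern work.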
&lt;h3&gt;
  
  
  kubectl: Kubernetes Command-Line Tool
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;kubectl is the primary way to interact with a Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;It talks to the API Server to create, inspect, and manage resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# See all nodes in your cluster
kubectl get nodes

# Create a pod running Nginx
kubectl run nginx --image=nginx

# See all pods
kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;So what's happening when you run &lt;strong&gt;kubectl run nginx --image=nginx&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;kubectl sends the request to the API Server.&lt;/li&gt;
&lt;li&gt;The Scheduler chooses a worker node to run the pod.&lt;/li&gt;
&lt;li&gt;The kubelet on that node pulls the image (nginx) and starts the container.&lt;/li&gt;
&lt;li&gt;The pod runs until it’s stopped, deleted, or replaced.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
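Under the hood, that `kubectl run` command amounts to asking the API Server to create a Pod object; the manifest it generates looks roughly like this:

```yaml
# Roughly what kubectl run nginx --image=nginx sends to the API Server
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  containers:
  - name: nginx
    image: nginx
```

Applying this file with `kubectl apply -f` would have essentially the same effect.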

&lt;h3&gt;
  
  
  Installing Kubernetes
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcisqar0xcfdo85rvnunc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcisqar0xcfdo85rvnunc.png" alt=" " width="800" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For now I am personally using Docker Desktop to start Kubernetes, which spins up a single-node Kubernetes cluster.

&lt;ul&gt;
&lt;li&gt;Open Docker Desktop, go to Settings, and you will see the Kubernetes option; click it.&lt;/li&gt;
&lt;li&gt;Check the "Enable Kubernetes" option and you are good to go.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;This is a great way to start practicing k8s.&lt;/li&gt;

&lt;li&gt;You can also use minikube, which runs a single-node Kubernetes cluster and is great for learning.

&lt;ul&gt;
&lt;li&gt;Refer &lt;a href="https://minikube.sigs.k8s.io/docs/start/?arch=%2Fwindows%2Fx86-64%2Fstable%2F.exe%20download" rel="noopener noreferrer"&gt;this&lt;/a&gt; for installing minikube on windows, linux and mac.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>beginners</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Building Scalable CI/CD Pipelines with Azure DevOps, Docker, and Private NPM Packages</title>
      <dc:creator>Vakul Keshav</dc:creator>
      <pubDate>Wed, 06 Aug 2025 16:07:12 +0000</pubDate>
      <link>https://forem.com/vakul_keshav_46acdf8d9aaf/building-scalable-cicd-pipelines-with-azure-devops-docker-and-private-npm-packages-3m7f</link>
      <guid>https://forem.com/vakul_keshav_46acdf8d9aaf/building-scalable-cicd-pipelines-with-azure-devops-docker-and-private-npm-packages-3m7f</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pzgjn4meuvqs4136xol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pzgjn4meuvqs4136xol.png" alt=" " width="800" height="624"&gt;&lt;/a&gt;Over the past few days, I designed and implemented a robust CI/CD pipeline from scratch, tackling the challenges of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integrating Docker builds with private NPM registries (Azure Artifacts)&lt;/li&gt;
&lt;li&gt;Managing secure, token-based authentication inside Docker containers&lt;/li&gt;
&lt;li&gt;Automating deployments for a seamless developer experience&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One key challenge was handling private NPM package authentication during Docker builds without exposing sensitive tokens. After multiple iterations, I designed a scalable approach using Azure DevOps Pipelines, Azure Key Vault for secrets management, and a dynamically injected .npmrc during the build (I will show this later).&lt;/p&gt;

&lt;p&gt;For the private npm registry I am using Azure Artifacts, and &lt;strong&gt;if you want to know how to integrate Azure Artifacts as an npm registry&lt;/strong&gt; then you can refer to &lt;a href="https://learn.microsoft.com/en-us/azure/devops/artifacts/get-started-npm?view=azure-devops" rel="noopener noreferrer"&gt;this&lt;/a&gt; official documentation, and if you want to know how to &lt;strong&gt;publish your first package to Azure Artifacts&lt;/strong&gt; then you can refer to &lt;a href="https://learn.microsoft.com/en-us/azure/devops/artifacts/get-started-npm?view=azure-devops" rel="noopener noreferrer"&gt;this&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dockerfile for securely integrating private npm packages
&lt;/h2&gt;

&lt;p&gt;When working with private NPM registries (like Azure Artifacts), integrating authentication into Docker builds can be tricky. A naive approach of passing tokens through ARG or ENV leads to token leakage in image layers, posing a significant security risk. Here's how I tackled this problem: dynamically generating the .npmrc at build time without exposing sensitive information in the final image. If you want to know how to generate a PAT (personal access token) in Azure DevOps, you can refer to &lt;a href="https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&amp;amp;tabs=Windows" rel="noopener noreferrer"&gt;this&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Stage 1: Build Stage
FROM node:22-alpine AS builder

WORKDIR /app
ARG NPM_AUTH_TOKEN

# Dynamically create .npmrc to authenticate with private NPM registry
RUN echo "registry=https://pkgs.dev.azure.com/{organization-name}/{project-name}/_packaging/{feed-name}/npm/registry/" &amp;gt; .npmrc &amp;amp;&amp;amp; \
    echo "always-auth=true" &amp;gt;&amp;gt; .npmrc &amp;amp;&amp;amp; \
    echo "//pkgs.dev.azure.com/{organization-name}/{project-name}/_packaging/{feed-name}/npm/registry/:username={organization-name}" &amp;gt;&amp;gt; .npmrc &amp;amp;&amp;amp; \
    echo "//pkgs.dev.azure.com/{organization-name}/{project-name}/_packaging/{feed-name}/npm/registry/:_password=${NPM_AUTH_TOKEN}" &amp;gt;&amp;gt; .npmrc &amp;amp;&amp;amp; \
    echo "//pkgs.dev.azure.com/{organization-name}/{project-name}/_packaging/{feed-name}/npm/registry/:email=npm requires email to be set but doesn't use the value" &amp;gt;&amp;gt; .npmrc

# Copy package files
COPY package.json ./

# Install dependencies
RUN npm install

# Copy application source code
COPY . .

# Generate Prisma Client
RUN npx prisma generate

# Clean up sensitive files
RUN rm -rf .npmrc

# Stage 2: Runtime Stage
FROM node:22-alpine

WORKDIR /app

# Copy built application from builder stage
COPY --from=builder /app /app

EXPOSE 3000

CMD ["node", "src/app.js"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;When working with private NPM registries like Azure Artifacts, one common challenge is authenticating the Docker build process without leaking sensitive tokens into the final image. To solve this, I passed the NPM auth token as a build argument and generated an .npmrc file on the fly during the build stage. I also tried creating a .npmrc.docker file to keep the Dockerfile clean and copying it in, but for some reason it was not picking up the token, so I went with the current approach. After installing the dependencies, I made sure to delete the .npmrc file, &lt;strong&gt;ensuring that no secrets get persisted into the final runtime image.&lt;/strong&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To keep the image clean and secure, I used a multi-stage Docker build: the first stage builds the app and installs dependencies, while the second stage only copies the necessary build artifacts. This approach ensures that devDependencies, build caches, and sensitive files never reach production, keeping the image lightweight and secure.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
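To sanity-check the .npmrc generation outside Docker, the same echo chain can be run locally. This is a sketch with placeholder names (myorg/myproject/myfeed and the token are dummy values; Azure Artifacts typically expects the _password to be a base64-encoded PAT):

```shell
# Sketch: reproduce the Dockerfile's .npmrc generation with dummy values
NPM_AUTH_TOKEN="dummy-base64-token"
FEED="pkgs.dev.azure.com/myorg/myproject/_packaging/myfeed/npm/registry/"
{
  echo "registry=https://${FEED}"
  echo "always-auth=true"
  echo "//${FEED}:username=myorg"
  echo "//${FEED}:_password=${NPM_AUTH_TOKEN}"
  echo "//${FEED}:email=not-used@example.com"
} > .npmrc
cat .npmrc   # the token appears only in this throwaway file
```

Running this and inspecting the output is a quick way to confirm the registry lines are well-formed before wiring the token into the pipeline.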

&lt;h3&gt;
  
  
  Automating Docker Builds &amp;amp; VM Deployments with Azure DevOps Pipelines (CI/CD Workflow)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;After containerizing the application securely, the next step was to automate the entire build → push → deploy workflow using Azure DevOps Pipelines. The CI/CD pipeline I designed builds the Docker image, pushes it to Azure Container Registry (ACR), and then deploys it to an Azure Virtual Machine, which also acts as the self-hosted agent for the CD part (discussed later).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;One tricky part was handling environment variables securely during deployment. Instead of hardcoding them, I dynamically created a .env file on the VM during the deployment stage. The pipeline also ensures zero downtime deployments by stopping old containers, cleaning up stale files, pulling the latest image, and running it with updated configurations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I'll walk through each step below.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;trigger:
- none

pool:
  vmImage: ubuntu-latest

variables:
- group: Backend-Auth-Variables  # Azure DevOps Variable Group for secrets

stages:
# Build Stage
- stage: Build
  displayName: Build and Push Docker Image
  jobs:
  - job: BuildAndPushImage
    displayName: Build and Push Docker Image
    steps:
    - task: Bash@3
      inputs:
        targetType: 'inline'
        script: | 
          echo "NPM_AUTH_TOKEN starts with: $(NPM-AUTH-TOKEN:0:4)..."
          docker build --build-arg NPM_AUTH_TOKEN=$(NPM-AUTH-TOKEN) -t &amp;lt;your-acr-name&amp;gt;.azurecr.io/backend-auth:$(Build.BuildId) .

    - task: Docker@2
      inputs:
        containerRegistry: 'ACR-Service-Connection'  # Azure DevOps Service Connection to ACR
        repository: 'backend-auth'
        command: 'push'

# Deploy Stage
- stage: Deploy
  displayName: Deploy to Azure VM
  dependsOn: Build
  jobs:
  - deployment: DeployApp
    displayName: SSH into VM and Deploy Container
    environment: 
      name: Azure_VM_Environment
      resourceName: backend-vm
    strategy:
      runOnce:
        deploy:
          steps:
          - task: Bash@3
            inputs:
              targetType: 'inline'
              script: |
                  #!/bin/bash
                  TARGET_DIR="$HOME/$(Build.Repository.Name)"

                  ENV_FILE_CONTENT="
                    DATABASE_URL=$(DATABASE-URL)
                    API_KEY_SERVICE_1=$(API-KEY-SERVICE-1)
                    API_KEY_SERVICE_2=$(API-KEY-SERVICE-2)
                    JWT_SECRET=$(JWT-SECRET)
                    EMAIL_USER=$(EMAIL-USER)
                    EMAIL_PASS=$(EMAIL-PASS)
                    OAUTH_CLIENT_ID=$(OAUTH-CLIENT-ID)
                    OAUTH_CLIENT_SECRET=$(OAUTH-CLIENT-SECRET)
                    FRONTEND_URL=http://localhost:3001
                    REDIS_PASSWORD=$(REDIS-PASSWORD)
                    REDIS_HOST=$(REDIS-HOST)
                    REDIS_USERNAME=$(REDIS-USERNAME)
                    REDIS_PORT=$(REDIS-PORT)"

                  DOCKER_IMAGE_NAME="&amp;lt;your-acr-name&amp;gt;.azurecr.io/backend-auth:$(Build.BuildId)"
                  CONTAINER_NAME="backend-auth-container"

                  if [ -d "$TARGET_DIR" ]; then
                      echo "Directory exists. Clearing files..."
                      rm -rf "$TARGET_DIR"/*
                  else
                      echo "Directory does not exist. Creating it..."
                      mkdir -p "$TARGET_DIR"
                  fi

                  echo "Creating .env file..."
                  echo "$ENV_FILE_CONTENT" &amp;gt; "$TARGET_DIR/.env"

                  if [ "$(docker ps -aq -f name=$CONTAINER_NAME)" ]; then
                    echo "Stopping and removing old container..."
                    docker rm -f $CONTAINER_NAME
                  else
                    echo "No old container found."
                  fi

                  echo "Pulling Image from ACR..."
                  docker pull $DOCKER_IMAGE_NAME

                  echo "Running new container..."
                  cd $TARGET_DIR
                  docker run -itd --name $CONTAINER_NAME --env-file .env -p 3000:3000 $DOCKER_IMAGE_NAME

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Service Connections: Secure Access to ACR &amp;amp; Azure VM&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;I created a Docker Registry Service Connection in Azure DevOps to authenticate and push images to Azure Container Registry (ACR).&lt;/li&gt;
&lt;li&gt;For deployment, I utilized a self-hosted agent running directly on the Azure Virtual Machine, which eliminated the need for any SSH-based service connections or additional setup complexities. The deploy stage in the pipeline simply executes Bash scripts on the same VM post-build, allowing for seamless container deployment and environment configuration without remote access overhead.&lt;/li&gt;
&lt;li&gt;You can refer &lt;a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/ecosystems/containers/publish-to-acr?view=azure-devops&amp;amp;tabs=javascript%2Cportal%2Cmsi" rel="noopener noreferrer"&gt;this&lt;/a&gt; for setting up self-hosted agent and docker service connection.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Pipeline Execution Flow:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Build &amp;amp; Push Stage

&lt;ul&gt;
&lt;li&gt;A Microsoft-hosted Ubuntu agent performs the Docker build.&lt;/li&gt;
&lt;li&gt;The NPM token (NPM_AUTH_TOKEN) is passed securely as a build argument to handle private package installations.&lt;/li&gt;
&lt;li&gt;The built Docker image is tagged with the current Build ID and pushed to Azure Container Registry (ACR) using a Docker Service Connection.&lt;/li&gt;
&lt;li&gt;I have used two tasks: &lt;code&gt;Bash@3&lt;/code&gt; for custom Docker build steps and &lt;code&gt;Docker@2&lt;/code&gt; for pushing the image to ACR.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Deploy Stage: VM-based Self-Hosted Deployment&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This stage connects to a self-hosted agent (Azure VM) configured as an Azure DevOps Environment Resource.&lt;/li&gt;
&lt;li&gt;It performs the following actions in sequence:

&lt;ul&gt;
&lt;li&gt;Creates a .env file dynamically on the VM with all sensitive configurations using Azure DevOps variables.&lt;/li&gt;
&lt;li&gt;Stops and removes any existing containers.&lt;/li&gt;
&lt;li&gt;Pulls the latest image from ACR.&lt;/li&gt;
&lt;li&gt;Runs the new container with the updated .env configuration.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>azure</category>
      <category>npm</category>
    </item>
  </channel>
</rss>
