<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Robert Scott</title>
    <description>The latest articles on Forem by Robert Scott (@robert_scott_339c35174a4d).</description>
    <link>https://forem.com/robert_scott_339c35174a4d</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3445776%2F13c73318-68fa-4fa0-b782-8ecbc2da88c7.png</url>
      <title>Forem: Robert Scott</title>
      <link>https://forem.com/robert_scott_339c35174a4d</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/robert_scott_339c35174a4d"/>
    <language>en</language>
    <item>
      <title>Part 2: Networking the proper way!</title>
      <dc:creator>Robert Scott</dc:creator>
      <pubDate>Mon, 15 Sep 2025 03:10:38 +0000</pubDate>
      <link>https://forem.com/robert_scott_339c35174a4d/part-2-networking-the-proper-way-436c</link>
      <guid>https://forem.com/robert_scott_339c35174a4d/part-2-networking-the-proper-way-436c</guid>
      <description>&lt;h2&gt;
  
  
  From Networking Nightmare to Instant Success: Setting up Kubernetes with Tailscale
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Part 2 of my Kubernetes learning journey - sometimes the solution is simpler than you think&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Yesterday, I shared my &lt;a href="https://dev.to/link-to-previous-article"&gt;troubleshooting marathon&lt;/a&gt; trying to set up a two-node Kubernetes cluster between my home lab and an Azure VM. After hours of debugging network connectivity, container runtimes, and firewall rules, I made a decision that changed everything: &lt;strong&gt;start over with proper networking from day one&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem Recap
&lt;/h2&gt;

&lt;p&gt;My initial setup looked like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Control Plane&lt;/strong&gt;: Home tower at &lt;code&gt;192.168.1.244&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Worker Node&lt;/strong&gt;: Azure VM at &lt;code&gt;172.16.0.4&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Result&lt;/strong&gt;: Complete network isolation - they couldn't even ping each other&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The fundamental issue wasn't Kubernetes configuration - it was trying to bridge two completely different network environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;My home network (&lt;code&gt;192.168.1.x&lt;/code&gt; subnet)&lt;/li&gt;
&lt;li&gt;Azure's virtual network (&lt;code&gt;172.16.0.x&lt;/code&gt; subnet)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even after opening firewall ports, configuring security groups, and debugging routing tables, the basic connectivity just wasn't there.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Lightbulb Moment: Tailscale
&lt;/h2&gt;

&lt;p&gt;Instead of continuing to fight network routing, I decided to embrace an overlay network solution. Tailscale was already running on my home tower, but I'd been trying to work around it rather than leverage it.&lt;/p&gt;

&lt;p&gt;The plan became simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Tailscale on the Azure VM&lt;/li&gt;
&lt;li&gt;Use Tailscale IPs for all Kubernetes communication&lt;/li&gt;
&lt;li&gt;Let Tailscale handle the networking complexity&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Setting up Tailscale on Azure VM
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Configure Network Security Group for Tailscale
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;This is crucial for Azure VMs&lt;/strong&gt; - Tailscale needs specific UDP ports open to establish connections. Before creating the VM, I added a Network Security Group rule:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rule&lt;/strong&gt;: Allow Tailscale UDP&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Port&lt;/strong&gt;: 41641 (Tailscale's default UDP port)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Protocol&lt;/strong&gt;: UDP&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source&lt;/strong&gt;: Any&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Destination&lt;/strong&gt;: Any&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Action&lt;/strong&gt;: Allow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without this security group rule, Tailscale can install but won't be able to establish connections through Azure's firewall.&lt;/p&gt;
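&lt;p&gt;If you'd rather script this than click through the portal, the equivalent rule can be sketched with the Azure CLI. The resource group and NSG names below are placeholders, not my actual setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Placeholder names - substitute your own resource group and NSG
az network nsg rule create \
  --resource-group my-k8s-rg \
  --nsg-name my-k8s-nsg \
  --name AllowTailscaleUDP \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Udp \
  --destination-port-ranges 41641
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;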

&lt;h3&gt;
  
  
  Step 2: Create the VM with Proper Storage
&lt;/h3&gt;

&lt;p&gt;Next, I set up the Azure VM with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ubuntu 24.04 LTS&lt;/li&gt;
&lt;li&gt;Additional SSD for container storage (temporary storage is fine for learning)&lt;/li&gt;
&lt;li&gt;Default virtual network settings (since Tailscale overlays on top)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Applied the Tailscale security group&lt;/strong&gt; from Step 1
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Format the additional storage&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;fdisk /dev/sdb  &lt;span class="c"&gt;# Create partition&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;mkfs.ext4 /dev/sdb1  &lt;span class="c"&gt;# Format&lt;/span&gt;
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; /mnt/data  &lt;span class="c"&gt;# Mount point&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;mount /dev/sdb1 /mnt/data
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'/dev/sdb1 /mnt/data ext4 defaults 0 0'&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; /etc/fstab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Install Tailscale with Pre-Auth
&lt;/h3&gt;

&lt;p&gt;Instead of the generic Tailscale installation, I used the pre-configured script from my Tailscale admin dashboard:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# SSH into the Azure VM&lt;/span&gt;
ssh username@vm-public-ip

&lt;span class="c"&gt;# Run the account-specific install script (from Tailscale dashboard)&lt;/span&gt;
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://tailscale.com/install.sh | sh &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nt"&gt;--authkey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tskey-auth-xxxxx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The beauty of this approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No manual authentication&lt;/strong&gt; - the VM automatically joins my network&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Immediate connectivity&lt;/strong&gt; - shows up in the dashboard instantly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pre-configured&lt;/strong&gt; - includes any device tags or policies I've set&lt;/li&gt;
&lt;/ul&gt;
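&lt;p&gt;Once the script finishes, the join can be sanity-checked from the VM itself (the exact output will vary by tailnet):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# List the machines in the tailnet and their connection state
tailscale status

# Print this machine's Tailscale IPv4 address (in the 100.64.0.0/10 range)
tailscale ip -4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;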

&lt;h2&gt;
  
  
  The Magic Moment
&lt;/h2&gt;

&lt;p&gt;After the installation completed, I ran the test that had been failing for hours the day before:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# From Azure VM, ping my home tower's Tailscale IP&lt;/span&gt;
ping 100.64.x.x  &lt;span class="c"&gt;# My tower's Tailscale IP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;INSTANTLY&lt;/strong&gt; - ping responses started flooding in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;64 bytes from 100.64.x.x: icmp_seq=1 ttl=64 time=12.3 ms
64 bytes from 100.64.x.x: icmp_seq=2 ttl=64 time=11.8 ms
64 bytes from 100.64.x.x: icmp_seq=3 ttl=64 time=12.1 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a day of network timeouts and connection failures, seeing those ping responses was absolutely magical. &lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Approach Works So Well
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Tailscale eliminates network complexity:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No subnet routing&lt;/strong&gt; - both machines appear on the same virtual network&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No firewall configuration&lt;/strong&gt; - Tailscale handles NAT traversal automatically
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No port forwarding&lt;/strong&gt; - direct machine-to-machine communication&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Encrypted by default&lt;/strong&gt; - secure communication across the internet&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Works anywhere&lt;/strong&gt; - home networks, cloud providers, mobile devices&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;For Kubernetes specifically:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Control plane and worker nodes can communicate directly&lt;/li&gt;
&lt;li&gt;No need to expose API server ports to the internet&lt;/li&gt;
&lt;li&gt;Pod-to-pod networking works seamlessly across locations&lt;/li&gt;
&lt;li&gt;Can easily add more nodes from anywhere&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Difference is Night and Day
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Before Tailscale (Local networking):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# From Azure VM&lt;/span&gt;
ping 192.168.1.244
&lt;span class="c"&gt;# Result: Network unreachable&lt;/span&gt;

telnet 192.168.1.244 6443
&lt;span class="c"&gt;# Result: Connection timed out&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;After Tailscale:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# From Azure VM  &lt;/span&gt;
ping 100.64.x.x
&lt;span class="c"&gt;# Result: Instant responses&lt;/span&gt;

telnet 100.64.x.x 6443  
&lt;span class="c"&gt;# Result: Connected immediately&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key Takeaways for Hybrid Kubernetes Clusters
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Don't fight the network&lt;/strong&gt; - use overlay solutions for cross-environment setups&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tailscale is perfect for hybrid clouds&lt;/strong&gt; - seamlessly connects on-premises and cloud resources&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Start with networking first&lt;/strong&gt; - get connectivity working before diving into Kubernetes configuration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pre-authenticated installation&lt;/strong&gt; - use account-specific scripts for automated setups&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sometimes starting over is fastest&lt;/strong&gt; - don't be afraid to rebuild with lessons learned&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Now that I have rock-solid networking between my home lab and Azure VM, I can focus on what I actually wanted to learn: &lt;strong&gt;Kubernetes cluster management&lt;/strong&gt;. Tomorrow I'll:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run &lt;code&gt;kubeadm init&lt;/code&gt; on my home tower using Tailscale IPs&lt;/li&gt;
&lt;li&gt;Join the Azure VM as a worker node (which should actually work this time!)&lt;/li&gt;
&lt;li&gt;Deploy some test applications across both nodes&lt;/li&gt;
&lt;li&gt;Explore pod scheduling, persistent volumes, and service discovery&lt;/li&gt;
&lt;/ol&gt;
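&lt;p&gt;In sketch form, steps 1 and 2 look like this - the &lt;code&gt;100.64.x.x&lt;/code&gt; address is a placeholder for my tower's Tailscale IP, and the token and hash come straight from the &lt;code&gt;kubeadm init&lt;/code&gt; output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# On the home tower: advertise the Tailscale IP to the cluster
sudo kubeadm init \
  --apiserver-advertise-address=100.64.x.x \
  --cri-socket unix:///var/run/containerd/containerd.sock

# On the Azure VM: join using the command kubeadm init prints
sudo kubeadm join 100.64.x.x:6443 \
  --token &amp;lt;token&amp;gt; \
  --discovery-token-ca-cert-hash sha256:&amp;lt;hash&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;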

&lt;p&gt;The best part? All of this will happen over secure, encrypted Tailscale connections without any additional network configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;Sometimes the solution isn't debugging the existing approach - it's stepping back and choosing a better tool for the job. Tailscale transformed my networking nightmare into a 5-minute setup with instant connectivity.&lt;/p&gt;

&lt;p&gt;If you're building hybrid Kubernetes clusters or just need to connect resources across different networks, don't fight with subnets and firewalls. Use Tailscale and focus on the problems you actually want to solve.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Next up: Actually setting up that Kubernetes cluster now that the machines can talk to each other! Stay tuned for Part 3 where we finally get to run those &lt;code&gt;kubectl&lt;/code&gt; commands.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tags
&lt;/h2&gt;

&lt;p&gt;#kubernetes #networking #tailscale #azure #hybridcloud #devops #infrastructure&lt;/p&gt;

</description>
      <category>tailscale</category>
      <category>kubernetes</category>
      <category>linux</category>
      <category>microsoft</category>
    </item>
    <item>
      <title>Kubernetes cluster marathon!</title>
      <dc:creator>Robert Scott</dc:creator>
      <pubDate>Sun, 14 Sep 2025 04:52:55 +0000</pubDate>
      <link>https://forem.com/robert_scott_339c35174a4d/kubernetes-cluster-marathon-2m70</link>
      <guid>https://forem.com/robert_scott_339c35174a4d/kubernetes-cluster-marathon-2m70</guid>
      <description>&lt;h2&gt;
  
  
  Setting up a Kubernetes Cluster on Ubuntu 24.04: A Troubleshooting Journey
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Or: How I learned that sometimes starting over is the best solution&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Setting up Kubernetes should be straightforward, right? Well, as I discovered today, reality has other plans. Here's my troubleshooting journey setting up a two-node Kubernetes cluster on Ubuntu 24.04, complete with all the roadblocks I hit and how to fix them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Initial Problem: Package Repository Issues
&lt;/h2&gt;

&lt;p&gt;My first hurdle came immediately when trying to install &lt;code&gt;kubectl&lt;/code&gt; and &lt;code&gt;kubelet&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;kubectl kubelet kubeadm
&lt;span class="c"&gt;# Error: couldn't find the programs kubectl and kubelet&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Fix: Updated Repository URLs
&lt;/h3&gt;

&lt;p&gt;The issue was that Google changed their package repository URLs in 2024, but many tutorials still reference the old &lt;code&gt;packages.cloud.google.com&lt;/code&gt; URLs. Here's the correct way for Ubuntu 24.04:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Remove any old repository entries&lt;/span&gt;
&lt;span class="nb"&gt;sudo rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; /etc/apt/sources.list.d/kubernetes.list
&lt;span class="nb"&gt;sudo rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; /etc/apt/keyrings/kubernetes-apt-keyring.gpg

&lt;span class="c"&gt;# Add the current official Kubernetes repository&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; apt-transport-https ca-certificates curl gpg

&lt;span class="c"&gt;# Add the official GPG key (note the updated URL)&lt;/span&gt;
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/kubernetes-apt-keyring.gpg

&lt;span class="c"&gt;# Add the repository&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /'&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/kubernetes.list

&lt;span class="c"&gt;# Update and install&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; kubectl kubelet kubeadm
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-mark hold kubelet kubeadm kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Problem #2: CRI Socket Confusion
&lt;/h2&gt;

&lt;p&gt;When running &lt;code&gt;kubeadm init&lt;/code&gt;, I got:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;error: define which one you wish to use by setting crisocket field for kubeadm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This happens when you have multiple container runtimes installed. Ubuntu 24.04 can have both Docker and containerd available.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Fix: Choose Your Container Runtime
&lt;/h3&gt;

&lt;p&gt;I chose containerd (recommended for modern Kubernetes):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install and configure containerd&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; containerd

&lt;span class="c"&gt;# Generate proper config with CRI enabled&lt;/span&gt;
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/containerd
containerd config default | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/containerd/config.toml
&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/SystemdCgroup = false/SystemdCgroup = true/g'&lt;/span&gt; /etc/containerd/config.toml

&lt;span class="c"&gt;# Make sure CRI plugin isn't disabled&lt;/span&gt;
&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/disabled_plugins = \["cri"\]/disabled_plugins = []/g'&lt;/span&gt; /etc/containerd/config.toml

&lt;span class="c"&gt;# Enable and start&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; containerd

&lt;span class="c"&gt;# Use explicit CRI socket&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm init &lt;span class="nt"&gt;--cri-socket&lt;/span&gt; unix:///var/run/containerd/containerd.sock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Problem #3: The CRI v1 Runtime API Error
&lt;/h2&gt;

&lt;p&gt;Even after installing containerd, I got:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;failed to create new CRI runtime service: validate service connection: validate CRI v1 runtime API for endpoint "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Fix: Proper containerd Configuration
&lt;/h3&gt;

&lt;p&gt;The default containerd config sometimes has the CRI plugin disabled. The key is generating a proper config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Stop containerd&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl stop containerd

&lt;span class="c"&gt;# Remove bad config&lt;/span&gt;
&lt;span class="nb"&gt;sudo rm&lt;/span&gt; /etc/containerd/config.toml

&lt;span class="c"&gt;# Generate proper config&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;containerd config default | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/containerd/config.toml

&lt;span class="c"&gt;# Enable SystemdCgroup (required for Ubuntu 24.04)&lt;/span&gt;
&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/SystemdCgroup = false/SystemdCgroup = true/g'&lt;/span&gt; /etc/containerd/config.toml

&lt;span class="c"&gt;# Ensure CRI plugin is enabled&lt;/span&gt;
&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-A5&lt;/span&gt; &lt;span class="nt"&gt;-B5&lt;/span&gt; disabled_plugins /etc/containerd/config.toml

&lt;span class="c"&gt;# Restart&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;crictl version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Problem #4: Hostname Resolution Issues
&lt;/h2&gt;

&lt;p&gt;During &lt;code&gt;kubeadm init&lt;/code&gt;, I got errors about kubelet not being able to reach the hostname. Ubuntu 24.04 sets up hostname resolution with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;127.0.0.1&lt;/code&gt; for localhost
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;127.0.1.1&lt;/code&gt; for your hostname&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But &lt;code&gt;127.0.1.1&lt;/code&gt; isn't reachable from other machines!&lt;/p&gt;
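&lt;p&gt;Concretely, the stock file looks something like this (the hostname is illustrative), and the fix swaps the loopback alias for the real LAN IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Ubuntu default /etc/hosts
127.0.0.1 localhost
127.0.1.1 tower        # loopback alias - other machines can't reach this

# After the fix
127.0.0.1 localhost
192.168.1.244 tower    # real LAN IP - reachable by worker nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;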

&lt;h3&gt;
  
  
  The Fix: Use Your Real Network IP
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Find your actual network IP&lt;/span&gt;
ip route get 8.8.8.8 | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-oP&lt;/span&gt; &lt;span class="s1"&gt;'src \K\S+'&lt;/span&gt;

&lt;span class="c"&gt;# Update /etc/hosts to use real IP instead of 127.0.1.1&lt;/span&gt;
&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/127.0.1.1/192.168.1.244/'&lt;/span&gt; /etc/hosts

&lt;span class="c"&gt;# Initialize with your real IP&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm init &lt;span class="nt"&gt;--cri-socket&lt;/span&gt; unix:///var/run/containerd/containerd.sock &lt;span class="nt"&gt;--apiserver-advertise-address&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;192.168.1.244
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Problem #5: Port Already in Use
&lt;/h2&gt;

&lt;p&gt;Even after &lt;code&gt;kubeadm reset&lt;/code&gt;, I kept getting "port 6443 is in use" errors.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Fix: Thorough Cleanup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Reset with CRI socket specified&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm reset &lt;span class="nt"&gt;--force&lt;/span&gt; &lt;span class="nt"&gt;--cri-socket&lt;/span&gt; unix:///var/run/containerd/containerd.sock

&lt;span class="c"&gt;# Clean up everything&lt;/span&gt;
&lt;span class="nb"&gt;sudo rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /etc/kubernetes/
&lt;span class="nb"&gt;sudo rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/etcd/
&lt;span class="nb"&gt;sudo rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/kubelet/
&lt;span class="nb"&gt;sudo rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; ~/.kube/
&lt;span class="nb"&gt;sudo rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /etc/cni/net.d/

&lt;span class="c"&gt;# Reset iptables&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-F&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-t&lt;/span&gt; nat &lt;span class="nt"&gt;-F&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-t&lt;/span&gt; mangle &lt;span class="nt"&gt;-F&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;iptables &lt;span class="nt"&gt;-X&lt;/span&gt;

&lt;span class="c"&gt;# Kill any hanging processes&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;pkill &lt;span class="nt"&gt;-f&lt;/span&gt; kube-apiserver
&lt;span class="nb"&gt;sudo &lt;/span&gt;pkill &lt;span class="nt"&gt;-f&lt;/span&gt; etcd
&lt;span class="nb"&gt;sudo &lt;/span&gt;pkill &lt;span class="nt"&gt;-f&lt;/span&gt; kubelet

&lt;span class="c"&gt;# Restart services&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart containerd
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart kubelet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Problem #6: Worker Node Network Connectivity
&lt;/h2&gt;

&lt;p&gt;When trying to join my worker node, I got:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;error execution phase preflight: couldn't validate the identity of the API server - failed to request cluster info configmap: the client timed out waiting for headers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The worker node simply couldn't reach the control plane, even though I was using the correct IP address.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Root Cause: Network Complexity
&lt;/h3&gt;

&lt;p&gt;This is where things got complicated. I had:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tailscale running on the control plane but not the worker&lt;/li&gt;
&lt;li&gt;Potential firewall issues&lt;/li&gt;
&lt;li&gt;VM networking complications&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Final Solution: Sometimes Starting Over Is Best
&lt;/h2&gt;

&lt;p&gt;After hours of debugging network connectivity, container runtime conflicts, and configuration issues, I realized something important: &lt;strong&gt;it's okay to start over&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Instead of continuing to debug a complex setup with multiple moving parts, the better approach was:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Start with a clean VM&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set up Tailscale first&lt;/strong&gt; (before any Kubernetes components)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Use a single container runtime from the beginning&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Use Tailscale IPs for all cluster communication&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This eliminates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network routing issues&lt;/li&gt;
&lt;li&gt;Firewall complications
&lt;/li&gt;
&lt;li&gt;IP address confusion&lt;/li&gt;
&lt;li&gt;Container runtime conflicts&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Google changed Kubernetes repository URLs in 2024&lt;/strong&gt; - use the new &lt;code&gt;pkgs.k8s.io&lt;/code&gt; URLs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ubuntu 24.04 needs SystemdCgroup enabled&lt;/strong&gt; for containerd&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Always specify the CRI socket&lt;/strong&gt; when you have multiple container runtimes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use your real network IP&lt;/strong&gt;, not &lt;code&gt;127.0.1.1&lt;/code&gt; for multi-node clusters
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Thorough cleanup is essential&lt;/strong&gt; when resetting kubeadm&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network connectivity issues are the hardest to debug&lt;/strong&gt; - consider using overlay networks like Tailscale from the start&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Starting over with a plan beats fixing a messy setup&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Most Important Lesson
&lt;/h2&gt;

&lt;p&gt;Don't feel bad about starting over! Kubernetes has a steep learning curve, and networking issues can be genuinely tricky even for experienced developers. Sometimes the fastest path to success is a clean slate with lessons learned.&lt;/p&gt;

&lt;p&gt;Getting the control plane running (which I did!) is actually the hardest part. The worker node join should be straightforward once the networking is sorted out properly.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have you faced similar Kubernetes setup challenges? What was your biggest hurdle? Share your experiences in the comments!&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tags
&lt;/h2&gt;

&lt;p&gt;#kubernetes #ubuntu #devops #troubleshooting #containerization #networking&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>docker</category>
      <category>containers</category>
    </item>
    <item>
      <title>The journey to complete the Docker+Kubernetes pair</title>
      <dc:creator>Robert Scott</dc:creator>
      <pubDate>Tue, 09 Sep 2025 00:13:07 +0000</pubDate>
      <link>https://forem.com/robert_scott_339c35174a4d/the-journey-to-complete-the-dockerkubernetes-pair-5h9g</link>
      <guid>https://forem.com/robert_scott_339c35174a4d/the-journey-to-complete-the-dockerkubernetes-pair-5h9g</guid>
      <description>&lt;p&gt;I've been using Docker for a while. Tonight I downloaded Kubernetes and launched minikube to mess around with it. I deployed a simple nginx pod but sadly failed to get the port working. &lt;br&gt;
  It's interesting because Docker is my default go-to for running any application now. Kubernetes, on the other hand, is definitely a new monster! Just like I had no idea what I was doing with Docker, I'll get the hang of Kubernetes!&lt;br&gt;
  I did successfully follow most of the Kubernetes documentation until I couldn't get anything to work. Luckily my buddy Copilot saved me! The ability to learn with these different AI tools now is pretty remarkable.&lt;br&gt;
  At least tonight I've learned some kubectl commands and tried a deployment. I'll probably just go straight to hard mode and deploy one of my own containers that's running on Docker right now. Don't worry, I'll let you know how it goes!&lt;br&gt;
Visit my LinkedIn!&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/robert-scott-74a093382?utm_source=share&amp;amp;utm_campaign=share_via&amp;amp;utm_content=profile&amp;amp;utm_medium=android_app" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/robert-scott-74a093382?utm_source=share&amp;amp;utm_campaign=share_via&amp;amp;utm_content=profile&amp;amp;utm_medium=android_app&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>kubernetes</category>
      <category>python</category>
    </item>
    <item>
      <title>DevOps from the Driver's Seat, Part 1</title>
      <dc:creator>Robert Scott</dc:creator>
      <pubDate>Fri, 05 Sep 2025 00:27:00 +0000</pubDate>
      <link>https://forem.com/robert_scott_339c35174a4d/devops-from-the-drivers-seat-part-1-k60</link>
      <guid>https://forem.com/robert_scott_339c35174a4d/devops-from-the-drivers-seat-part-1-k60</guid>
      <description>&lt;p&gt;The hum of the diesel engine has been a part of my soundtrack for a decade. From a lumber yard to a trash truck to chemicals, I've hauled it all. Somewhere between the long stretches of highway and the loading racks, I realized I was always chasing something else.&lt;br&gt;
   These days, "DevOps engineer" feels less like a job title and more like a philosophy, which fits me perfectly. I've spent ten years solving problems on the road, but I've always been drawn to building, optimizing, and figuring out how systems of all kinds work together.&lt;br&gt;
   Driving gives me hours to think and listen to tutorials on whatever catches my curiosity. One day, I stumbled across a video about home labs. Within minutes, I was hooked! My wife would say that when I get hooked on something, I go all in!&lt;br&gt;
   Soon I was comparing Ubuntu vs. Windows, bookmarking Docker guides, and sketching network configurations online while loading chemicals. I had no idea how far down this rabbit hole I'd go...or how much it would change my life. I'm sure most of you know how deep that hole can go! Within the first week after getting home, I bought a USB drive and dual-booted my old gaming computer with Ubuntu 24.04. Now I'm more confident at the CLI than in Windows. I'm constantly trying to learn everything I can while leaving time to keep learning Python.&lt;br&gt;
   The more I learned, the more I realized this wasn't just a hobby. It was the same problem-solving rush I'd felt on the road, but now I was building services that could grow, scale, and maybe even become a business. I started to see how this could lead to something more and pay me back for all the years of skills I've built up. It's interesting to see how all of my non-tech life experiences have prepped me for what's to come. &lt;br&gt;
More coming as my journey continues!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>homelab</category>
      <category>python</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
