<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Andre Nogueira</title>
    <description>The latest articles on Forem by Andre Nogueira (@aanogueira).</description>
    <link>https://forem.com/aanogueira</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3047793%2F93b91bc8-9365-4480-8cf8-14468fecd677.jpg</url>
      <title>Forem: Andre Nogueira</title>
      <link>https://forem.com/aanogueira</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/aanogueira"/>
    <language>en</language>
    <item>
      <title>Home Lab: Chapter 8 — Kubernetes Storage with Rook-Ceph</title>
      <dc:creator>Andre Nogueira</dc:creator>
      <pubDate>Wed, 05 Nov 2025 19:26:08 +0000</pubDate>
      <link>https://forem.com/aanogueira/home-lab-chapter-8-kubernetes-storage-with-rook-ceph-338</link>
      <guid>https://forem.com/aanogueira/home-lab-chapter-8-kubernetes-storage-with-rook-ceph-338</guid>
      <description>&lt;p&gt;Howdy!&lt;/p&gt;

&lt;p&gt;We've come a long way! We've set up our Kubernetes cluster, configured GitOps with ArgoCD, managed secrets securely, exposed applications through Ingress, and set up DNS with SSL certificates. But there's one critical piece we haven't addressed yet: &lt;strong&gt;storage&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this chapter, we'll tackle one of the most challenging aspects of running Kubernetes in a homelab environment - persistent storage. Specifically, we'll explore how I implemented a distributed storage solution using &lt;strong&gt;Rook-Ceph&lt;/strong&gt; to provide reliable, scalable block storage across my cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Storage Challenge
&lt;/h2&gt;

&lt;p&gt;When you run applications on Kubernetes, especially stateful ones like databases, message queues, or monitoring systems, you need persistent storage that survives pod restarts and node failures. Without it, losing a pod means losing all your data.&lt;/p&gt;

&lt;p&gt;In a cloud environment, this is straightforward - you just request a volume from your cloud provider. But in a homelab, you need to build this yourself.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Not Just Use Local Storage?
&lt;/h3&gt;

&lt;p&gt;You might think, "Can't I just mount a local directory on each node?" Technically yes, but there are serious drawbacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No redundancy&lt;/strong&gt; - If a node fails, your data is gone&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Poor availability&lt;/strong&gt; - Pods can't migrate between nodes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limited capacity&lt;/strong&gt; - Bound by individual node storage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manual management&lt;/strong&gt; - You have to handle backups yourself&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a homelab that aims to mimic production environments, this isn't acceptable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing Rook-Ceph
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Rook&lt;/strong&gt; is a cloud-native storage orchestrator that automates the deployment and management of storage systems in Kubernetes. &lt;strong&gt;Ceph&lt;/strong&gt; is a distributed storage platform that provides block storage, object storage, and file system storage.&lt;/p&gt;

&lt;p&gt;Together, Rook-Ceph gives you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Distributed storage&lt;/strong&gt; - Data replicated across multiple nodes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-healing&lt;/strong&gt; - Automatic recovery from node failures&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High availability&lt;/strong&gt; - Pods can migrate freely&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt; - Add nodes to expand storage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Production-ready&lt;/strong&gt; - Used by enterprises worldwide&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Perfect Fit for Homelabs
&lt;/h3&gt;

&lt;p&gt;Rook-Ceph is particularly well-suited for homelab environments because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It uses node-local disks, so no external storage appliances are needed&lt;/li&gt;
&lt;li&gt;It's open-source and free&lt;/li&gt;
&lt;li&gt;It's battle-tested in production&lt;/li&gt;
&lt;li&gt;It manages itself using Kubernetes native resources&lt;/li&gt;
&lt;li&gt;It provides excellent observability and dashboards&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;p&gt;Now that we understand why Rook-Ceph is a great fit, let's dive into how I implemented it in my homelab. I'll walk through the deployment strategy, cluster configuration, storage classes, and some key design decisions that make this setup reliable and scalable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployment Strategy
&lt;/h3&gt;

&lt;p&gt;In my setup, I deployed Rook-Ceph using Kustomize through ArgoCD (as we configured in Chapter 4). This ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Infrastructure-as-code approach&lt;/li&gt;
&lt;li&gt;Automated deployments&lt;/li&gt;
&lt;li&gt;Easy reproducibility&lt;/li&gt;
&lt;li&gt;Version control of all configurations&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cluster Configuration
&lt;/h3&gt;

&lt;p&gt;Here's the core Rook-Ceph cluster setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ceph.rook.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CephCluster&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rook-ceph&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rook-ceph&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;cephVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quay.io/ceph/ceph:v18.2.2&lt;/span&gt;
  &lt;span class="na"&gt;dataDirHostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/rook&lt;/span&gt;
  &lt;span class="na"&gt;mon&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;count&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;              &lt;span class="c1"&gt;# 3 monitors for quorum&lt;/span&gt;
    &lt;span class="na"&gt;allowMultiplePerNode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;dashboard&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;        &lt;span class="c1"&gt;# Web dashboard for monitoring&lt;/span&gt;
  &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;useAllNodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;    &lt;span class="c1"&gt;# Use all nodes in cluster&lt;/span&gt;
    &lt;span class="na"&gt;useAllDevices&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;  &lt;span class="c1"&gt;# Use all available disks&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
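&lt;p&gt;Once ArgoCD syncs this manifest, it's worth confirming the cluster actually reports healthy. A quick check, assuming the optional rook-ceph toolbox deployment is installed (it provides the &lt;code&gt;ceph&lt;/code&gt; CLI):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# High-level cluster status from the CephCluster resource
kubectl -n rook-ceph get cephcluster rook-ceph

# Detailed health via the Ceph CLI inside the toolbox pod
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;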



&lt;h3&gt;
  
  
  Storage Classes
&lt;/h3&gt;

&lt;p&gt;I configured a &lt;strong&gt;StorageClass&lt;/strong&gt; to define how storage should be provisioned:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;storage.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;StorageClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rook-ceph-block&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;storageclass.kubernetes.io/is-default-class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
&lt;span class="na"&gt;provisioner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rook-ceph.rbd.csi.ceph.com&lt;/span&gt;
&lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;clusterID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rook-ceph&lt;/span&gt;
  &lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replicapool&lt;/span&gt;
  &lt;span class="na"&gt;imageFormat&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2"&lt;/span&gt;
  &lt;span class="na"&gt;imageFeatures&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;layering,fast-diff,object-map,deep-flatten,exclusive-lock&lt;/span&gt;
  &lt;span class="na"&gt;csi.storage.k8s.io/fstype&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;xfs&lt;/span&gt;
&lt;span class="na"&gt;reclaimPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Retain&lt;/span&gt;      &lt;span class="c1"&gt;# Keep volumes after deletion&lt;/span&gt;
&lt;span class="na"&gt;allowVolumeExpansion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="c1"&gt;# Scale volumes on demand&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
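&lt;p&gt;The &lt;code&gt;pool: replicapool&lt;/code&gt; parameter refers to a &lt;strong&gt;CephBlockPool&lt;/strong&gt; that needs to exist alongside the cluster. A minimal sketch, using 3-way replication to match the node count:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host  # Replicas land on different nodes
  replicated:
    size: 3            # One copy per node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;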



&lt;h3&gt;
  
  
  Key Design Decisions
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Monitor Count (3)&lt;/strong&gt;: Ceph requires a quorum. With 3 monitors, the cluster tolerates 1 failure. Given my 3-node setup, one monitor per node is ideal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use All Nodes&lt;/strong&gt;: This ensures distributed storage across the entire cluster, maximizing redundancy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use All Devices&lt;/strong&gt;: Any available disk on any node becomes part of the Ceph cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retain Reclaim Policy&lt;/strong&gt;: When a PVC is deleted, the underlying volume is retained (not deleted), providing data safety.&lt;/p&gt;
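&lt;p&gt;One consequence of &lt;code&gt;Retain&lt;/code&gt; worth remembering: deleting a PVC leaves its PersistentVolume behind in the &lt;code&gt;Released&lt;/code&gt; state, so reclaiming space is a deliberate, manual step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# List volumes left behind after PVC deletion
kubectl get pv | grep Released

# Only after confirming the data is no longer needed
# (the backing RBD image may also need manual cleanup in Ceph)
kubectl delete pv &amp;lt;PV_NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;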

&lt;p&gt;&lt;strong&gt;XFS Filesystem&lt;/strong&gt;: More performant and reliable than ext4 for this use case.&lt;/p&gt;

&lt;h2&gt;
  
  
  Object Storage
&lt;/h2&gt;

&lt;p&gt;Beyond block storage, Rook-Ceph also provides &lt;strong&gt;Object Storage&lt;/strong&gt; (S3-compatible) through its RADOS Gateway (RGW) component:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ceph.rook.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CephObjectStore&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ceph-objectstore&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rook-ceph&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;metadataPool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;failureDomain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;host&lt;/span&gt;
    &lt;span class="na"&gt;replicated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;dataPool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;failureDomain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;host&lt;/span&gt;
    &lt;span class="na"&gt;erasureCoded&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;dataChunks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
      &lt;span class="na"&gt;codingChunks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;gateway&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;instances&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This enables me to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Back up applications to S3-compatible storage&lt;/li&gt;
&lt;li&gt;Host private registries&lt;/li&gt;
&lt;li&gt;Create self-hosted object storage alternatives to AWS S3&lt;/li&gt;
&lt;/ul&gt;
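&lt;p&gt;Applications can request a bucket declaratively with an &lt;strong&gt;ObjectBucketClaim&lt;/strong&gt; - the object-storage analogue of a PVC. A minimal sketch, assuming a bucket StorageClass (here called &lt;code&gt;ceph-bucket&lt;/code&gt;) has been created for the object store:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: backup-bucket
spec:
  generateBucketName: backups   # Rook appends a unique suffix
  storageClassName: ceph-bucket # StorageClass backed by ceph-objectstore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Rook then creates the bucket and publishes the S3 endpoint and credentials in a ConfigMap and Secret named after the claim.&lt;/p&gt;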

&lt;h2&gt;
  
  
  Monitoring and Operations
&lt;/h2&gt;

&lt;p&gt;The Rook-Ceph dashboard provides visibility into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cluster health and status&lt;/li&gt;
&lt;li&gt;Capacity and usage metrics&lt;/li&gt;
&lt;li&gt;OSD (storage) performance&lt;/li&gt;
&lt;li&gt;Pool configurations&lt;/li&gt;
&lt;li&gt;Real-time alerts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Accessing it is straightforward through port-forwarding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward &lt;span class="nt"&gt;-n&lt;/span&gt; rook-ceph svc/rook-ceph-mgr-dashboard 7000:7000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
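&lt;p&gt;The dashboard login is &lt;code&gt;admin&lt;/code&gt;, and Rook stores the generated password in a Secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl -n rook-ceph get secret rook-ceph-dashboard-password \
  -o jsonpath="{['data']['password']}" | base64 --decode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;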



&lt;h2&gt;
  
  
  Real-World Usage
&lt;/h2&gt;

&lt;p&gt;With Rook-Ceph in place, provisioning storage for applications is trivial:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-database-pvc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rook-ceph-block&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10Gi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
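&lt;p&gt;Mounting the claim in a workload is the usual &lt;code&gt;volumes&lt;/code&gt;/&lt;code&gt;volumeMounts&lt;/code&gt; pairing. A minimal illustrative pod (the image and paths are just examples):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: my-database
spec:
  containers:
    - name: postgres
      image: postgres:16
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-database-pvc  # The PVC defined above
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;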



&lt;p&gt;Applications request storage, Rook automatically provisions it across the cluster, and data is protected through replication. The complexity is hidden, the benefits are clear.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With Rook-Ceph in place, my homelab now has a &lt;strong&gt;production-grade distributed storage system&lt;/strong&gt;. Applications no longer need to worry about node failures - storage is replicated, self-healing, and highly available.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Complete Foundation
&lt;/h3&gt;

&lt;p&gt;This chapter marks the &lt;strong&gt;completion of the foundational Kubernetes setup&lt;/strong&gt;. Over these 8 chapters, we've built all the bare-bones infrastructure needed to run applications reliably:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hardware &amp;amp; Network&lt;/strong&gt; (Ch. 1) - The physical foundation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Base Infrastructure&lt;/strong&gt; (Ch. 2) - OS, networking, security&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes Cluster&lt;/strong&gt; (Ch. 3) - Orchestration platform&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitOps (ArgoCD)&lt;/strong&gt; (Ch. 4) - Automated deployments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secrets Management&lt;/strong&gt; (Ch. 5) - Secure configurations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ingress &amp;amp; Load Balancing&lt;/strong&gt; (Ch. 6) - External access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DNS &amp;amp; SSL&lt;/strong&gt; (Ch. 7) - Domain names and encryption&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distributed Storage&lt;/strong&gt; (Ch. 8) - Persistent data&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What This Enables
&lt;/h3&gt;

&lt;p&gt;With this foundation in place, you can now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy stateful applications with confidence&lt;/li&gt;
&lt;li&gt;Know they'll survive node failures&lt;/li&gt;
&lt;li&gt;Scale storage by adding nodes&lt;/li&gt;
&lt;li&gt;Update applications safely with zero downtime&lt;/li&gt;
&lt;li&gt;Manage secrets securely&lt;/li&gt;
&lt;li&gt;Access services via stable domain names with valid certificates&lt;/li&gt;
&lt;li&gt;Automate everything through version control&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is genuinely &lt;strong&gt;production-grade infrastructure&lt;/strong&gt; - the kind you'd see in enterprise environments, but tailored for a homelab.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's Next?
&lt;/h3&gt;

&lt;p&gt;From here, the real fun begins. In future quests, we'll explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Running actual applications (databases, message queues, cache layers)&lt;/li&gt;
&lt;li&gt;Monitoring and observability (metrics, logs, alerts)&lt;/li&gt;
&lt;li&gt;CI/CD pipelines (automated testing and deployments)&lt;/li&gt;
&lt;li&gt;Backup strategies and disaster recovery&lt;/li&gt;
&lt;li&gt;Advanced networking and service mesh concepts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But for now, we have a &lt;strong&gt;solid, resilient, production-ready platform&lt;/strong&gt;. Every component we've built is battle-tested, scalable, and self-healing. That's something to be proud of. 🎉&lt;/p&gt;




&lt;p&gt;Originally published at &lt;a href="https://techquests.dev" rel="noopener noreferrer"&gt;https://techquests.dev&lt;/a&gt; on November 5, 2025.&lt;/p&gt;

</description>
      <category>homelab</category>
      <category>selfhost</category>
      <category>cloud</category>
      <category>platform</category>
    </item>
    <item>
      <title>Home Lab: Chapter 7 — Kubernetes DNS and SSL</title>
      <dc:creator>Andre Nogueira</dc:creator>
      <pubDate>Fri, 03 Oct 2025 18:25:38 +0000</pubDate>
      <link>https://forem.com/aanogueira/home-lab-chapter-7-kubernetes-dns-and-ssl-19ac</link>
      <guid>https://forem.com/aanogueira/home-lab-chapter-7-kubernetes-dns-and-ssl-19ac</guid>
      <description>&lt;p&gt;Howdy,&lt;/p&gt;

&lt;p&gt;Our environment is starting to take shape. We have a Kubernetes cluster up and running, an Ingress Controller managing external access to our services, and a way to handle secrets. The next step is making sure our services are accessible from the outside world. To do this, we need to configure DNS and SSL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting a Domain
&lt;/h2&gt;

&lt;p&gt;Before configuring DNS, you need a domain name to access your services. If you don't have one yet, you can register a domain with any registrar you prefer. Popular options include &lt;a href="https://www.namecheap.com/" rel="noopener noreferrer"&gt;Namecheap&lt;/a&gt;, &lt;a href="https://www.godaddy.com/" rel="noopener noreferrer"&gt;GoDaddy&lt;/a&gt;, or &lt;a href="https://domains.squarespace.com/" rel="noopener noreferrer"&gt;Squarespace Domains&lt;/a&gt; - previously known as &lt;a href="https://domains.google/" rel="noopener noreferrer"&gt;Google Domains&lt;/a&gt;. What's important is that you have full access to manage the DNS records for that domain.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Choose a domain that's easy to remember and type. Trust me - you'll thank yourself later when testing and sharing URLs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;While it's not strictly required, I recommend using a domain you own rather than an internal-only domain. Why? Because issuing valid SSL certificates for your own domain is straightforward, allowing you to access your services over HTTPS without headaches.&lt;/p&gt;

&lt;p&gt;If you decide to stick with an internal domain, you'll need to use a self-signed certificate or one issued by a private certificate authority (CA). This works fine for internal use, but accessing the services externally can be tricky. Browsers won't trust your private CA by default, so you'll see warnings unless you install your root CA certificate on your system or browser. Using a domain you own avoids this hassle entirely.&lt;/p&gt;

&lt;h2&gt;
  
  
  DNS and SSL
&lt;/h2&gt;

&lt;p&gt;DNS (Domain Name System) is what translates human-readable domain names into IP addresses, allowing us to access websites and services without memorizing numbers. We've touched on DNS before, but it's worth revisiting since it plays a crucial role in exposing our services. In a previous chapter, we set up a DNS server to resolve some of our internal services - mostly infrastructure-related. Now, we want to extend DNS to resolve the domain names for services that will be accessible both externally and internally.&lt;/p&gt;

&lt;p&gt;Because we have two different scenarios, we'll need two DNS setups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Internal-facing applications&lt;/strong&gt; - accessible only within our network.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Public-facing applications&lt;/strong&gt; - accessible from the internet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To keep things simple, I'll use two separate DNS servers for these scenarios. One server will manage public records, and the other will manage internal records. This isn't strictly required - we could use a single DNS server for both - but separating them helps avoid conflicts and keeps things organized.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Some configuration details will vary depending on the DNS solution you choose. In this &lt;em&gt;guide&lt;/em&gt;, I'll be using Bind9 for internal-facing applications and Cloudflare for public-facing applications. You can pick whichever DNS servers you prefer, as long as you can manage both internal and external records without conflicts.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Below is a high-level overview of the DNS and SSL setup for our Homelab:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6argtxrcuhlqycu7joav.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6argtxrcuhlqycu7joav.png" alt="Kubernetes DNS and SSL" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Internal-facing DNS records
&lt;/h2&gt;

&lt;p&gt;For internal-facing applications, we'll be using Bind9, an open-source authoritative DNS server. This setup allows us to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Host internal DNS records for services accessible only within our network (e.g., &lt;code&gt;nginx.&amp;lt;INTERNAL_DOMAIN&amp;gt;&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Resolve public domains by forwarding requests to external resolvers such as &lt;code&gt;1.1.1.1&lt;/code&gt; (Cloudflare) or &lt;code&gt;8.8.8.8&lt;/code&gt; (Google).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By combining Bind9 with Unbound as a forwarder, our internal DNS setup can resolve both internal and external domains: Bind9 answers authoritatively for our zones, while Unbound handles everything else.&lt;/p&gt;
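&lt;p&gt;The hand-off between the two can be sketched in &lt;code&gt;unbound.conf&lt;/code&gt; - Unbound answers clients, sends queries for the internal zone to Bind9, and recurses for everything else (the addresses and domain below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# unbound.conf (sketch)
server:
    interface: 0.0.0.0
    access-control: 10.0.0.0/8 allow

# Send queries for the internal zone to Bind9's NodePort
stub-zone:
    name: "&amp;lt;INTERNAL_DOMAIN&amp;gt;"
    stub-addr: x.x.x.105@30053
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;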

&lt;h3&gt;
  
  
  Bind9 Setup
&lt;/h3&gt;

&lt;p&gt;We can install Bind9 using the official Helm chart and manage it via GitOps with ArgoCD. Here's an example &lt;code&gt;bind9.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# bind9.yaml&lt;/span&gt;
&lt;span class="c1"&gt;# ...&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/johanneskastl/helm-charts.git&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bind9-0.5.1&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;charts/bind9&lt;/span&gt;
    &lt;span class="na"&gt;helm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;valuesObject&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;internetsystemsconsortium/bind9&lt;/span&gt;
          &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9.21"&lt;/span&gt; &lt;span class="c1"&gt;# 9.19 is not available&lt;/span&gt;
        &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;dns-udp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
            &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;dns-udp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30053&lt;/span&gt;
        &lt;span class="na"&gt;chartMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;authoritative&lt;/span&gt;
        &lt;span class="na"&gt;persistence&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;bind9namedconf&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bind9-named-config&lt;/span&gt;
          &lt;span class="na"&gt;bind9userconfiguration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bind9-config&lt;/span&gt;
&lt;span class="c1"&gt;# ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses the official &lt;code&gt;internetsystemsconsortium/bind9&lt;/code&gt; image.&lt;/li&gt;
&lt;li&gt;Exposes DNS on port &lt;code&gt;30053&lt;/code&gt; via a &lt;code&gt;NodePort&lt;/code&gt; service - this allows external access to the DNS server.&lt;/li&gt;
&lt;li&gt;Persists configuration files so that data is not lost when the pod restarts.&lt;/li&gt;
&lt;li&gt;Sets up Bind9 in &lt;code&gt;authoritative&lt;/code&gt; mode, meaning it will manage DNS records for our internal domain.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Bind9 Configuration
&lt;/h3&gt;

&lt;p&gt;We define the zones and DNS records using a &lt;strong&gt;named configuration&lt;/strong&gt;. A named configuration specifies the zones for which the Bind9 server is authoritative and the associated records.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# bind9-config.yaml&lt;/span&gt;
&lt;span class="na"&gt;named.conf.local&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;key "tsig-key" {&lt;/span&gt;
        &lt;span class="s"&gt;algorithm hmac-sha512;&lt;/span&gt;
        &lt;span class="s"&gt;secret "&amp;lt;SECRET&amp;gt;";&lt;/span&gt;
    &lt;span class="s"&gt;};&lt;/span&gt;
    &lt;span class="s"&gt;zone "&amp;lt;INTERNAL_DOMAIN&amp;gt;" in {&lt;/span&gt;
        &lt;span class="s"&gt;type master;&lt;/span&gt;
        &lt;span class="s"&gt;file "/named_config/&amp;lt;INTERNAL_DOMAIN&amp;gt;.zone";&lt;/span&gt;
        &lt;span class="s"&gt;journal "/config/&amp;lt;INTERNAL_DOMAIN&amp;gt;.zone.jnl";&lt;/span&gt;
        &lt;span class="s"&gt;notify no;&lt;/span&gt;
        &lt;span class="s"&gt;allow-transfer {&lt;/span&gt;
            &lt;span class="s"&gt;key "tsig-key";&lt;/span&gt;
        &lt;span class="s"&gt;};&lt;/span&gt;
        &lt;span class="s"&gt;update-policy {&lt;/span&gt;
            &lt;span class="s"&gt;grant tsig-key zonesub ANY;&lt;/span&gt;
        &lt;span class="s"&gt;};&lt;/span&gt;
    &lt;span class="s"&gt;};&lt;/span&gt;
  &lt;span class="na"&gt;&amp;lt;INTERNAL_DOMAIN&amp;gt;.zone&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;$TTL 3600 ; 1 hour&lt;/span&gt;
    &lt;span class="s"&gt;@   IN SOA  &amp;lt;INTERNAL_DOMAIN&amp;gt;. &amp;lt;EMAIL&amp;gt;. (&lt;/span&gt;
                  &lt;span class="s"&gt;2025040601 ; serial&lt;/span&gt;
                  &lt;span class="s"&gt;43200      ; refresh (12 hours)&lt;/span&gt;
                  &lt;span class="s"&gt;3600       ; retry (1 hour)&lt;/span&gt;
                  &lt;span class="s"&gt;604800     ; expire (1 week)&lt;/span&gt;
                  &lt;span class="s"&gt;3600       ; minimum (1 hour)&lt;/span&gt;
                &lt;span class="s"&gt;)&lt;/span&gt;
        &lt;span class="s"&gt;IN NS     ns.&amp;lt;INTERNAL_DOMAIN&amp;gt;.&lt;/span&gt;
    &lt;span class="s"&gt;ns  IN A      x.x.x.105&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Explanation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;TSIG key&lt;/strong&gt;: Stands for &lt;strong&gt;Transaction Signature&lt;/strong&gt; - it is used to authenticate and secure dynamic updates to the zone without exposing the server publicly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SOA record&lt;/strong&gt;: Defines the authoritative server and key timing parameters for DNS propagation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NS record&lt;/strong&gt;: Defines the name server for the zone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A record&lt;/strong&gt;: Points the name server to the Bind9 server's IP (&lt;code&gt;x.x.x.105&lt;/code&gt; - cluster &lt;strong&gt;VIP&lt;/strong&gt; address).&lt;/li&gt;
&lt;li&gt;Replace &lt;code&gt;&amp;lt;INTERNAL_DOMAIN&amp;gt;&lt;/code&gt; with your internal domain and &lt;code&gt;&amp;lt;EMAIL&amp;gt;&lt;/code&gt; with the administrator email. Increment the serial number (&lt;code&gt;2025040601&lt;/code&gt;) on every update.&lt;/li&gt;
&lt;/ul&gt;
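&lt;p&gt;The serial convention used above is date-based (&lt;code&gt;YYYYMMDDNN&lt;/code&gt;, where &lt;code&gt;NN&lt;/code&gt; is the revision within the day). A small sketch for generating the first serial of the day:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# First serial of the day in YYYYMMDDNN form, e.g. 2025040601&lt;/span&gt;
date +%Y%m%d01
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;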

&lt;p&gt;We define global options for Bind9 in a separate configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# bind9-named-config.yaml&lt;/span&gt;
&lt;span class="na"&gt;named.conf&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;options {&lt;/span&gt;
      &lt;span class="s"&gt;directory "/var/cache/bind";&lt;/span&gt;

      &lt;span class="s"&gt;dnssec-validation auto;&lt;/span&gt;
      &lt;span class="s"&gt;listen-on port 5053 { any; };&lt;/span&gt;
      &lt;span class="s"&gt;listen-on-v6 port 5053 { any; };&lt;/span&gt;
      &lt;span class="s"&gt;recursion no;&lt;/span&gt;
      &lt;span class="s"&gt;allow-query { any; };&lt;/span&gt;

      &lt;span class="s"&gt;querylog no;&lt;/span&gt;

    &lt;span class="s"&gt;};&lt;/span&gt;
    &lt;span class="s"&gt;include "/named_config/named.conf.local";&lt;/span&gt;

    &lt;span class="s"&gt;// No default zones configured.&lt;/span&gt;
    &lt;span class="s"&gt;// This server is authoritative-only.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Explanation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;directory&lt;/strong&gt;: Location for cache files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;dnssec-validation auto&lt;/strong&gt;: Verifies authenticity of external DNS records.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;recursion no&lt;/strong&gt;: Server does not perform recursive lookups - it only serves authoritative zones.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;allow-query - any&lt;/strong&gt;: Accept queries from any IP.&lt;/li&gt;
&lt;li&gt;Includes the &lt;code&gt;named.conf.local&lt;/code&gt; for zone definitions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After creating &lt;code&gt;bind9.yaml&lt;/code&gt;, &lt;code&gt;bind9-named-config.yaml&lt;/code&gt;, and &lt;code&gt;&amp;lt;INTERNAL_DOMAIN&amp;gt;.zone&lt;/code&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Push the files to your Git repository.&lt;/li&gt;
&lt;li&gt;ArgoCD will detect changes and deploy Bind9 with the defined configuration.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This setup ensures your internal DNS is authoritative, secure, and persistent, and supports dynamic updates for internal-facing applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Record Creation
&lt;/h3&gt;

&lt;p&gt;With the DNS server up and running, we can now start adding records using ExternalDNS.&lt;/p&gt;

&lt;p&gt;ExternalDNS is a Kubernetes controller that automatically manages DNS records for cluster resources such as Services, Ingresses, and more. By adding the &lt;code&gt;external-dns.alpha.kubernetes.io/hostname&lt;/code&gt; annotation to a Kubernetes resource, ExternalDNS can dynamically create or update the corresponding DNS record. It supports multiple DNS providers, including &lt;strong&gt;Cloudflare&lt;/strong&gt;, &lt;strong&gt;AWS Route 53&lt;/strong&gt;, &lt;strong&gt;Google Cloud DNS&lt;/strong&gt;, and - most relevant to us - &lt;a href="https://datatracker.ietf.org/doc/html/rfc2136" rel="noopener noreferrer"&gt;rfc2136&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RFC 2136&lt;/strong&gt; defines the DNS dynamic update protocol, which Bind9 supports, allowing records to be added and changed on the fly. With it, ExternalDNS can manage Bind9 records automatically.&lt;/p&gt;
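&lt;p&gt;Under the hood, an RFC 2136 update is the same kind of message you could send by hand with &lt;code&gt;nsupdate&lt;/code&gt;. A hypothetical update script (the record name and IP are placeholders; piping it through &lt;code&gt;nsupdate -k&lt;/code&gt; with the TSIG key file would apply it):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Write a dynamic update script for nsupdate&lt;/span&gt;
cat &amp;gt; update.txt &amp;lt;&amp;lt;'EOF'
server x.x.x.105 53
zone &amp;lt;INTERNAL_DOMAIN&amp;gt;
update add test.&amp;lt;INTERNAL_DOMAIN&amp;gt; 300 A x.x.x.200
send
EOF
&lt;span class="c"&gt;# Applying it would look like: nsupdate -k tsig.key update.txt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;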

&lt;p&gt;To install ExternalDNS, we can use the official Helm chart. For GitOps-based installation via ArgoCD, create an &lt;code&gt;Application&lt;/code&gt; object in an &lt;code&gt;external-dns.yaml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# external-dns.yaml&lt;/span&gt;
&lt;span class="c1"&gt;# ...&lt;/span&gt;
    &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-dns&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://charts.bitnami.com/bitnami&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;6.7.2&lt;/span&gt;
    &lt;span class="na"&gt;helm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;valuesObject&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rfc2136&lt;/span&gt;
        &lt;span class="na"&gt;regexDomainFilter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;INTERNAL_DOMAIN&amp;gt;&lt;/span&gt;
        &lt;span class="na"&gt;rfc2136&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dns-bind9-dns-tcp.dns.svc.cluster.local&lt;/span&gt;
          &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;53&lt;/span&gt;
          &lt;span class="na"&gt;zone&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;INTERNAL_DOMAIN&amp;gt;&lt;/span&gt;
          &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-dns-tsig-key&lt;/span&gt;
          &lt;span class="na"&gt;tsigKeyname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tsig-key&lt;/span&gt;
          &lt;span class="na"&gt;tsigSecretAlg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hmac-sha512&lt;/span&gt;
          &lt;span class="na"&gt;tsigAxfr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="c1"&gt;# ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the provider is set to &lt;code&gt;rfc2136&lt;/code&gt;, pointing to our Bind9 service. The &lt;code&gt;zone&lt;/code&gt; is the domain we want to manage, and &lt;strong&gt;TSIG&lt;/strong&gt; keys are used for secure updates.&lt;/p&gt;

&lt;p&gt;The key can be generated with the &lt;code&gt;tsig-keygen&lt;/code&gt; utility:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tsig-keygen &lt;span class="nt"&gt;-a&lt;/span&gt; hmac-sha512 tsig-key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output will be a key in the format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;key&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tsig-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;algorithm&lt;/span&gt; &lt;span class="n"&gt;hmac&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;sha512&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;secret&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;C4cYZr0v8IL2l58k0QZtyHd1hMqAbbUOTrZ9I/4WwjIJhkFX3x06BPiRZPXx/Iu76FEy/GzOnMYzPi40CfZ+PQ==&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
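&lt;p&gt;Rather than copying the value by hand, it can be extracted with a one-liner (shown here against a saved sample; the &lt;code&gt;tsig-key.conf&lt;/code&gt; filename and secret value are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Save a sample tsig-keygen output&lt;/span&gt;
cat &amp;gt; tsig-key.conf &amp;lt;&amp;lt;'EOF'
key "tsig-key" {
    algorithm hmac-sha512;
    secret "c2FtcGxlLXNlY3JldA==";
};
EOF
&lt;span class="c"&gt;# Extract just the secret value (what goes into the Kubernetes Secret)&lt;/span&gt;
awk -F'"' '/secret/ {print $2}' tsig-key.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;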



&lt;p&gt;We can then grab the secret value and store it in a Kubernetes Secret named &lt;code&gt;external-dns-tsig-key&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# external-dns-tsig-key.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-dns-tsig-key&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dns&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rfc2136_tsig_secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;C4cYZr0v8IL2l58k0QZtyHd1hMqAbbUOTrZ9I/4WwjIJhkFX3x06BPiRZPXx/Iu76FEy/GzOnMYzPi40CfZ+PQ==&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  TLS Certificates
&lt;/h3&gt;

&lt;p&gt;With DNS in place, we also need secure HTTPS access for our applications. Enter &lt;strong&gt;Cert Manager&lt;/strong&gt; - a Kubernetes controller that automates TLS certificate issuance and renewal. Cert Manager supports multiple issuers, including Let's Encrypt, which we'll use.&lt;/p&gt;

&lt;p&gt;Install Cert Manager via Helm and GitOps with an &lt;code&gt;Application&lt;/code&gt; object in &lt;code&gt;cert-manager.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# cert-manager.yaml&lt;/span&gt;
&lt;span class="c1"&gt;# ...&lt;/span&gt;
    &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cert-manager&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://charts.jetstack.io&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.15.1&lt;/span&gt;
    &lt;span class="na"&gt;helm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;valuesObject&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;installCRDs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;extraArgs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--dns01-recursive-nameservers-only&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--dns01-recursive-nameservers=1.1.1.1:53&lt;/span&gt;
&lt;span class="c1"&gt;# ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This installs the CRDs required for certificate management and makes Cert Manager pre-check DNS01 challenges against an external recursive nameserver (&lt;code&gt;1.1.1.1&lt;/code&gt;) rather than the cluster's internal resolvers - important in a split-horizon setup like ours, where internal answers differ from public ones.&lt;/p&gt;

&lt;p&gt;Next, create a &lt;code&gt;ClusterIssuer&lt;/code&gt; for Let's Encrypt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cert-manager.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIssuer&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;acme&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://acme-v02.api.letsencrypt.org/directory&lt;/span&gt;
    &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;EMAIL&amp;gt;"&lt;/span&gt;
    &lt;span class="na"&gt;privateKeySecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt-production&lt;/span&gt;
    &lt;span class="na"&gt;solvers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;dns01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;cloudflare&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;apiTokenSecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cert-manager-cf-api-token&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;token&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This config allows Cert Manager to issue certificates for our internal apps using DNS01 challenges via Cloudflare.&lt;/p&gt;
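&lt;p&gt;The issuer references a &lt;code&gt;cert-manager-cf-api-token&lt;/code&gt; Secret, which must exist beforehand. A minimal sketch, assuming cert-manager's default cluster-resource namespace (&lt;code&gt;cert-manager&lt;/code&gt;) and a placeholder token:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# cert-manager-cf-api-token.yaml&lt;/span&gt;
apiVersion: v1
kind: Secret
metadata:
  name: cert-manager-cf-api-token
  namespace: cert-manager
type: Opaque
stringData:
  token: &amp;lt;CF_API_TOKEN&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;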

&lt;p&gt;Certificates can then be requested by annotating an Ingress resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Testing the Internal Setup
&lt;/h3&gt;

&lt;p&gt;To test, deploy a simple &lt;code&gt;nginx&lt;/code&gt; application with an Ingress:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# nginx-internal-test.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-internal&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-internal&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-internal&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-internal&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-internal&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-internal&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;external-dns.alpha.kubernetes.io/hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx.&lt;/span&gt;
    &lt;span class="na"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx.&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
            &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
            &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-internal&lt;/span&gt;
                &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;nginx.&lt;/span&gt;
        &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-tls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Apply the configuration:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Apply the configuration&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nginx-internal-test.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Once applied, the following should happen:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The &lt;code&gt;nginx-internal&lt;/code&gt; &lt;strong&gt;Deployment&lt;/strong&gt; will be created and the pod will start running.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;nginx-internal&lt;/code&gt; &lt;strong&gt;Service&lt;/strong&gt; will be created, exposing the pod on port &lt;code&gt;80&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;nginx-internal&lt;/code&gt; &lt;strong&gt;Ingress&lt;/strong&gt; resource will be created, and the hostname &lt;code&gt;nginx.&amp;lt;INTERNAL_DOMAIN&amp;gt;&lt;/code&gt; will be managed by ExternalDNS.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The &lt;code&gt;letsencrypt&lt;/code&gt; &lt;strong&gt;ClusterIssuer&lt;/strong&gt; will be used to issue a TLS certificate for the hostname &lt;code&gt;nginx.&amp;lt;INTERNAL_DOMAIN&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We can check the status of the Ingress resource with:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;## Check the status of the Ingress resource&lt;/span&gt;
kubectl get ingress nginx-internal &lt;span class="nt"&gt;-n&lt;/span&gt; default
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This should show the hostname and the TLS certificate that was issued. If everything is working correctly, we should be able to access the &lt;code&gt;nginx&lt;/code&gt; application using &lt;code&gt;https://nginx.&amp;lt;INTERNAL_DOMAIN&amp;gt;&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;ExternalDNS&lt;/strong&gt; controller will automatically create a DNS record for &lt;code&gt;nginx.&amp;lt;INTERNAL_DOMAIN&amp;gt;&lt;/code&gt; in Bind9, pointing to the IP of the Ingress Controller.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;Bind9 server&lt;/strong&gt; will be able to resolve the hostname &lt;code&gt;nginx.&amp;lt;INTERNAL_DOMAIN&amp;gt;&lt;/code&gt; to the IP of the Ingress Controller, allowing access from the internal network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The &lt;strong&gt;TLS certificate&lt;/strong&gt; will be issued by Let's Encrypt and will be valid for &lt;code&gt;nginx.&amp;lt;INTERNAL_DOMAIN&amp;gt;&lt;/code&gt;. This allows HTTPS access without browser warnings.&lt;/p&gt;

&lt;p&gt;Check the status of the TLS certificate:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;## Check the status of the TLS certificate&lt;/span&gt;
kubectl get certificate nginx-tls &lt;span class="nt"&gt;-n&lt;/span&gt; default
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This should display the certificate's status and expiration date. If everything is working correctly, the certificate should be valid for the next few months.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After issuance, a new secret named &lt;code&gt;nginx-tls&lt;/code&gt; will be created, containing the TLS certificate and private key. The Ingress Controller will use this secret to terminate TLS connections.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;Ingress&lt;/strong&gt; resource will automatically use the &lt;code&gt;nginx-tls&lt;/code&gt; secret for TLS termination.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After completing these steps, the &lt;code&gt;nginx&lt;/code&gt; application should be accessible over HTTPS at &lt;code&gt;https://nginx.&amp;lt;INTERNAL_DOMAIN&amp;gt;&lt;/code&gt; without warnings.&lt;/p&gt;

&lt;p&gt;Test that the new record resolves against the Bind9 service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Resolve the record against one of the Bind9 node ports&lt;/span&gt;
dig &lt;span class="s2"&gt;"nginx.&amp;lt;INTERNAL_DOMAIN&amp;gt;"&lt;/span&gt; @x.x.x.101 &lt;span class="nt"&gt;-p&lt;/span&gt; 30053
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Since the Bind9 service is exposed on port &lt;code&gt;30053&lt;/code&gt; across three nodes, you can use any node for testing.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Unbound Forwarder
&lt;/h3&gt;

&lt;p&gt;As a final step, we can configure Unbound to forward DNS queries to our Bind9 server. This allows Unbound to act as a DNS resolver for the internal network while still resolving public domain names normally.&lt;/p&gt;

&lt;p&gt;To configure this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;strong&gt;OPNsense interface&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Navigate to &lt;code&gt;Services -&amp;gt; Unbound DNS -&amp;gt; Query Forwarding&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Add a new forwarding entry with the following settings:

&lt;ul&gt;
&lt;li&gt;Domain: &lt;code&gt;&amp;lt;INTERNAL_DOMAIN&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Forward IP: &lt;code&gt;x.x.x.101&lt;/code&gt; (select the node you want to forward queries to)&lt;/li&gt;
&lt;li&gt;Port: &lt;code&gt;30053&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Save and apply the changes.&lt;/li&gt;
&lt;/ol&gt;
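&lt;p&gt;For reference, the GUI settings above translate roughly to this Unbound configuration fragment (Unbound accepts an &lt;code&gt;@port&lt;/code&gt; suffix on &lt;code&gt;forward-addr&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;forward-zone:
    name: "&amp;lt;INTERNAL_DOMAIN&amp;gt;."
    forward-addr: x.x.x.101@30053
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;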

&lt;p&gt;Once applied:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Any DNS query for &lt;code&gt;&amp;lt;SERVICE&amp;gt;.&amp;lt;INTERNAL_DOMAIN&amp;gt;&lt;/code&gt; will be forwarded by Unbound to the Bind9 server.&lt;/li&gt;
&lt;li&gt;Bind9 will respond with the internal record from its authoritative zone if it exists.&lt;/li&gt;
&lt;li&gt;Public domains will continue to be resolved via Unbound's configured upstream resolvers (e.g., &lt;code&gt;1.1.1.1&lt;/code&gt;, &lt;code&gt;8.8.8.8&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;This setup ensures that internal-facing applications are accessible from anywhere inside the network using their internal hostnames.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can verify this by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Resolve an internal application using Unbound&lt;/span&gt;
dig nginx.&amp;lt;INTERNAL_DOMAIN&amp;gt; @&amp;lt;FIREWALL_IP&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;; &amp;amp;lt;&amp;amp;lt;&amp;amp;gt;&amp;amp;gt; DiG 9.10.6 &amp;amp;lt;&amp;amp;lt;&amp;amp;gt;&amp;amp;gt; nginx. @
;; global options: +cmd
;; Got answer:
;; -&amp;amp;gt;&amp;amp;gt;HEADER&amp;amp;lt;&amp;amp;lt;- opcode: QUERY, status: NOERROR, id: 23690
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;nginx.. IN   A

;; ANSWER SECTION:
nginx.. 0 IN  A       x.x.x.105

;; Query time: 15 msec
;; SERVER: #53()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The result should return the &lt;strong&gt;internal IP address&lt;/strong&gt; of the Ingress Controller as provided by Bind9.&lt;/p&gt;

&lt;p&gt;We should now be able to access our internal services by hostname, without needing to specify the DNS server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Test the internal service&lt;/span&gt;
curl &lt;span class="s2"&gt;"http://nginx-internal.&amp;lt;INTERNAL_DOMAIN&amp;gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Public-Facing DNS Records
&lt;/h2&gt;

&lt;p&gt;For public-facing applications, we'll be using Cloudflare as our DNS provider. Cloudflare is a content delivery network (CDN) that offers a fast, secure, and reliable network for websites and applications. On top of that, it provides DNS services, allowing us to manage domain names and resolve them to IP addresses easily.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I chose Cloudflare because their free tier lets us manage DNS records and resolve them to IP addresses without cost.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Both DNS records and TLS certificates will be managed through Cloudflare. We'll also take advantage of other features offered in their free plan, like Cloudflare Tunnels, which will simplify securely exposing our services to the public internet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloudflare Tunnels
&lt;/h3&gt;

&lt;p&gt;Cloudflare Tunnels (CF Tunnels) let us expose internal services to the public internet without revealing our own IP address. As the name suggests, they act as a tunnel between the server running your service and Cloudflare itself. When a client accesses your public domain, the IP it sees will be one of Cloudflare's public IP addresses. CF then routes the traffic through the tunnel, letting it reach your infrastructure safely.&lt;/p&gt;

&lt;p&gt;This approach minimizes our attack surface. If we exposed our own IP, we'd be more vulnerable to attacks like DoS or DDoS. By letting Cloudflare handle the initial traffic, we automatically gain features like IP allowlists, attack protection, and traffic control - features we'd otherwise have to implement ourselves. Most importantly for us, it hides our IP, manages SSL certificates, and handles DNS records automatically.&lt;/p&gt;

&lt;p&gt;Luckily, there's a Kubernetes-friendly project called &lt;a href="https://github.com/adyanth/cloudflare-operator" rel="noopener noreferrer"&gt;cloudflare-operator&lt;/a&gt; that simplifies setting up CF Tunnels. It provides custom Kubernetes resources to manage tunnels directly from your cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing the Cloudflare Operator
&lt;/h3&gt;

&lt;p&gt;We can install the operator in our Kubernetes cluster via ArgoCD, just like we've done in previous chapters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# cf-operator.yaml&lt;/span&gt;
&lt;span class="c1"&gt;# ...&lt;/span&gt;
    &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
    &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/adyanth/cloudflare-operator.git&lt;/span&gt;
      &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config/default&lt;/span&gt;
    &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;in-cluster&lt;/span&gt;
      &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cf-operator&lt;/span&gt;
&lt;span class="c1"&gt;# ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we need a Cloudflare API token with the following permissions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloudflare Tunnel: Edit&lt;/li&gt;
&lt;li&gt;Account Settings: Read&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;&amp;lt;PUBLIC_DOMAIN&amp;gt;&lt;/code&gt; DNS: Edit&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We store this token as a Kubernetes secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# cf-api-token.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cf-api-token&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cf-operator&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;CLOUDFLARE_API_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;BASE64_ENCODED_TOKEN&amp;gt;"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
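&lt;p&gt;The token value under &lt;code&gt;data&lt;/code&gt; must be base64-encoded. A quick way to produce it from the raw token (the value below is a placeholder, not a real token):&lt;/p&gt;

```shell
# Base64-encode the raw API token for the Secret's data field
# (placeholder value; substitute your real Cloudflare token)
RAW_TOKEN="example-token-value"
printf '%s' "$RAW_TOKEN" | base64
```

&lt;p&gt;Note the &lt;code&gt;printf '%s'&lt;/code&gt; (rather than a bare &lt;code&gt;echo&lt;/code&gt;): a trailing newline would silently corrupt the encoded token.&lt;/p&gt;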



&lt;h3&gt;
  
  
  Creating a Tunnel
&lt;/h3&gt;

&lt;p&gt;We then define a ClusterTunnel resource to manage the Cloudflare tunnel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# cf-tunnel.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.cfargotunnel.com/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterTunnel&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cf-tunnel&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;newTunnel&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cf-tunnel&lt;/span&gt;
  &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;cloudflare&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;EMAIL&amp;gt;"&lt;/span&gt;
    &lt;span class="na"&gt;domain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;PUBLIC_DOMAIN&amp;gt;"&lt;/span&gt;
    &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cf-api-token&lt;/span&gt;
    &lt;span class="na"&gt;accountName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;ACCOUNT_NAME&amp;gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Exposing an Application
&lt;/h3&gt;

&lt;p&gt;With the tunnel in place, we expose our apps using a &lt;code&gt;TunnelBinding&lt;/code&gt; resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# cf-expose-nginx.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.cfargotunnel.com/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TunnelBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;expose-nginx&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-default&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;fqdn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx.&amp;lt;PUBLIC_DOMAIN&amp;gt;&lt;/span&gt;
      &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://nginx.default.svc.cluster.local:8080&lt;/span&gt;
      &lt;span class="na"&gt;noTlsVerify&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="na"&gt;tunnelRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterTunnel&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cf-tunnel&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cloudflare automatically creates the DNS records and generates TLS certificates for the application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing the Public Setup
&lt;/h3&gt;

&lt;p&gt;To demonstrate how Cloudflare Tunnels work, let's deploy a simple nginx application and expose it through the tunnel we created earlier.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# nginx-external-test.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-external&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-external&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-external&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-external&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-external&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Apply the configuration:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Apply the configuration&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nginx-external-test.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Once applied, the following should happen:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The &lt;code&gt;nginx&lt;/code&gt; &lt;strong&gt;Deployment&lt;/strong&gt; will be created and the pod will start running.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;nginx&lt;/code&gt; &lt;strong&gt;Service&lt;/strong&gt; will be created, exposing the pod on port &lt;code&gt;80&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;expose-nginx&lt;/code&gt; &lt;strong&gt;TunnelBinding&lt;/strong&gt; created earlier will link the Cloudflare Tunnel to the nginx service.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Cloudflare Tunnel&lt;/strong&gt; will be established, allowing external traffic to reach the nginx service.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Cloudflare DNS&lt;/strong&gt; records will be created, pointing to the tunnel.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Cloudflare SSL&lt;/strong&gt; certificates will be issued for the nginx service.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;nginx&lt;/code&gt; service will be accessible via the public domain.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We can check the status of the tunnel using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe tunnelbinding cf-nginx &lt;span class="nt"&gt;-n&lt;/span&gt; default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The resource events show each of these steps as they happen. The output should look like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Type    Reason          Age   From                 Message
  ----    ------          ----  ----                 -------
  Normal  Configuring     15m   cloudflare-operator  Configuring ConfigMap
  Normal  ApplyingConfig  15m   cloudflare-operator  Applying ConfigMap to Deployment
  Normal  AppliedConfig   15m   cloudflare-operator  ConfigMap applied to Deployment
  Normal  Configured      15m   cloudflare-operator  Configured Cloudflare Tunnel
  Normal  MetaSet         15m   cloudflare-operator  TunnelBinding Finalizer and Labels added
  Normal  CreatedDns      15m   cloudflare-operator  Inserted/Updated DNS/TXT entry
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;With these resources in place, the Cloudflare Tunnel can forward external traffic to the nginx service using the &lt;code&gt;TunnelBinding&lt;/code&gt; we created earlier. Users can now access the application via:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Test the application&lt;/span&gt;
curl &lt;span class="s2"&gt;"https://nginx.&amp;lt;PUBLIC_DOMAIN&amp;gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxpuu0xjngs9qxvt1jxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxpuu0xjngs9qxvt1jxy.png" alt="Nginx Internal Landing Page" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This setup demonstrates the full flow: Cloudflare handles DNS &amp;amp; TLS, tunnels the traffic to our cluster, and the Service routes it to the Deployment pod.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this chapter, we tackled one of the most important steps in making our cluster truly usable from anywhere: DNS and SSL. We mapped out the architecture, set up Bind9 for rock-solid internal DNS, and leaned on Cloudflare for public-facing names - all with automation in mind. Thanks to ExternalDNS and Cert-Manager, record creation and TLS issuance now happen without manual intervention, keeping everything secure and up to date.&lt;/p&gt;

&lt;p&gt;With this in place, our homelab services have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A clean separation between internal and public DNS management.&lt;/li&gt;
&lt;li&gt;Automated DNS updates directly from Kubernetes resources.&lt;/li&gt;
&lt;li&gt;Seamless HTTPS access - internally and externally - without scary browser warnings.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The end result? Any service we spin up can be securely exposed, tested, and shared with almost no extra work. We're no longer manually juggling DNS zones or dealing with certificate renewal headaches - it's all declarative, reproducible, and in sync with our GitOps flow.&lt;/p&gt;

&lt;p&gt;From here, we can focus on deploying more useful applications, knowing that they'll &lt;em&gt;just work&lt;/em&gt; whether we're inside the lab or halfway across the world. In the next chapter, we'll start putting this setup to use by deploying real workloads and integrating them into our automated homelab stack.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://techquests.dev" rel="noopener noreferrer"&gt;https://techquests.dev&lt;/a&gt; on August 15, 2025.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>homelab</category>
      <category>selfhost</category>
      <category>cloud</category>
      <category>platform</category>
    </item>
    <item>
      <title>Home Lab: Chapter 6 — Kubernetes Ingress Controller</title>
      <dc:creator>Andre Nogueira</dc:creator>
      <pubDate>Fri, 03 Oct 2025 18:22:02 +0000</pubDate>
      <link>https://forem.com/aanogueira/home-lab-chapter-6-kubernetes-ingress-controller-2b96</link>
      <guid>https://forem.com/aanogueira/home-lab-chapter-6-kubernetes-ingress-controller-2b96</guid>
      <description>&lt;p&gt;Howdy,&lt;/p&gt;

&lt;p&gt;In this chapter, we're going to look at how to expose services running inside our Kubernetes cluster to the outside world using an Ingress Controller. We'll be using the NGINX Ingress Controller and taking full advantage of our GitOps setup with ArgoCD.&lt;/p&gt;

&lt;p&gt;Let's get into it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is an Ingress Controller?
&lt;/h2&gt;

&lt;p&gt;An Ingress Controller is the Kubernetes component that handles external access to services running inside your cluster - usually over HTTP or HTTPS. You can think of it as a smart traffic router sitting at the edge of your cluster.&lt;/p&gt;

&lt;p&gt;It watches for &lt;code&gt;Ingress&lt;/code&gt; resources and knows how to route traffic accordingly. It also supports other nice things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Load balancing&lt;/li&gt;
&lt;li&gt;SSL termination&lt;/li&gt;
&lt;li&gt;Name-based virtual hosting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, it gives you centralized control over how incoming traffic is handled.&lt;/p&gt;

&lt;h2&gt;
  
  
  Preparing the Cluster
&lt;/h2&gt;

&lt;p&gt;Before installing the Ingress Controller, we need to make sure it can actually receive external traffic. For that, we'll expose it using a Kubernetes &lt;code&gt;Service&lt;/code&gt; of type &lt;code&gt;LoadBalancer&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The LoadBalancer service will act as the public entry point, and from there, the Ingress Controller will decide how to route the traffic internally.&lt;/p&gt;

&lt;p&gt;Now, since we're using Cilium as our CNI (as mentioned in &lt;a href="https://dev.to/homelab-chapter-3"&gt;Chapter 3&lt;/a&gt;), we've got a cool feature available to us: Cilium LB IPAM (Load Balancer IP Address Management). This lets us assign specific IPs to LoadBalancer services - perfect for when we want to reserve a static IP for our Ingress Controller.&lt;/p&gt;

&lt;p&gt;This is especially useful if we plan to point a DNS record to the controller later, which is exactly what we'll do in future chapters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assigning a Static IP with Cilium
&lt;/h2&gt;

&lt;p&gt;To assign a static IP using Cilium, we need to define a &lt;code&gt;CiliumLoadBalancerIPPool&lt;/code&gt; object. This object tells Cilium which IPs it can use for LoadBalancer services, and under what conditions.&lt;/p&gt;

&lt;p&gt;Here's the configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# ippool.yaml&lt;/span&gt;
&lt;span class="nt"&gt;---&lt;/span&gt;
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: default-pool
spec:
  blocks:
    - cidr: &lt;span class="s2"&gt;"x.x.x.105/32"&lt;/span&gt;
  serviceSelector:
    matchLabels:
      &lt;span class="s2"&gt;"io.kubernetes.service.namespace"&lt;/span&gt;: &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will assign the IP &lt;code&gt;x.x.x.105&lt;/code&gt; to any LoadBalancer service in the &lt;code&gt;ingress-nginx&lt;/code&gt; namespace.&lt;/p&gt;

&lt;p&gt;To apply it, we can run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Apply the configuration&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ippool.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this in place, the Ingress Controller will get that static IP when we install it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing NGINX Ingress Controller
&lt;/h2&gt;

&lt;p&gt;Now that the IP pool is in place, we can install the NGINX Ingress Controller. Like ArgoCD and Cilium, it ships with an official Helm chart - but since we already have a fully working GitOps setup (thanks to our ArgoCD configuration in the previous chapter), we can add it to the Git repository and have ArgoCD install it for us. We do this by creating an &lt;code&gt;Application&lt;/code&gt; object that defines the NGINX Ingress Controller, in an &lt;code&gt;ingress-nginx.yaml&lt;/code&gt; file with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ingress-nginx.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
  &lt;span class="na"&gt;finalizers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;resources-finalizer.argocd.argoproj.io&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.github.io/ingress-nginx&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;4.12.1&lt;/span&gt;
    &lt;span class="na"&gt;helm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;valuesObject&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;controller&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
            &lt;span class="na"&gt;externalTrafficPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cluster&lt;/span&gt;
            &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;io.cilium/lb-ipam-ips&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;x.x.x.105'&lt;/span&gt;
          &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;in-cluster&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;allowEmpty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;syncOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CreateNamespace=true&lt;/span&gt;
    &lt;span class="na"&gt;managedNamespaceMetadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;pod-security.kubernetes.io/enforce&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;privileged&lt;/span&gt;
        &lt;span class="na"&gt;pod-security.kubernetes.io/enforce-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;latest&lt;/span&gt;
  &lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Description:'&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress controller&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file defines an ArgoCD &lt;code&gt;Application&lt;/code&gt; that will install the NGINX Ingress Controller using the Helm chart from the official repository. Here's a quick breakdown of what each part does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;service.type: LoadBalancer&lt;/code&gt;: creates a service of type &lt;code&gt;LoadBalancer&lt;/code&gt; that will expose the Ingress Controller to the outside world.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;service.externalTrafficPolicy: Cluster&lt;/code&gt;: allows the traffic to be routed to all the nodes in the cluster instead of just the node where the pod is running.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;io.cilium/lb-ipam-ips&lt;/code&gt;: assigns the previously allocated IP address to the LoadBalancer service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;metrics.enabled: true&lt;/code&gt;: enables the metrics server for the Ingress Controller, exposing metrics like the number of requests, response time, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;pod-security.kubernetes.io/enforce&lt;/code&gt;: sets the Pod Security Standards to &lt;code&gt;privileged&lt;/code&gt;, allowing the Ingress Controller to run with elevated privileges. This is necessary for the Ingress Controller to function correctly.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once this file is added to your Git repo, ArgoCD will pick it up and deploy everything.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We can follow the progress of the installation in the ArgoCD UI by running &lt;code&gt;kubectl port-forward svc/argocd-server -n argocd 8080:80&lt;/code&gt; and navigating to &lt;code&gt;http://localhost:8080&lt;/code&gt;. You can log in as &lt;code&gt;admin&lt;/code&gt; with the auto-generated initial password (stored in the &lt;code&gt;argocd-initial-admin-secret&lt;/code&gt; Secret) or the credentials you set up in the previous chapter.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Testing the Ingress Controller
&lt;/h2&gt;

&lt;p&gt;Once ArgoCD has deployed the controller, we can check if it's working by making a simple curl request to the IP we assigned:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Make a request to the Ingress Controller&lt;/span&gt;
curl http://x.x.x.105
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We should now see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;## Response from the Ingress Controller
&lt;span class="nt"&gt;&amp;lt;html&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;head&amp;gt;&amp;lt;title&amp;gt;&lt;/span&gt;404 Not Found&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&amp;lt;/head&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;center&amp;gt;&amp;lt;h1&amp;gt;&lt;/span&gt;404 Not Found&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&amp;lt;/center&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;hr&amp;gt;&amp;lt;center&amp;gt;&lt;/span&gt;nginx&lt;span class="nt"&gt;&amp;lt;/center&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That 404 is actually a good sign - it means the Ingress Controller is up, responding to requests, and just doesn't have any routes defined yet (because we haven't created any Ingress resources).&lt;/p&gt;
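&lt;p&gt;For reference, a minimal &lt;code&gt;Ingress&lt;/code&gt; resource - the kind we'll start creating in the next chapter - might look like this (host and service names are placeholders):&lt;/p&gt;

```yaml
# minimal-ingress.yaml (illustrative; host and service names are placeholders)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test
  namespace: default
spec:
  ingressClassName: nginx
  rules:
    - host: nginx.example.lan
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
```

&lt;p&gt;With a rule like this in place, the same &lt;code&gt;curl&lt;/code&gt; request (with a matching &lt;code&gt;Host&lt;/code&gt; header) would return the backend's response instead of the controller's 404.&lt;/p&gt;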

&lt;h2&gt;
  
  
  Visual Recap
&lt;/h2&gt;

&lt;p&gt;Here's what we just set up, in diagram form:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04vq8vxvmoe6t4adjfas.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04vq8vxvmoe6t4adjfas.png" alt="Kubernetes Ingress Controller Setup" width="800" height="1605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this chapter, we set up the foundation for managing external traffic in our Kubernetes cluster. Here's what we accomplished:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We used Cilium's LB IPAM feature to assign a static IP to a LoadBalancer service.&lt;/li&gt;
&lt;li&gt;We installed the NGINX Ingress Controller using ArgoCD and Helm.&lt;/li&gt;
&lt;li&gt;We validated that the controller is working by sending it a direct request.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we've got an Ingress Controller up and reachable from the outside, we can move on to the next step: making it actually useful by adding routing rules, setting up DNS, and handling SSL/TLS termination.&lt;/p&gt;

&lt;p&gt;In the next chapter, we'll look at how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure DNS records pointing to your static IP&lt;/li&gt;
&lt;li&gt;Automatically issue and renew SSL certificates using cert-manager&lt;/li&gt;
&lt;li&gt;Route traffic to real services using Ingress resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We're just getting started with Ingress - but the foundation is solid. Catch you in the next one.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://techquests.dev" rel="noopener noreferrer"&gt;https://techquests.dev&lt;/a&gt; on July 29, 2025.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>homelab</category>
      <category>selfhost</category>
      <category>cloud</category>
      <category>platform</category>
    </item>
    <item>
      <title>Home Lab: Chapter 5 — Kubernetes Managing Secrets</title>
      <dc:creator>Andre Nogueira</dc:creator>
      <pubDate>Fri, 03 Oct 2025 18:05:53 +0000</pubDate>
      <link>https://forem.com/aanogueira/home-lab-chapter-5-kubernetes-managing-secrets-144h</link>
      <guid>https://forem.com/aanogueira/home-lab-chapter-5-kubernetes-managing-secrets-144h</guid>
      <description>&lt;p&gt;Howdy,&lt;/p&gt;

&lt;p&gt;Secrets are a fundamental part of any application - it's how we securely store sensitive information. In Kubernetes, there are several approaches to handling secrets. In this chapter, we'll explore different ways to manage secrets in Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Secret?
&lt;/h2&gt;

&lt;p&gt;First off, what exactly is a Secret? A Secret is a Kubernetes object that holds a small amount of sensitive data - like a password, token, or key. Without Secrets, you might have to hard-code these values into Pod specs or container images. Users can create Secrets manually, and Kubernetes also generates some automatically.&lt;/p&gt;
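&lt;p&gt;As a minimal example (name and value are placeholders), a Secret manifest can carry the value as plain text via &lt;code&gt;stringData&lt;/code&gt; and let the API server handle the encoding:&lt;/p&gt;

```yaml
# db-credentials.yaml (illustrative; name and value are placeholders)
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  # stringData accepts plain text; the API server stores it base64-encoded
  password: example-password
```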

&lt;h2&gt;
  
  
  Our Scenario
&lt;/h2&gt;

&lt;p&gt;In the previous chapter, we already needed to work with some sensitive data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We generated Talos Secrets, which include a bundle of crucial credentials.&lt;/li&gt;
&lt;li&gt;We created ArgoCD Secrets, which hold a GitHub private key for repository access.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the question is: &lt;em&gt;How can we securely manage these secrets while still getting the automation benefits from Kubernetes and ArgoCD?&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Secrets Management Options
&lt;/h2&gt;

&lt;p&gt;You've got a few choices for managing secrets in Kubernetes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built-in Kubernetes Secrets&lt;/li&gt;
&lt;li&gt;Third-party secret managers (Vault, AWS/Azure/GCP Key Management, etc.)&lt;/li&gt;
&lt;li&gt;Kubernetes operators (e.g. Sealed Secrets by Bitnami)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For simplicity in this chapter, we'll stick with built-in Kubernetes Secrets - though we may revisit other options later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes Secrets
&lt;/h2&gt;

&lt;p&gt;Kubernetes Secrets let you store things like passwords, OAuth tokens, and SSH keys separately from your application code. This is safer and more flexible than embedding secrets directly in Pod specs or container images. However, remember: Kubernetes Secrets are only &lt;strong&gt;Base64&lt;/strong&gt; encoded by default, &lt;strong&gt;not encrypted&lt;/strong&gt;. So if you require encryption at rest, layer in a third-party tool.&lt;/p&gt;
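&lt;p&gt;To see why Base64 is an encoding rather than encryption, note that anyone can reverse it with standard tools - no key required:&lt;/p&gt;

```shell
## Encode a value the same way Kubernetes stores Secret data
encoded=$(printf '%s' 's3cr3t-p4ss' | base64)
echo "$encoded"    # czNjcjN0LXA0c3M=

## Decode it straight back - no key involved
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"    # s3cr3t-p4ss
```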

&lt;h2&gt;
  
  
  Managing Secrets
&lt;/h2&gt;

&lt;p&gt;While Kubernetes makes it easy to use Secrets, we still need an automated and secure way to manage them. That's where ArgoCD comes in. With ArgoCD, we can store our secrets in a Git repository and use a Kustomize overlay to apply them to the cluster.&lt;/p&gt;

&lt;p&gt;Kustomize lets you customize raw, template-free YAML files without altering the originals. This means we can keep clean, reusable manifests while layering in environment-specific or sensitive configurations, like secrets, on top.&lt;/p&gt;

&lt;p&gt;So far, so good. But you might be wondering:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"How can we safely store secrets in Git?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Great question! The answer is encryption - and this is where &lt;a href="https://github.com/getsops/sops" rel="noopener noreferrer"&gt;SOPS&lt;/a&gt; comes in.&lt;/p&gt;

&lt;p&gt;SOPS (Secrets OPerationS) is a flexible tool that encrypts files in a way that lets you decrypt them later when needed. It supports various encryption backends: PGP, GnuPG, AWS KMS, Azure Key Vault, Google Cloud KMS, Vault, and more.&lt;/p&gt;

&lt;p&gt;For this setup, I'll be using &lt;a href="https://github.com/FiloSottile/age" rel="noopener noreferrer"&gt;age&lt;/a&gt; - a modern, simple, and secure encryption tool that serves as a lightweight alternative to GPG.&lt;/p&gt;

&lt;p&gt;By combining:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kustomize&lt;/li&gt;
&lt;li&gt;SOPS + age&lt;/li&gt;
&lt;li&gt;And the KSOPS plugin (a Kustomize plugin for SOPS)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;... we get a powerful GitOps-friendly workflow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnp8dj4gxgzobiuomkb5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnp8dj4gxgzobiuomkb5.png" alt="Secrets Management Flow" width="800" height="1011"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This setup allows us to store encrypted secrets in Git, and ArgoCD will automatically decrypt them using KSOPS when applying them to the cluster. This way, we can manage our secrets securely while still benefiting from GitOps automation.&lt;/p&gt;

&lt;p&gt;Once the secrets are decrypted, they are applied to the cluster as standard Kubernetes Secrets, which can then be consumed by Pods and other resources.&lt;/p&gt;
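&lt;p&gt;For instance, a Pod (hypothetical names) can consume the resulting Secret as an environment variable:&lt;/p&gt;

```yaml
# demo-pod.yaml - hypothetical example
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: nginx:1.27
      env:
        - name: APP_PASSWORD
          valueFrom:
            secretKeyRef:
              name: demo-secret   # the Secret decrypted and applied by ArgoCD
              key: password
```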

&lt;h2&gt;
  
  
  How to use KSOPS
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Generate an age key pair:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;## Generate the age keys&lt;/span&gt;
age-keygen &lt;span class="nt"&gt;-o&lt;/span&gt; ~/.config/sops/age/keys.txt
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;The command prints the corresponding public key:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Public key: age1efe0s548vkwgvjkdtgu4exf9v4mtltjv6rn5yww33yd75ad7r5xsjq7f8l
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a Kubernetes secret to store the age key file (the file holds the private key that ArgoCD will later use for decryption):&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;## Create the sops-age secret&lt;/span&gt;
kubectl create secret generic sops-age &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; argocd &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;keys.txt&lt;span class="o"&gt;=&lt;/span&gt;~/.config/sops/age/keys.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a &lt;code&gt;.sops.yaml&lt;/code&gt; to define which files/fields should be encrypted:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .sops.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;stores&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;yaml&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;indent&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
&lt;span class="na"&gt;creation_rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path_regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secrets.yaml&lt;/span&gt;
    &lt;span class="s"&gt;encrypted_regex&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;^(id|secret|bootstraptoken|secretboxencryptionsecret|token|ca|crt|key)$'&lt;/span&gt;
    &lt;span class="na"&gt;age&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;age1efe0s548vkwgvjkdtgu4exf9v4mtltjv6rn5yww33yd75ad7r5xsjq7f8l&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This configuration instructs SOPS to encrypt the &lt;code&gt;secrets.yaml&lt;/code&gt; file using the &lt;code&gt;age1efe0s548vkwgvjkdtgu4exf9v4mtltjv6rn5yww33yd75ad7r5xsjq7f8l&lt;/code&gt; public key, targeting only the fields whose names match the regular expression &lt;code&gt;^(id|secret|bootstraptoken|secretboxencryptionsecret|token|ca|crt|key)$&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The &lt;code&gt;age1efe0s548vkwgvjkdtgu4exf9v4mtltjv6rn5yww33yd75ad7r5xsjq7f8l&lt;/code&gt; key is the public half of the age key pair we generated in step 1; the matching private key is what the &lt;code&gt;sops-age&lt;/code&gt; Kubernetes secret stores&lt;/p&gt;
&lt;/blockquote&gt;
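&lt;p&gt;A quick local sanity check (plain shell, independent of SOPS itself) of which field names that &lt;code&gt;encrypted_regex&lt;/code&gt; matches:&lt;/p&gt;

```shell
## The encrypted_regex from .sops.yaml above
regex='^(id|secret|bootstraptoken|secretboxencryptionsecret|token|ca|crt|key)$'

## Fields whose names match get encrypted; everything else stays in plaintext
for field in id token ca username metadata; do
  if printf '%s\n' "$field" | grep -Eq "$regex"; then
    echo "$field: encrypted"
  else
    echo "$field: plaintext"
  fi
done
```

&lt;p&gt;The anchors matter: without &lt;code&gt;^&lt;/code&gt; and &lt;code&gt;$&lt;/code&gt;, a field named &lt;code&gt;certificate&lt;/code&gt; would also be matched via the &lt;code&gt;ca&lt;/code&gt; alternative.&lt;/p&gt;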
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Encrypt your file:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;## Encrypt the file content&lt;/span&gt;
ksops &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; secrets.yaml
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;If we now inspect the &lt;code&gt;secrets.yaml&lt;/code&gt; file, we'll see that the content is now&lt;br&gt;
encrypted. To decrypt the file content back to its original state, we can run&lt;br&gt;
the following command:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;## Decrypt the file content&lt;/span&gt;
ksops &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; secrets.yaml
&lt;/code&gt;&lt;/pre&gt;


&lt;blockquote&gt;
&lt;p&gt;Decrypting the file content requires the age private key we generated in step 1 - the same key stored in the &lt;code&gt;sops-age&lt;/code&gt; secret.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Configuring ArgoCD to Manage Secrets
&lt;/h2&gt;

&lt;p&gt;To configure ArgoCD to manage secrets, we need to tweak the &lt;code&gt;values.yaml&lt;/code&gt; file that we created in &lt;a href="https://dev.to/aanogueira/home-lab-chapter-4-kubernetes-gitops-with-argocd-2ml4"&gt;Chapter 4&lt;/a&gt; and add the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# values.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;repoServer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;XDG_CONFIG_HOME&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/.config&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SOPS_AGE_KEY_FILE&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/.config/sops/age/keys.txt&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;custom-tools&lt;/span&gt;
      &lt;span class="na"&gt;emptyDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sops-age&lt;/span&gt;
      &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sops-age&lt;/span&gt;
  &lt;span class="na"&gt;initContainers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;install-ksops&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;viaductoss/ksops:v4.3.3&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/bin/sh'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-c'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;echo "Installing KSOPS...";&lt;/span&gt;
          &lt;span class="s"&gt;mv ksops /custom-tools/;&lt;/span&gt;
          &lt;span class="s"&gt;mv kustomize /custom-tools/;&lt;/span&gt;
          &lt;span class="s"&gt;echo "Done.";&lt;/span&gt;
      &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/custom-tools&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;custom-tools&lt;/span&gt;
  &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/local/bin/kustomize&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;custom-tools&lt;/span&gt;
      &lt;span class="na"&gt;subPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kustomize&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/.config/kustomize/plugin/viaduct.ai/v1/ksops/ksops&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;custom-tools&lt;/span&gt;
      &lt;span class="na"&gt;subPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ksops&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/.config/sops/age/keys.txt&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sops-age&lt;/span&gt;
      &lt;span class="na"&gt;subPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keys.txt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration enables ArgoCD to manage secrets securely using KSOPS and age. It sets the &lt;code&gt;XDG_CONFIG_HOME&lt;/code&gt; environment variable to &lt;code&gt;/.config&lt;/code&gt;, the directory where Kustomize discovers exec plugins such as KSOPS (note the &lt;code&gt;/.config/kustomize/plugin/...&lt;/code&gt; mount above) and where SOPS looks for its configuration. The &lt;code&gt;SOPS_AGE_KEY_FILE&lt;/code&gt; is set to &lt;code&gt;/.config/sops/age/keys.txt&lt;/code&gt;, so SOPS can locate the age private key used for decryption.&lt;/p&gt;

&lt;p&gt;Two volumes are defined: &lt;code&gt;custom-tools&lt;/code&gt;, an &lt;code&gt;emptyDir&lt;/code&gt; volume for storing the KSOPS and Kustomize binaries, and &lt;code&gt;sops-age&lt;/code&gt;, a secret volume that holds the age key file. An &lt;code&gt;initContainer&lt;/code&gt; named &lt;code&gt;install-ksops&lt;/code&gt; installs the necessary binaries into the &lt;code&gt;custom-tools&lt;/code&gt; volume before the main ArgoCD container starts.&lt;/p&gt;

&lt;p&gt;The volumes are mounted inside the pod: custom-tools is mounted at &lt;code&gt;/usr/local/bin/kustomize&lt;/code&gt; and &lt;code&gt;/custom-tools&lt;/code&gt; to make the binaries accessible, while &lt;code&gt;sops-age&lt;/code&gt; is mounted at &lt;code&gt;/.config/sops/age/keys.txt&lt;/code&gt; so that the decryption key is available for SOPS during runtime.&lt;/p&gt;

&lt;p&gt;We also need the &lt;code&gt;sops-age&lt;/code&gt; secret that contains the age key file. If you haven't created it yet in the KSOPS steps above, do so now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Generate the age keys&lt;/span&gt;
age-keygen &lt;span class="nt"&gt;-o&lt;/span&gt; ~/.config/sops/age/keys.txt

&lt;span class="c"&gt;## Create the secret&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; ~/.config/sops/age/keys.txt | kubectl create secret generic sops-age &lt;span class="nt"&gt;--namespace&lt;/span&gt; argocd &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;keys.txt&lt;span class="o"&gt;=&lt;/span&gt;/dev/stdin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the secrets now created, the only thing left to do is apply the&lt;br&gt;
new configuration to the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Upgrade the ArgoCD installation&lt;/span&gt;
helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; argocd argo/argo-cd &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--namespace&lt;/span&gt; argocd &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--values&lt;/span&gt; values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Voilà! ArgoCD is now KSOPS-enabled with age key support. We can keep encrypted secrets in a Git repository and have ArgoCD decrypt and apply them to the cluster automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding Secrets to Git
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create your secret manifest, e.g. &lt;code&gt;example-secret.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# example-secret.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-secret&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;foo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bar&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add an entry in &lt;code&gt;.sops.yaml&lt;/code&gt; to match and encrypt &lt;code&gt;foo&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .sops.yaml&lt;/span&gt;
&lt;span class="c1"&gt;# ...&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path_regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-secret.yaml&lt;/span&gt;
  &lt;span class="na"&gt;encrypted_regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;^foo'&lt;/span&gt;
  &lt;span class="na"&gt;age&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;age1efe0s548vkwgvjkdtgu4exf9v4mtltjv6rn5yww33yd75ad7r5xsjq7f8l&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;And then we can encrypt it by running:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;## Encrypt the file content&lt;/span&gt;
ksops &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; example-secret.yaml
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;If we now inspect the &lt;code&gt;example-secret.yaml&lt;/code&gt; file, we'll see that the content is&lt;br&gt;
now encrypted:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# example-secret.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-secret&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;foo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ENC[AES256_GCM,data:s7FsAPs=,iv:ywvzww/Jq342vkENSEXLxopD8aAf3jCE0TPfwILJz1Q=,tag:DYFKhAVr7pgf1cW5+cevbw==,type:str]&lt;/span&gt;
&lt;span class="na"&gt;sops&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
  &lt;span class="na"&gt;gcp_kms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
  &lt;span class="na"&gt;azure_kv&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
  &lt;span class="na"&gt;hc_vault&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
  &lt;span class="na"&gt;age&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;recipient&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;age1efe0s548vkwgvjkdtgu4exf9v4mtltjv6rn5yww33yd75ad7r5xsjq7f8l&lt;/span&gt;
      &lt;span class="na"&gt;enc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;-----BEGIN AGE ENCRYPTED FILE-----&lt;/span&gt;
        &lt;span class="s"&gt;YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSB0cXo4YTNaYTdGT1Y3U29N&lt;/span&gt;
        &lt;span class="s"&gt;WFR4WlJISTRUaU1jTXVzTUFqZCsxQVBoaEZvCmRQZVAzMS9ZM3RqbTlrSEdyUmJj&lt;/span&gt;
        &lt;span class="s"&gt;ejFZNDVlMEpZY2s3Z1VSdTdQYWk3MmMKLS0tIFQxVUhibEVHSVJtb09XNkRxcVN5&lt;/span&gt;
        &lt;span class="s"&gt;TUVIcmdaRloyUVZzckNMbkpVVXo5WjQK70C/ZvuailOheaSXMM5Rx+CGXZ9K98tw&lt;/span&gt;
        &lt;span class="s"&gt;++Q6PZPafdZxkwSIRjZU6ihAk0L6TXs3MJ93yvn/n3CA9zQp9tDuXg==&lt;/span&gt;
        &lt;span class="s"&gt;-----END AGE ENCRYPTED FILE-----&lt;/span&gt;
  &lt;span class="na"&gt;lastmodified&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2025-04-02T21:12:33Z'&lt;/span&gt;
  &lt;span class="na"&gt;mac&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ENC[AES256_GCM,data:t3MIPtm19pt+Ov27VkQvrDM/4IN48KXiOpQQlP1czWn12sv68pMt/fALxnrSM3jgv2q0reG5j9vJlA9zFPVw8sdudZ7mmY+HoFIfp8ryZOqX1Ro2hBPR4aj9eXBZT5Gjwf8eOYgYKRdOev6pRmtTA5wJ2qRAZkhvBm3mHHp7d+E=,iv:57/nNwc25K0J632kgo7MX7J0FyUN13ED7wwww9qOAMQ=,tag:qV1/qTDyAUrIFah2AeXrmQ==,type:str]&lt;/span&gt;
  &lt;span class="na"&gt;pgp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
  &lt;span class="na"&gt;encrypted_regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;^foo&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;3.9.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;As we can see, the &lt;code&gt;foo&lt;/code&gt; field is now encrypted, and a new &lt;code&gt;sops&lt;/code&gt; field has been added, which contains the encryption metadata.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;kms&lt;/code&gt;, &lt;code&gt;gcp_kms&lt;/code&gt;, &lt;code&gt;azure_kv&lt;/code&gt;, &lt;code&gt;hc_vault&lt;/code&gt;, and &lt;code&gt;pgp&lt;/code&gt; fields are lists of encryption keys for their respective backends. Since none of these were used in this case, they are empty.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;age&lt;/code&gt; field lists the age keys used to encrypt the file. Here, we used the &lt;code&gt;age1efe0s548vkwgvjkdtgu4exf9v4mtltjv6rn5yww33yd75ad7r5xsjq7f8l&lt;/code&gt; key, so the field includes both the recipient and the corresponding encrypted payload.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;lastmodified&lt;/code&gt; field records when the file was last updated.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;mac&lt;/code&gt; field is a message authentication code used to verify the file's integrity and ensure it hasn't been tampered with.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;encrypted_regex&lt;/code&gt; field specifies a regular expression used by SOPS to determine which fields in the document should be encrypted.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;version&lt;/code&gt; field indicates the SOPS file format version used for encryption.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add a Kustomize overlay by creating a &lt;code&gt;kustomization.yaml&lt;/code&gt; file along with a KSOPS generator:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# kustomization.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kustomize.config.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Kustomization&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-secret&lt;/span&gt;
&lt;span class="na"&gt;generators&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;example-secret-generator.yaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;And the associated &lt;code&gt;generator&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# example-secret-generator.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;viaduct.ai/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ksops&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-secret-generator&lt;/span&gt;
&lt;span class="na"&gt;files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;example-secret.yaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This overlay tells Kustomize to use the KSOPS plugin to decrypt &lt;code&gt;example-secret.yaml&lt;/code&gt; before applying it to the cluster, enabling secure GitOps-driven secrets management.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now you can push the changes to your Git repository and let ArgoCD handle the deployment. When ArgoCD applies the configuration, it will automatically decrypt the secret using the provided age keys and apply it to the cluster as a standard Kubernetes Secret.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Ensure that ArgoCD is already configured to track the correct Git repository and that the &lt;code&gt;example-secret.yaml&lt;/code&gt; file is located in the expected directory.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With that, the setup is complete. You now have a secure and automated workflow for managing secrets through Git and ArgoCD.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this chapter, we've demonstrated how to securely manage Kubernetes secrets using GitOps principles. By integrating KSOPS with ArgoCD and Kustomize, we can encrypt secrets, store them in Git, and have them decrypted and applied to the cluster automatically.&lt;/p&gt;

&lt;p&gt;While this setup offers a solid foundation, it's not the most advanced solution in terms of security. For production environments requiring features like access controls, audit logging, or automatic key rotation, consider tools such as HashiCorp Vault or a Kubernetes-native solution like Sealed Secrets.&lt;/p&gt;

&lt;p&gt;That said, this approach strikes a good balance between simplicity, security, and GitOps compatibility - making it an excellent starting point for secret management in Kubernetes.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://techquests.dev" rel="noopener noreferrer"&gt;https://techquests.dev&lt;/a&gt; on July 2, 2025.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>homelab</category>
      <category>selfhost</category>
      <category>cloud</category>
      <category>platform</category>
    </item>
    <item>
      <title>Home Lab: Chapter 4 — Kubernetes GitOps with ArgoCD</title>
      <dc:creator>Andre Nogueira</dc:creator>
      <pubDate>Tue, 10 Jun 2025 15:08:44 +0000</pubDate>
      <link>https://forem.com/aanogueira/home-lab-chapter-4-kubernetes-gitops-with-argocd-2ml4</link>
      <guid>https://forem.com/aanogueira/home-lab-chapter-4-kubernetes-gitops-with-argocd-2ml4</guid>
      <description>&lt;p&gt;Howdy!&lt;/p&gt;

&lt;p&gt;Ever since I discovered GitOps, I've been in love with the concept. The idea of managing all your infrastructure configuration from a centralized Git repository - and having a tool automatically apply those changes - is incredibly powerful.&lt;/p&gt;

&lt;p&gt;GitOps brings together Infrastructure as Code (IaC) and Continuous Integration/Continuous Deployment (CI/CD) in a seamless, declarative workflow. It's the ideal way to manage a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;In the past, I've used &lt;a href="https://fluxcd.io/" rel="noopener noreferrer"&gt;Flux&lt;/a&gt; to implement GitOps, but I've always been curious about &lt;a href="https://argo-cd.readthedocs.io/en/stable/" rel="noopener noreferrer"&gt;ArgoCD&lt;/a&gt;. After hearing so many good things about it, I decided it was finally time to give it a try - and this project was the perfect opportunity.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is ArgoCD?
&lt;/h2&gt;

&lt;p&gt;ArgoCD is a declarative GitOps continuous delivery tool for Kubernetes. It follows the GitOps pattern of using Git repositories as the source of truth for defining the desired application state.&lt;/p&gt;

&lt;p&gt;It is implemented as a Kubernetes controller that continuously monitors running applications and compares their current live state against the desired target state (as defined in Git). A deployment is considered in sync when the live state matches the target state. If they differ, ArgoCD performs a &lt;code&gt;kubectl apply&lt;/code&gt; to reconcile the live state with the target state.&lt;/p&gt;
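&lt;p&gt;Conceptually, the reconciliation loop is a compare-and-apply step. A minimal sketch in shell (not ArgoCD's actual implementation; the states are stand-in strings):&lt;/p&gt;

```shell
## Desired state as declared in Git vs. live state in the cluster
desired='replicas: 3'
live='replicas: 2'

## Reconcile: apply the desired state only when the two diverge
if [ "$live" != "$desired" ]; then
  echo "OutOfSync - applying desired state"
  live=$desired    # stand-in for 'kubectl apply'
fi

status=$([ "$live" = "$desired" ] && echo Synced || echo OutOfSync)
echo "$status"     # Synced
```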

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;Installing ArgoCD is straightforward using the official Helm chart. First, create the namespace and add the ArgoCD Helm repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create the namespace&lt;/span&gt;
kubectl create namespace argocd

&lt;span class="c"&gt;# Add the repository&lt;/span&gt;
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we're using Cilium as the CNI, we need to exclude Cilium resources from ArgoCD's control. Create a &lt;code&gt;values.yaml&lt;/code&gt; file with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# values.yaml&lt;/span&gt;
&lt;span class="na"&gt;configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;cm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;resource.exclusions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;- apiGroups:&lt;/span&gt;
          &lt;span class="s"&gt;- cilium.io&lt;/span&gt;
        &lt;span class="s"&gt;kinds:&lt;/span&gt;
          &lt;span class="s"&gt;- CiliumIdentity&lt;/span&gt;
        &lt;span class="s"&gt;clusters:&lt;/span&gt;
          &lt;span class="s"&gt;- "*"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This prevents ArgoCD from managing Cilium resources, which could interfere with Cilium's operation. For more details, see &lt;a href="https://docs.cilium.io/en/latest/configuration/argocd-issues/" rel="noopener noreferrer"&gt;Troubleshooting Cilium deployed with Argo CD&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We can also provide a custom admin password for the ArgoCD UI. The password must be a bcrypt hash, which you can generate using &lt;code&gt;htpasswd&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Generate the password hash&lt;/span&gt;
htpasswd &lt;span class="nt"&gt;-nbBC&lt;/span&gt; 10 &lt;span class="s2"&gt;""&lt;/span&gt; &amp;lt;PASSWORD&amp;gt; | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;':\n'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then add it to the &lt;code&gt;values.yaml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# values.yaml&lt;/span&gt;
&lt;span class="na"&gt;configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;argocdServerAdminPassword&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;PASSWORD_HASH&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now install the chart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; argocd argo/argo-cd &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--namespace&lt;/span&gt; argocd &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--values&lt;/span&gt; values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This installs ArgoCD into the &lt;code&gt;argocd&lt;/code&gt; namespace. You can access the UI via port forwarding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward svc/argocd-server &lt;span class="nt"&gt;-n&lt;/span&gt; argocd 8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Add Repository
&lt;/h2&gt;

&lt;p&gt;With ArgoCD installed, the next step is to connect a Git repository containing your Kubernetes manifests. ArgoCD includes an operator and several CRDs (Custom Resource Definitions). We use the &lt;code&gt;Application&lt;/code&gt; CRD to define which repository and path to sync.&lt;/p&gt;

&lt;p&gt;Here's an example &lt;code&gt;init.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# init.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;init&lt;/span&gt;
  &lt;span class="na"&gt;finalizers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;resources-finalizer.argocd.argoproj.io&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;REPO_URL&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HEAD&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;PATH&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;in-cluster&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Description:'&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Entrypoint&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;all&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;homelab&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;apps'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;For more details, see the &lt;a href="https://argo-cd.readthedocs.io/en/latest/user-guide/application-specification" rel="noopener noreferrer"&gt;ArgoCD Application Spec&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If your repository is private, create a secret with credentials and label it for ArgoCD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# github.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;github&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;argocd.argoproj.io/secret-type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repository&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git&lt;/span&gt;
  &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;REPO_URL&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;sshPrivateKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;SSH_PRIVATE_KEY&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the secret and application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; github.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; init.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once applied, you should see the application appear in the ArgoCD UI.&lt;/p&gt;

&lt;p&gt;You can access the UI by port forwarding the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward svc/argocd-server &lt;span class="nt"&gt;-n&lt;/span&gt; argocd 8080:443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then navigate to &lt;code&gt;https://localhost:8080&lt;/code&gt; in your browser (the server presents a self-signed certificate, so you may need to accept a browser warning). The default username is &lt;code&gt;admin&lt;/code&gt;, and the password is the one you set earlier.&lt;/p&gt;
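&lt;p&gt;If you prefer the terminal over the UI, the &lt;code&gt;argocd&lt;/code&gt; CLI (installed separately) can log in through the same port-forward. This is a sketch, assuming you have the CLI installed; &lt;code&gt;--insecure&lt;/code&gt; is needed because the server presents a self-signed certificate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Log in via the port-forwarded service&lt;/span&gt;
argocd login localhost:8080 &lt;span class="nt"&gt;--username&lt;/span&gt; admin &lt;span class="nt"&gt;--insecure&lt;/span&gt;

&lt;span class="c"&gt;# List applications to confirm the session works&lt;/span&gt;
argocd app list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;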

&lt;h2&gt;
  
  
  Adding Applications
&lt;/h2&gt;

&lt;p&gt;With the repository connected, you can start adding applications. Here's an example &lt;code&gt;app.yaml&lt;/code&gt; to deploy a Helm chart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# app.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;APP_NAME&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
  &lt;span class="na"&gt;finalizers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;resources-finalizer.argocd.argoproj.io&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;CHART_NAME&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;CHART_REPO&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;CHART_VERSION&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;helm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;in-cluster&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;NAMESPACE&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;allowEmpty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;syncOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CreateNamespace=true&lt;/span&gt;
  &lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Description:'&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;My first Application with ArgoCD&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Ensure this file is in the &lt;code&gt;&amp;lt;PATH&amp;gt;&lt;/code&gt; specified in your &lt;code&gt;init.yaml&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Push the file to your repository, and ArgoCD will detect it. You can then sync it through the UI and watch your application get deployed - this is the magic of GitOps.&lt;/p&gt;
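&lt;p&gt;Since &lt;code&gt;Application&lt;/code&gt; objects are ordinary Kubernetes resources, you can also follow the sync from the terminal with &lt;code&gt;kubectl&lt;/code&gt; (the application name below is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List all ArgoCD applications with their sync and health status&lt;/span&gt;
kubectl get applications &lt;span class="nt"&gt;-n&lt;/span&gt; argocd

&lt;span class="c"&gt;# Inspect a single application in detail&lt;/span&gt;
kubectl describe application &amp;lt;APP_NAME&amp;gt; &lt;span class="nt"&gt;-n&lt;/span&gt; argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;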

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;ArgoCD is a powerful tool that enables managing Kubernetes clusters declaratively through Git. By treating your Git repository as the source of truth, you gain version control, automation, and a clear audit trail of infrastructure changes.&lt;/p&gt;

&lt;p&gt;It's an excellent way to combine IaC and CI/CD - and I'm excited to explore what more I can do with it!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://techquests.dev" rel="noopener noreferrer"&gt;https://techquests.dev&lt;/a&gt; on May 31, 2025.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>homelab</category>
      <category>selfhost</category>
      <category>cloud</category>
      <category>platform</category>
    </item>
    <item>
      <title>Home Lab: Chapter 3 — Kubernetes Setup</title>
      <dc:creator>Andre Nogueira</dc:creator>
      <pubDate>Thu, 15 May 2025 23:28:48 +0000</pubDate>
      <link>https://forem.com/aanogueira/home-lab-chapter-3-kubernetes-setup-23me</link>
      <guid>https://forem.com/aanogueira/home-lab-chapter-3-kubernetes-setup-23me</guid>
      <description>&lt;p&gt;Howdy!&lt;/p&gt;

&lt;p&gt;In this chapter, I'll walk through the setup of the Kubernetes cluster. For the Operating System (OS) of the nodes, I'll be using &lt;a href="https://talos.dev/" rel="noopener noreferrer"&gt;Talos&lt;/a&gt;. As mentioned earlier, the cluster will consist of three physical machines. Since Kubernetes uses a control-plane/worker model and we only have three nodes, each one will serve as both a control-plane and a worker. This setup allows workloads to be scheduled on all nodes while maintaining control-plane functionality.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Talos?
&lt;/h2&gt;

&lt;p&gt;Talos is a modern, minimalistic operating system designed specifically to run Kubernetes - and nothing else. It is immutable, meaning the OS is read-only and cannot be modified. This immutability improves security, making it more difficult for attackers to alter the system.&lt;/p&gt;

&lt;p&gt;Talos is also built to be managed entirely via Kubernetes, simplifying cluster operations. On a personal note, it's a project I’ve been following for some time, and I’m excited to finally try it out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Talos
&lt;/h2&gt;

&lt;p&gt;Talos provides a command-line tool called &lt;code&gt;talosctl&lt;/code&gt;, which is used to manage and interact with the cluster. Similar to how &lt;code&gt;kubectl&lt;/code&gt; is used for managing Kubernetes resources, &lt;code&gt;talosctl&lt;/code&gt; is used to create, configure, and operate the Talos-based infrastructure itself.&lt;/p&gt;

&lt;p&gt;To set up the cluster, you first need the &lt;code&gt;talosctl&lt;/code&gt; binary. You can download it from the &lt;a href="https://github.com/siderolabs/talos/releases" rel="noopener noreferrer"&gt;Talos releases page&lt;/a&gt; or install it with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Linux&lt;/span&gt;
curl &lt;span class="nt"&gt;-sL&lt;/span&gt; https://talos.dev/install | sh

&lt;span class="c"&gt;# MacOS&lt;/span&gt;
brew &lt;span class="nb"&gt;install &lt;/span&gt;siderolabs/tap/talosctl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I'll go through each step I took to bootstrap my cluster, but in short, the process to install Talos OS involves the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download the Talos image&lt;/li&gt;
&lt;li&gt;Flash the image to a USB drive&lt;/li&gt;
&lt;li&gt;Boot the node from the USB drive&lt;/li&gt;
&lt;li&gt;Install Talos on the nodes&lt;/li&gt;
&lt;li&gt;Reboot the nodes&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Preparing Nodes
&lt;/h3&gt;

&lt;p&gt;Before we begin the configuration, there’s some initial setup we need to complete - specifically, assigning IP addresses to the nodes. Each node will be given a dedicated IP address to make identification and management easier. All nodes will be connected to the DMZ network, and for simplicity, we’ll assign their IPs using DHCP.&lt;/p&gt;

&lt;p&gt;While DHCP typically assigns IP addresses dynamically, we can configure static leases to ensure each node always receives the same IP. This is done by mapping each node’s MAC address to a specific IP address in the DHCP settings.&lt;/p&gt;

&lt;p&gt;To do this, we’ll go into the DHCP configuration on the OPNSense interface and set up static mappings for each node.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node 1: &lt;code&gt;x.x.x.101&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Node 2: &lt;code&gt;x.x.x.102&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Node 3: &lt;code&gt;x.x.x.103&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition, we can restrict the range of IPs in the DHCP pool - for example, from &lt;code&gt;x.x.x.101&lt;/code&gt; to &lt;code&gt;x.x.x.104&lt;/code&gt; - since we also want to reserve an IP for the NAS. This limited range ensures that only a small, predefined set of IP addresses is available for assignment. It adds an extra layer of control by preventing the DHCP server from assigning addresses to unexpected devices that might join the network.&lt;/p&gt;
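&lt;p&gt;I configure the static mappings and pool range through the OPNSense UI, so there's no file to show. Purely for illustration, the equivalent setup on a dnsmasq-based DHCP server would look roughly like this (the MAC addresses are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# dnsmasq equivalent of the OPNSense setup (illustrative only)&lt;/span&gt;
dhcp-range=x.x.x.101,x.x.x.104,12h        &lt;span class="c"&gt;# restrict the pool&lt;/span&gt;
dhcp-host=aa:bb:cc:dd:ee:01,x.x.x.101     &lt;span class="c"&gt;# Node 1&lt;/span&gt;
dhcp-host=aa:bb:cc:dd:ee:02,x.x.x.102     &lt;span class="c"&gt;# Node 2&lt;/span&gt;
dhcp-host=aa:bb:cc:dd:ee:03,x.x.x.103     &lt;span class="c"&gt;# Node 3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;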

&lt;h3&gt;
  
  
  Setting up the Nodes
&lt;/h3&gt;

&lt;p&gt;To set up the nodes, we'll first need to download the Talos image and flash it to a USB drive. The image can be obtained from the &lt;a href="https://github.com/siderolabs/talos/releases" rel="noopener noreferrer"&gt;Talos releases page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once downloaded, you can flash the image to a USB drive using a tool like &lt;code&gt;dd&lt;/code&gt;. Here’s an example command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo dd &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;talos.iso &lt;span class="nv"&gt;of&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/dev/sdX &lt;span class="nv"&gt;bs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4M &lt;span class="nv"&gt;status&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;progress &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sync&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Replace &lt;code&gt;/dev/sdX&lt;/code&gt; with the path to the USB drive. Be careful with this&lt;br&gt;
command as it will overwrite the data on the USB drive.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once the image has been successfully flashed to the USB drive, you can proceed to boot the node from it. To do this, you may need to enter the BIOS or UEFI settings and configure the boot order to prioritize the USB drive.&lt;/p&gt;

&lt;p&gt;After the node boots into Talos maintenance mode, you can install Talos onto the system by applying a machine configuration (we'll generate one in the next section):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--node&lt;/span&gt; x.x.x.x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command installs Talos on the node. Once the installation is complete, simply reboot the machine - it will now boot directly into Talos. Repeat this process for each node in the cluster to complete the installation.&lt;/p&gt;
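&lt;p&gt;Before applying any configuration, it's worth confirming each node is reachable. In maintenance mode the Talos API accepts unauthenticated connections, hence the &lt;code&gt;--insecure&lt;/code&gt; flag. A sketch - listing the disks also helps you pick the correct install disk later:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Query a node in maintenance mode (no client credentials yet)&lt;/span&gt;
talosctl get disks &lt;span class="nt"&gt;--insecure&lt;/span&gt; &lt;span class="nt"&gt;--nodes&lt;/span&gt; x.x.x.x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;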

&lt;h3&gt;
  
  
  Prepare Nodes Config
&lt;/h3&gt;

&lt;p&gt;Once we've downloaded the &lt;code&gt;talosctl&lt;/code&gt; binary, we can use it to generate the initial cluster configuration with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl gen config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command generates a default configuration, which won't fully meet our needs. In the next section, we'll customize it accordingly.&lt;/p&gt;

&lt;p&gt;Running this command produces three files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;talosconfig&lt;/code&gt;: Used by &lt;code&gt;talosctl&lt;/code&gt; to connect to and manage the cluster.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;controlplane.yaml&lt;/code&gt;: Configuration used to bootstrap control plane nodes.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;worker.yaml&lt;/code&gt;: Configuration used to bootstrap worker nodes. We won’t be using this file, since all of our nodes will act as control plane nodes (while still being able to run workloads).&lt;/li&gt;
&lt;/ul&gt;
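&lt;p&gt;Once generated, you can merge &lt;code&gt;talosconfig&lt;/code&gt; into your local client configuration and point it at the nodes. A sketch, using the node IPs assigned earlier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Merge the generated file into ~/.talos/config&lt;/span&gt;
talosctl config merge ./talosconfig

&lt;span class="c"&gt;# Point the client at the control plane endpoints and a default node&lt;/span&gt;
talosctl config endpoint x.x.x.101 x.x.x.102 x.x.x.103
talosctl config node x.x.x.101
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;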

&lt;h4&gt;
  
  
  Patching Nodes Config
&lt;/h4&gt;

&lt;p&gt;To modify the configuration of the control plane nodes (or workers), we could manually edit the generated files. However, a more structured and maintainable approach is to use patches. &lt;code&gt;talosctl&lt;/code&gt; supports applying patches to the configuration, which keeps our changes organized and consistent - especially useful when managing multiple nodes or environments.&lt;/p&gt;

&lt;p&gt;We'll be applying the following modifications using patches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Allow Control Plane Workloads&lt;/strong&gt;: This will enable workloads to be scheduled on the control plane nodes. Since I want the control plane nodes to also act as worker nodes, this configuration is essential to allow scheduling of workloads on them.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# patches/allow-controlplane-workloads.yaml&lt;/span&gt;
&lt;span class="na"&gt;cluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;allowSchedulingOnControlPlanes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Control Plane Node 1&lt;/strong&gt;: This configuration is specific to Node 1. It sets a dedicated hostname, making the node easy to identify.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# patches/control-plane-node-1.yaml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/machine/network/hostname&lt;/span&gt;
  &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;clustarino-k8s-1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Control Plane Node 2&lt;/strong&gt;: Same configuration, but with a different hostname.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# patches/control-plane-node-2.yaml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/machine/network/hostname&lt;/span&gt;
  &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;clustarino-k8s-2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Control Plane Node 3&lt;/strong&gt;: Same configuration, but with a different hostname.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# patches/control-plane-node-3.yaml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/machine/network/hostname&lt;/span&gt;
  &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;clustarino-k8s-3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Interface Names&lt;/strong&gt;: Revert to classic interface names (&lt;code&gt;eth0&lt;/code&gt;, &lt;code&gt;eth1&lt;/code&gt;, ...) so interfaces are easier to reference in the configuration:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# patches/interface-names.yaml&lt;/span&gt;
&lt;span class="na"&gt;machine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;install&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;extraKernelArgs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;net.ifnames=0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;DHCP&lt;/strong&gt;: This enables DHCP on the Ethernet interface - thanks to the previous patch, it can be referenced simply as &lt;code&gt;eth0&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# patches/dhcp.yaml&lt;/span&gt;
&lt;span class="na"&gt;machine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;interfaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;interface&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eth0&lt;/span&gt;
        &lt;span class="na"&gt;dhcp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Disable kube-proxy and CNI&lt;/strong&gt;: This disables kube-proxy and the default CNI that ships with Talos, allowing us to install our own.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# patches/disable-kube-proxy-and-cni.yaml&lt;/span&gt;
&lt;span class="na"&gt;cluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cni&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;none&lt;/span&gt;
  &lt;span class="na"&gt;proxy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;disabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;DNS&lt;/strong&gt;: In the previous chapter, I enabled DNS on the Custom Router, which is the DNS server we’ll be using here.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Although the DNS address should be automatically assigned via DHCP, I’ll hardcode it in the configuration to allow for easy changes in the future.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# patches/dns.yaml&lt;/span&gt;
&lt;span class="na"&gt;machine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nameservers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;x.x.x.x&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;NTP&lt;/strong&gt;: We'll hardcode the NTP server as well. For this one, I'll again use the one running on our Custom Router.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The NTP server should also be automatically assigned via DHCP, but I’ll also hardcode it in the configuration to allow for easy changes in the future.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="c1"&gt;# patches/ntp.yaml&lt;/span&gt;
  &lt;span class="na"&gt;machine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;time&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;disabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;servers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;x.x.x.x&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Disk&lt;/strong&gt;: I’ll be adding an additional disk - a USB stick - which will be used as the main OS disk for Talos.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you have questions about this decision, please refer to &lt;a href="https://dev.to/aanogueira/home-lab-chapter-1-requirements-hardware-software-and-architecture-5225"&gt;Chapter 1&lt;/a&gt; of this series.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# patches/install-disk.yaml&lt;/span&gt;
&lt;span class="na"&gt;machine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;install&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;disk&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/dev/nvme0n1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Metrics Server&lt;/strong&gt;: The Metrics Server is a cluster-wide aggregator of resource usage data, such as CPU and memory consumption. It collects metrics from each node and pod, enabling features like autoscaling and resource monitoring through tools like &lt;code&gt;kubectl top&lt;/code&gt;. To enable it, we need to add the following configuration:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# patches/metrics-server.yaml&lt;/span&gt;
&lt;span class="na"&gt;machine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kubelet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;extraArgs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;rotate-server-certificates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="na"&gt;files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;[metrics]&lt;/span&gt;
          &lt;span class="s"&gt;address = "0.0.0.0:11234"&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/cri/conf.d/metrics.toml&lt;/span&gt;
      &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;create&lt;/span&gt;

&lt;span class="na"&gt;cluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;extraManifests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://raw.githubusercontent.com/alex1989hu/kubelet-serving-cert-approver/main/deploy/standalone-install.yaml&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;By default, the serving certificates the Kubelet presents aren’t trusted by the Metrics Server. To fix this, we enable server certificate rotation and deploy the &lt;code&gt;kubelet-serving-cert-approver&lt;/code&gt; manifest above so the rotated certificates get approved. This matters because the Metrics Server scrapes each node's Kubelet - the agent that manages pods and reports resource usage - over TLS; keeping the Kubelet's certificates rotated and trusted ensures reliable metrics collection and cluster security.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;VIP (Virtual IP)&lt;/strong&gt;: Since I’ll be using multiple nodes, I need a simple way to connect to both the services we’re hosting and the cluster itself. To achieve this, we can configure a VIP (Virtual IP) - an IP address shared across the nodes. This allows clients to reach the cluster through a single, stable address regardless of which node is handling the request.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# patches/vip.yaml&lt;/span&gt;
&lt;span class="na"&gt;machine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;interfaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;interface&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eth0&lt;/span&gt;
        &lt;span class="na"&gt;vip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;x.x.x.105&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Generating secrets
&lt;/h4&gt;

&lt;p&gt;With all these patches defined, the only remaining step is to generate the secrets needed for the control planes to communicate securely with each other, for new nodes to join the cluster, and for clients - like &lt;code&gt;kubectl&lt;/code&gt; - to connect.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;talosctl&lt;/code&gt; includes a utility to generate these secrets. You can create them by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl gen secrets &lt;span class="nt"&gt;--output-file&lt;/span&gt; outputs/secrets.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Generating final Nodes Config
&lt;/h4&gt;

&lt;p&gt;Now that we have all the patches and secrets defined, we can generate the final configuration for the nodes. This is done by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl gen config clustarino https://x.x.x.105:6443 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--with-secrets&lt;/span&gt; outputs/secrets.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--config-patch&lt;/span&gt; @patches/allow-controlplane-workloads.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--config-patch&lt;/span&gt; @patches/dhcp.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--config-patch&lt;/span&gt; @patches/disable-kube-proxy-and-cni.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--config-patch&lt;/span&gt; @patches/install-disk.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--config-patch&lt;/span&gt; @patches/interface-names.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--config-patch&lt;/span&gt; @patches/metrics-server.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--config-patch&lt;/span&gt; @patches/ntp.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--config-patch-control-plane&lt;/span&gt; @patches/vip.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--output&lt;/span&gt; rendered/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will output the same three files mentioned earlier, but now they will include all of our additional configurations.&lt;/p&gt;

&lt;p&gt;Since we need to provide some node-specific configuration as well, we also have to run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl machineconfig patch &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--patch&lt;/span&gt; @patches/control-plane-node-1.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    rendered/controlplane.yaml | yq - &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; nodes/control-plane-node-1.yaml

talosctl machineconfig patch &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--patch&lt;/span&gt; @patches/control-plane-node-2.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    rendered/controlplane.yaml | yq - &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; nodes/control-plane-node-2.yaml

talosctl machineconfig patch &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--patch&lt;/span&gt; @patches/control-plane-node-3.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    rendered/controlplane.yaml | yq - &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; nodes/control-plane-node-3.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
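
&lt;p&gt;For reference, a node-specific patch such as &lt;code&gt;patches/control-plane-node-1.yaml&lt;/code&gt; typically carries only per-node values - for example, a unique hostname (a hypothetical sketch following the Talos machine-config schema):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# patches/control-plane-node-1.yaml (hypothetical example)
machine:
  network:
    hostname: clustarino-k8s-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;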



&lt;h4&gt;
  
  
  Applying config to the Nodes
&lt;/h4&gt;

&lt;p&gt;With all the configuration generated, we can now apply it to each node in the cluster. But first, we need to identify the IP address of each node. Although we’re using DHCP, we can still assign static IP addresses by configuring DHCP leases.&lt;/p&gt;

&lt;p&gt;To do this, navigate to the leases configuration section in OPNsense and set up the following static mappings:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechquests.dev%2F_app%2Fimmutable%2Fassets%2Fleases.0Dj2m1kS.avif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftechquests.dev%2F_app%2Fimmutable%2Fassets%2Fleases.0Dj2m1kS.avif" alt="DMZ leases" width="1146" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This makes it easy to identify each machine. Now we can apply the&lt;br&gt;
configuration by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nodes/control-plane-node-1.yaml &lt;span class="nt"&gt;--node&lt;/span&gt; x.x.x.101 &lt;span class="nt"&gt;--insecure&lt;/span&gt;
talosctl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nodes/control-plane-node-2.yaml &lt;span class="nt"&gt;--node&lt;/span&gt; x.x.x.102 &lt;span class="nt"&gt;--insecure&lt;/span&gt;
talosctl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nodes/control-plane-node-3.yaml &lt;span class="nt"&gt;--node&lt;/span&gt; x.x.x.103 &lt;span class="nt"&gt;--insecure&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Bootstrapping the Cluster
&lt;/h4&gt;

&lt;p&gt;With all the configuration in place, it’s finally time to bootstrap the cluster. First, we need to specify the cluster endpoints by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl config endpoint x.x.x.101 x.x.x.102 x.x.x.103
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And now, the moment we’ve been waiting for - to start the bootstrapping process, run the following command once, against one of the nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl bootstrap &lt;span class="nt"&gt;--node&lt;/span&gt; x.x.x.x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;This command needs to be run against one of the control plane nodes, and only once per cluster. Since all our nodes serve as control plane nodes, you can target any of them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We can monitor the bootstrap progress by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl dashboard &lt;span class="nt"&gt;--node&lt;/span&gt; x.x.x.x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will open a dashboard where you can view real-time logs and track the status of the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting to the Cluster
&lt;/h2&gt;

&lt;p&gt;With the cluster bootstrapped, we can now connect to it. As mentioned earlier, generating the config also produced a file named &lt;code&gt;talosconfig&lt;/code&gt;, which provides &lt;code&gt;talosctl&lt;/code&gt; with the necessary context to interact with our newly created cluster.&lt;/p&gt;

&lt;p&gt;You can place this file in the default Talos config location (&lt;code&gt;~/.talos/config&lt;/code&gt;), or alternatively, set the &lt;code&gt;TALOSCONFIG&lt;/code&gt; environment variable to point to its path. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# In our case, it will be in the `rendered` folder&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TALOSCONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;./rendered/talosconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, to generate a kubeconfig file for accessing the Kubernetes cluster, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl kubeconfig &lt;span class="nt"&gt;--node&lt;/span&gt; x.x.x.x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To validate the connection, we can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We should see the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME               STATUS      ROLES           AGE     VERSION
clustarino-k8s-1   Not Ready   control-plane   2m14s   v1.30.1
clustarino-k8s-2   Not Ready   control-plane   2m16s   v1.30.1
clustarino-k8s-3   Not Ready   control-plane   2m2s    v1.30.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;The nodes will be in the &lt;code&gt;NotReady&lt;/code&gt; state until we install a CNI (Container Network Interface).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Adding CNI
&lt;/h2&gt;

&lt;p&gt;Since we disabled the default CNI that comes with Talos, we need to install our own. For this, we'll be using Cilium - open-source software that provides transparent, secure networking between application services deployed on container platforms like Docker and Kubernetes.&lt;/p&gt;

&lt;p&gt;The main reason I chose Cilium is simply that I wanted to try it out. I’ve been using GKE in my day-to-day work and wanted to explore Cilium’s capabilities in a more controlled environment.&lt;/p&gt;

&lt;p&gt;To install Cilium, we can run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Add the repository&lt;/span&gt;
helm repo add cilium https://helm.cilium.io/
helm repo update

&lt;span class="c"&gt;# Install the chart&lt;/span&gt;
helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; cilium cilium/cilium &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--namespace&lt;/span&gt; kube-system &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; ipam.mode&lt;span class="o"&gt;=&lt;/span&gt;kubernetes &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; hostFirewall.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; hubble.relay.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; hubble.ui.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;kubeProxyReplacement&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; securityContext.capabilities.ciliumAgent&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{CHOWN,KILL,NET_ADMIN,NET_RAW,IPC_LOCK,SYS_ADMIN,SYS_RESOURCE,DAC_OVERRIDE,FOWNER,SETGID,SETUID}"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; securityContext.capabilities.cleanCiliumState&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{NET_ADMIN,SYS_ADMIN,SYS_RESOURCE}"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; cgroup.autoMount.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; cgroup.hostRoot&lt;span class="o"&gt;=&lt;/span&gt;/sys/fs/cgroup &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;k8sServiceHost&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;localhost &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;k8sServicePort&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;7445
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I chose to use Helm to install Cilium because, in my opinion, it’s the easiest method and is officially maintained by the Cilium team. Helm also allows us to deploy Hubble, a powerful network observability tool built into Cilium.&lt;/p&gt;

&lt;p&gt;The command above installs Cilium with the following configuration options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ipam.mode=kubernetes&lt;/code&gt;: Enables Cilium to use Kubernetes’ IP Address Management (IPAM) for assigning pod IPs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;hostFirewall.enabled=true&lt;/code&gt;: Activates the host firewall within Cilium for enhanced security.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;hubble.relay.enabled=true&lt;/code&gt;: Enables the Hubble relay component.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;hubble.ui.enabled=true&lt;/code&gt;: Enables the Hubble UI for network observability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;kubeProxyReplacement=true&lt;/code&gt;: Replaces the default kube-proxy with Cilium’s implementation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;securityContext.capabilities.ciliumAgent&lt;/code&gt;: Sets specific capabilities for the Cilium agent.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;securityContext.capabilities.cleanCiliumState&lt;/code&gt;: Sets capabilities to clean up Cilium state when needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;cgroup.autoMount.enabled=false&lt;/code&gt;: Disables automatic mounting of cgroups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;cgroup.hostRoot=/sys/fs/cgroup&lt;/code&gt;: Specifies the host root directory for cgroups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;k8sServiceHost=localhost&lt;/code&gt;: Sets the Kubernetes API server host to localhost.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;k8sServicePort=7445&lt;/code&gt;: Sets the Kubernetes API server port.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
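
&lt;p&gt;If you prefer to keep these options under version control instead of passing a long list of &lt;code&gt;--set&lt;/code&gt; flags, the same configuration can be expressed as a values file and installed with &lt;code&gt;helm upgrade --install cilium cilium/cilium -n kube-system -f values.yaml&lt;/code&gt; (a direct translation of the flags above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# values.yaml - equivalent to the --set flags above
ipam:
  mode: kubernetes
hostFirewall:
  enabled: true
hubble:
  relay:
    enabled: true
  ui:
    enabled: true
kubeProxyReplacement: true
securityContext:
  capabilities:
    ciliumAgent: [CHOWN, KILL, NET_ADMIN, NET_RAW, IPC_LOCK, SYS_ADMIN, SYS_RESOURCE, DAC_OVERRIDE, FOWNER, SETGID, SETUID]
    cleanCiliumState: [NET_ADMIN, SYS_ADMIN, SYS_RESOURCE]
cgroup:
  autoMount:
    enabled: false
  hostRoot: /sys/fs/cgroup
k8sServiceHost: localhost
k8sServicePort: 7445
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;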

&lt;p&gt;Once the installation completes, you can verify the status of the Cilium pods by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Network Policies
&lt;/h2&gt;

&lt;p&gt;Now that Cilium is installed, we can add some network policies to control and allow traffic between the nodes. To do this, create a &lt;code&gt;network-policies.yaml&lt;/code&gt; file with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cilium.io/v2&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CiliumClusterwideNetworkPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;host-fw-control-plane&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;control-plane&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;specific&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;access&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;rules.'&lt;/span&gt;
  &lt;span class="na"&gt;nodeSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;node-role.kubernetes.io/control-plane&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;
  &lt;span class="na"&gt;ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Allow access to kube api from anywhere.&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;fromEntities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;world&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cluster&lt;/span&gt;
      &lt;span class="na"&gt;toPorts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;6443'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TCP'&lt;/span&gt;

    &lt;span class="c1"&gt;# Allow access to talos from anywhere.&lt;/span&gt;
    &lt;span class="c1"&gt;# https://www.talos.dev/v1.10/learn-more/talos-network-connectivity/&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;fromEntities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;world&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cluster&lt;/span&gt;
      &lt;span class="na"&gt;toPorts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;50000'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TCP'&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;50001'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TCP'&lt;/span&gt;

    &lt;span class="c1"&gt;# Allow kube-proxy-replacement from kube-apiserver.&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;fromEntities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;kube-apiserver&lt;/span&gt;
      &lt;span class="na"&gt;toPorts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;10250'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TCP'&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;4244'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TCP'&lt;/span&gt;

    &lt;span class="c1"&gt;# Allow access from hubble-relay to hubble-peer (running on the node).&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;fromEndpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;k8s-app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hubble-relay&lt;/span&gt;
      &lt;span class="na"&gt;toPorts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;4244'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TCP'&lt;/span&gt;

      &lt;span class="c1"&gt;# Allow metrics-server to scrape.&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;fromEndpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;k8s-app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metrics-server&lt;/span&gt;
      &lt;span class="na"&gt;toPorts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;10250'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TCP'&lt;/span&gt;

    &lt;span class="c1"&gt;# Allow ICMP Ping from/to anywhere.&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;icmps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;fields&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8&lt;/span&gt;
              &lt;span class="na"&gt;family&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IPv4&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;128&lt;/span&gt;
              &lt;span class="na"&gt;family&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IPv6&lt;/span&gt;

    &lt;span class="c1"&gt;# Allow cilium tunnel/health checks from other nodes.&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;fromEntities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;remote-node&lt;/span&gt;
      &lt;span class="na"&gt;toPorts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;8472'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;UDP'&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;4240'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TCP'&lt;/span&gt;

    &lt;span class="c1"&gt;# Allow access to etcd and api from other nodes.&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;fromEntities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;remote-node&lt;/span&gt;
      &lt;span class="na"&gt;toPorts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2379'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TCP'&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2380'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TCP'&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;51871'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;UDP'&lt;/span&gt;

    &lt;span class="c1"&gt;# Allow access to etcd and api from unconfigured nodes.&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;fromCIDR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;x.x.x.101/32&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;x.x.x.102/32&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;x.x.x.103/32&lt;/span&gt;
      &lt;span class="na"&gt;toPorts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2379'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TCP'&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2380'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TCP'&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;51871'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;UDP'&lt;/span&gt;

    &lt;span class="c1"&gt;# Allow HTTP and HTTPS access from anywhere.&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;fromEntities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;world&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cluster&lt;/span&gt;
      &lt;span class="na"&gt;toPorts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;80'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TCP'&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;443'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TCP'&lt;/span&gt;

    &lt;span class="c1"&gt;# Allow access from inside the cluster to the admission controller.&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;fromEntities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cluster&lt;/span&gt;
      &lt;span class="na"&gt;toPorts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;8443'&lt;/span&gt;
              &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TCP'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allow access to the Kubernetes API server from anywhere.&lt;/li&gt;
&lt;li&gt;Allow access to Talos OS management ports from anywhere.&lt;/li&gt;
&lt;li&gt;Allow kube-apiserver to communicate with kubelet and Cilium agent (kube-proxy replacement).&lt;/li&gt;
&lt;li&gt;Allow Hubble relay pods to communicate with Hubble peers running on the nodes.&lt;/li&gt;
&lt;li&gt;Allow metrics-server to scrape kubelet metrics for monitoring.&lt;/li&gt;
&lt;li&gt;Allow ICMP Echo Request (ping) from/to anywhere for network diagnostics.&lt;/li&gt;
&lt;li&gt;Allow Cilium overlay networking (VXLAN/UDP tunnels) and health checks between cluster nodes.&lt;/li&gt;
&lt;li&gt;Allow etcd communication and API access between cluster nodes.&lt;/li&gt;
&lt;li&gt;Allow etcd and API access from specific unconfigured node IP addresses.&lt;/li&gt;
&lt;li&gt;Allow public HTTP (port 80) and HTTPS (port 443) access to services on the nodes.&lt;/li&gt;
&lt;li&gt;Allow intra-cluster traffic to access the Kubernetes admission controller on port 8443.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can apply this configuration by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; network-policies.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will apply the network policies to the cluster. After applying them, we can check the status of the nodes with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything is set up correctly, you should see the nodes in the &lt;code&gt;Ready&lt;/code&gt; state, indicating they are healthy and fully functional within the cluster.&lt;/p&gt;
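&lt;p&gt;To double-check that the policies themselves were picked up, we can also list them directly. A quick sketch, assuming the policies were created as Cilium's CRD-based resources (adjust the resource kind to whatever your manifests actually use; on newer Cilium versions the in-pod CLI is &lt;code&gt;cilium-dbg&lt;/code&gt; rather than &lt;code&gt;cilium&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# List cluster-wide Cilium policies
kubectl get ciliumclusterwidenetworkpolicies

# List namespaced Cilium policies across all namespaces
kubectl get ciliumnetworkpolicies --all-namespaces

# Ask a Cilium agent what it is actually enforcing
kubectl -n kube-system exec ds/cilium -- cilium policy get
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;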

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This concludes the setup of the Kubernetes cluster. We have successfully bootstrapped the cluster and installed Cilium as the CNI. With this, the base setup of the Kubernetes cluster is complete.&lt;/p&gt;

&lt;p&gt;While the cluster is now up and running, there are still a few components missing that will allow us to expose services outside the cluster. In the next chapter, we will walk through the setup of the Ingress Controller, which will enable external access to the services hosted within the cluster.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://techquests.dev" rel="noopener noreferrer"&gt;https://techquests.dev&lt;/a&gt; on May 15, 2025.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>homelab</category>
      <category>selfhost</category>
      <category>cloud</category>
      <category>platform</category>
    </item>
    <item>
      <title>Home Lab: Chapter 2 — Base Foundations</title>
      <dc:creator>Andre Nogueira</dc:creator>
      <pubDate>Fri, 25 Apr 2025 15:52:28 +0000</pubDate>
      <link>https://forem.com/aanogueira/home-lab-chapter-2-base-foundations-pid</link>
      <guid>https://forem.com/aanogueira/home-lab-chapter-2-base-foundations-pid</guid>
      <description>&lt;p&gt;Howdy!&lt;/p&gt;

&lt;p&gt;Everything needs a base to be built on top of. Nothing can be done out of the blue. Take a house, for example: it needs a blueprint, and then a foundation before anything can be built on top. That's what I'll be tackling in this part: our baseline infrastructure, the network layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Network
&lt;/h2&gt;

&lt;p&gt;This chapter focuses on the actual network setup of my homelab, specifically excluding Kubernetes' internal networks or any network layers created later. This is the network that will enable me to connect to my homelab from my desktop and host the services I desire. My network consists of two main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Wi-Fi Network&lt;/strong&gt;: This is the primary network of my home, provided by my ISP's router, connecting all of my domestic devices. Due to the location of my homelab, I need to extend this network to reach my devices' location.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Custom Router&lt;/strong&gt;: I’m configuring this router to have four network zones: WAN, LAN, DMZ, and VPN. This setup gives me full control over my network and allows me to manage the traffic between them.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Wi-Fi Network
&lt;/h2&gt;

&lt;p&gt;As I mentioned in the previous chapter, I need to extend the network from my router to the location of my devices, while also augmenting the Wi-Fi coverage throughout the rest of the house.&lt;/p&gt;

&lt;p&gt;This was a straightforward process: plug and play. I simply connected the main unit to a power outlet, inserted an Ethernet cable from my home router into it, and placed one of the receivers near my custom router. Then, I downloaded the &lt;a href="https://www.devolo.global/home-network-app" rel="noopener noreferrer"&gt;Devolo application&lt;/a&gt; for additional configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Enabled 5 GHz Wi-Fi only, as all my devices can connect to this band, eliminating the need for a 2.4 GHz network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set the same SSID (Service Set Identifier, the name of my network) as my home network to extend coverage throughout the house.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Renamed the devices for easier identification:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Powerline 1 -&amp;gt; PL-Router&lt;/li&gt;
&lt;li&gt;Powerline 2 -&amp;gt; PL-Office&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Custom Router
&lt;/h2&gt;

&lt;p&gt;OPNsense was the chosen OS (Operating System) for my router. With four ports on my router, I’ll dedicate each physical interface to its own network:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LAN port 1: &lt;strong&gt;WAN&lt;/strong&gt; - facilitating my router's internet connection.&lt;/li&gt;
&lt;li&gt;LAN port 2: &lt;strong&gt;LAN&lt;/strong&gt; - enabling my desktop's connection to the router.&lt;/li&gt;
&lt;li&gt;LAN port 3: &lt;strong&gt;DMZ&lt;/strong&gt; - hosting all my primary home services.&lt;/li&gt;
&lt;li&gt;LAN port 4: &lt;strong&gt;VPN&lt;/strong&gt; - housing the VPN server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup achieves physical network separation, each with its own configuration.&lt;/p&gt;

&lt;p&gt;The installation is straightforward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download ISO of OPNsense from
&lt;a href="https://opnsense.org/download/" rel="noopener noreferrer"&gt;OPNSense Website&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Create a &lt;em&gt;bootable&lt;/em&gt; image on a thumb drive&lt;/li&gt;
&lt;li&gt;Boot machine from the thumb drive&lt;/li&gt;
&lt;li&gt;Follow the installation wizard

&lt;ul&gt;
&lt;li&gt;Assigned the initial WAN network (PL-Office -&amp;gt; WAN Interface)&lt;/li&gt;
&lt;li&gt;Assigned the initial LAN network (Desktop -&amp;gt; LAN Interface) - this will also serve as the management interface, exposing the OPNsense web dashboard.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Once installed, I accessed the OPNsense web dashboard from my desktop using the machine's IP address - later, a DNS (Domain Name System) record will be created to avoid memorizing all IP addresses. I then created additional interfaces by assigning &lt;strong&gt;LAN&lt;/strong&gt; ports 3 and 4 to &lt;strong&gt;DMZ&lt;/strong&gt; and &lt;strong&gt;VPN&lt;/strong&gt;, respectively.&lt;/p&gt;

&lt;h3&gt;
  
  
  WAN
&lt;/h3&gt;

&lt;p&gt;WAN (Wide Area Network) typically refers to the interface used for internet access. In the case of my custom router, it serves exactly that role - providing connectivity between the router (and all connected devices) and the internet. The network setup involves a direct connection between just two devices: the router and a powerline adapter. However, since the adapter operates in bridge mode and uses the same IP range as the router, it effectively allows seamless communication between the router and any device on my Wi-Fi network.&lt;/p&gt;

&lt;h3&gt;
  
  
  LAN
&lt;/h3&gt;

&lt;p&gt;A LAN (Local Area Network) typically describes a network within a home or organization. It is generally private, in contrast to the public WAN. Here, the LAN will initially connect to a single device. Though I considered naming this interface Management, reflecting its current purpose (connecting my desktop to the router, and later accessing the DMZ), I chose LAN as a broader name that accommodates future device additions.&lt;/p&gt;

&lt;h3&gt;
  
  
  DMZ
&lt;/h3&gt;

&lt;p&gt;A DMZ (Demilitarized Zone) network sits between the LAN and WAN, used to host services requiring internet access without exposing the LAN. While my homelab’s DMZ aims to host internet-accessible services, it will also allow access from the LAN, maintaining separation. Hence, I opted to create a DMZ network.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Although I'll maintain LAN access from the DMZ, devices on the Wi-Fi network won’t reach the DMZ&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This network will host my primary home services and core homelab infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  VPN
&lt;/h3&gt;

&lt;p&gt;A VPN (Virtual Private Network) allows secure access to private networks through public internet connections. In my setup, I'm using a Raspberry Pi as the VPN server to enable global access to my homelab, as long as I have internet connectivity - obviously.&lt;/p&gt;

&lt;p&gt;Since I'll be using only one device on this network, I only require two IP addresses - one for the Wi-Fi interface and another for the Ethernet interface, which connects to the VPN network. All other devices will access the VPN remotely, beginning with the Wi-Fi interface and then transitioning to the Ethernet interface for VPN network connections.&lt;/p&gt;
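&lt;p&gt;Since the Pi straddles two interfaces, traffic arriving over the VPN has to be forwarded out the Ethernet side. A minimal sketch of what enabling that typically looks like on the Pi (file name is my choice, not a requirement):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Enable IPv4 forwarding persistently on the Pi
echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-forwarding.conf

# Reload sysctl settings so it takes effect immediately
sudo sysctl --system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;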

&lt;h2&gt;
  
  
  DNS
&lt;/h2&gt;

&lt;p&gt;To simplify service access, I’ll create a DNS server for name resolution, allowing me to access my services by using the defined names instead of their IP addresses.&lt;/p&gt;

&lt;p&gt;I've chosen Unbound as my DNS server because it’s easy to use and allows me to create the DNS records I need. It also works well with OPNsense, making DNS record management simpler. I'll use this DNS as the resolver for all my devices, letting me access my services by their names and resolving external domains too.&lt;/p&gt;

&lt;p&gt;The installation is straightforward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access the OPNsense web dashboard.&lt;/li&gt;
&lt;li&gt;Navigate to System -&amp;gt; Settings -&amp;gt; General.&lt;/li&gt;
&lt;li&gt;Find the DNS section and enable the DNS Resolver.&lt;/li&gt;
&lt;li&gt;Save changes and add DNS records.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is what the final configuration looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Few21e6widsfjmw5kuxmi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Few21e6widsfjmw5kuxmi.png" alt="DNS Config" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;
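&lt;p&gt;Once the records are in place, resolution can be verified from any device pointed at the new resolver. A quick check with &lt;code&gt;dig&lt;/code&gt;, where the resolver IP and hostname below are placeholders for your own values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Query the OPNsense Unbound resolver directly for a local record
dig @192.168.1.1 nas.home.lab +short

# External domains should resolve through it as well
dig @192.168.1.1 opnsense.org +short
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;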

&lt;h2&gt;
  
  
  Connecting Networks
&lt;/h2&gt;

&lt;p&gt;With all networks established, the firewall configuration needs adjusting to allow communication between the networks I want to cross-connect:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LAN -&amp;gt; DMZ&lt;/li&gt;
&lt;li&gt;LAN -&amp;gt; WAN&lt;/li&gt;
&lt;li&gt;VPN -&amp;gt; DMZ&lt;/li&gt;
&lt;li&gt;VPN -&amp;gt; WAN&lt;/li&gt;
&lt;li&gt;DMZ -&amp;gt; LAN&lt;/li&gt;
&lt;li&gt;DMZ -&amp;gt; WAN&lt;/li&gt;
&lt;li&gt;DMZ -&amp;gt; VPN&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Essentially, the goal is to allow LAN connections to reach the internet and the DMZ for local infrastructure access. The VPN requires a similar configuration. Bidirectional communication must be enabled for these connections.&lt;/p&gt;
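&lt;p&gt;In OPNsense these rules are created per-interface in the web UI (Firewall -&amp;gt; Rules), but conceptually they boil down to stateful pass rules. An illustrative sketch in pf syntax, the packet filter OPNsense runs underneath (the interface names and subnets here are made up, not my actual addressing):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical network macros
lan_net = "192.168.1.0/24"
dmz_net = "192.168.2.0/24"
vpn_net = "192.168.3.0/24"

# Allow LAN hosts to reach the DMZ and the internet
pass in on igb1 from $lan_net to $dmz_net keep state
pass in on igb1 from $lan_net to any keep state

# Allow VPN clients the same reach
pass in on igb3 from $vpn_net to $dmz_net keep state
pass in on igb3 from $vpn_net to any keep state
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because the rules are stateful, replies to established connections flow back automatically without mirror rules.&lt;/p&gt;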

&lt;blockquote&gt;
&lt;p&gt;This configuration might be too broad, but it will be enough for an initial setup. I may revisit it later.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This concludes the foundational infrastructure setup - My Homelab's Network. I covered the various networks established, outlined requirements, and detailed configurations. While broad in scope, it serves as an excellent starting point for future enhancements.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://techquests.dev" rel="noopener noreferrer"&gt;https://techquests.dev&lt;/a&gt; on April 25, 2025.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>homelab</category>
      <category>selfhost</category>
      <category>cloud</category>
      <category>platform</category>
    </item>
    <item>
      <title>Home Lab: Chapter 1 — Requirements, Hardware, Software and Architecture</title>
      <dc:creator>Andre Nogueira</dc:creator>
      <pubDate>Sun, 13 Apr 2025 23:27:04 +0000</pubDate>
      <link>https://forem.com/aanogueira/home-lab-chapter-1-requirements-hardware-software-and-architecture-5225</link>
      <guid>https://forem.com/aanogueira/home-lab-chapter-1-requirements-hardware-software-and-architecture-5225</guid>
      <description>&lt;p&gt;Howdy!&lt;/p&gt;

&lt;p&gt;Welcome to the first quest I'll be tackling! This one might take a while to complete, as it will serve as the foundation for everything that follows - my &lt;strong&gt;Home Lab&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I've had the idea of creating a Home Lab for quite some time now - a place to test things, learn new technologies, and essentially just have some fun.&lt;/p&gt;

&lt;p&gt;While exploring similar projects, I noticed a common theme: most people build a Home Lab to self-host services they use - NAS servers, media servers, VPNs, etc. While I do plan to host some of those too, my main focus is on building my own applications and services, not just hosting them. I want to recreate what you'd typically find on a cloud provider or in a software development company - a platform that supports development.&lt;/p&gt;

&lt;p&gt;My goal is to create a developer-focused platform where building, testing, deploying, and monitoring applications is the core.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This will be extremely opinionated, based on my experience. I’ll be using tools I’m familiar with, as well as exploring others I want to learn.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;Let's start with some requirements I've gathered for my setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Orchestration
&lt;/h3&gt;

&lt;p&gt;For this project, I want to use Kubernetes as the main orchestrator. I've been using Kubernetes for quite a while now and, while Docker might be a better approach for home use, I want my setup to be fairly close to what you would see in a real production environment. After all, I expect to host some of my own projects, one of them being this blog.&lt;/p&gt;

&lt;p&gt;So, if you're reading this, it means that I've already set up my Home Lab, and some of the requirements that we'll be mentioning have already been met.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Many have given up on self-hosting Kubernetes for something simpler. I might follow that same path, as warned by some of my colleagues. But until then, let's continue.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As I plan to use Kubernetes, I'll need at least 3 nodes. After some research, I decided to use 3 mini computers since I wanted to run the nodes on actual hardware instead of virtual machines. Initially, I considered using Raspberry Pis, but many of the tools I plan on using do not support the &lt;code&gt;arm64&lt;/code&gt; architecture. Additionally, most enterprise applications run on &lt;code&gt;amd64&lt;/code&gt;, so it made sense to stick with that to keep my setup as close to real-world scenarios as possible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Network
&lt;/h3&gt;

&lt;p&gt;For the internal network, I wanted to separate it from my home network. That meant I needed a router. Not just to create separation, but also to explore networking configurations and have a physical distinction between &lt;em&gt;home&lt;/em&gt; and &lt;em&gt;development&lt;/em&gt; environments.&lt;/p&gt;

&lt;p&gt;To achieve this setup, I chose a mini computer equipped with multiple &lt;strong&gt;Ethernet&lt;/strong&gt; ports to function as a router. This device will handle the connection between my home router and all devices within the DMZ (Demilitarized Zone) network. Implementing a &lt;strong&gt;DMZ&lt;/strong&gt; provides an additional layer of security by isolating internal resources from direct external access.&lt;/p&gt;

&lt;p&gt;To make future expansion easier, I added a switch to the internal network. This will allow me to connect more nodes or devices to the DMZ network as needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Storage
&lt;/h3&gt;

&lt;p&gt;Storage is a crucial requirement, so I added a NAS server. It stores backups for stateful applications (e.g., databases), my personal data, and serves as a self-hosted alternative to services like iCloud, Google Drive, and Dropbox.&lt;/p&gt;

&lt;p&gt;Given the above, here is the list of hardware I’ll be using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 Mini computer with additional LAN ports (router)&lt;/li&gt;
&lt;li&gt;1 Switch (for the DMZ network)&lt;/li&gt;
&lt;li&gt;3 Mini computers (K8s nodes)&lt;/li&gt;
&lt;li&gt;1 Computer (NAS)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Hardware
&lt;/h2&gt;

&lt;p&gt;With the plan in place, it was time to start doing some research.&lt;/p&gt;

&lt;h3&gt;
  
  
  Router
&lt;/h3&gt;

&lt;p&gt;As mentioned, I wanted a mini computer with extra ports for the router. I've worked with Cisco routers and pfSense in the past, but for this setup, I wanted to try something new. After some research, I went with OPNsense - an open-source firewall/router software that's a fork of pfSense.&lt;/p&gt;

&lt;p&gt;I found a great mini PC on AliExpress:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;4 LAN ports&lt;/li&gt;
&lt;li&gt;Intel i3 N305 (8 cores / 8 threads)&lt;/li&gt;
&lt;li&gt;32GB RAM&lt;/li&gt;
&lt;li&gt;1TB NVMe SSD&lt;/li&gt;
&lt;li&gt;Passive cooling&lt;/li&gt;
&lt;li&gt;Solid build quality&lt;/li&gt;
&lt;li&gt;~450€&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Powerline
&lt;/h3&gt;

&lt;p&gt;My office is far from the router, and I wanted a wired connection. Drilling through walls and running long cables weren’t appealing, and I didn’t want to use Wi-Fi extenders. That left me with Powerline adapters.&lt;/p&gt;

&lt;p&gt;Powerline devices use the electrical wiring to transmit network signals. I chose the Devolo Magic 2, which extends Wi-Fi coverage and also provides Ethernet ports. While the speed dropped quite a bit, from &lt;code&gt;~500 Mbps&lt;/code&gt; to &lt;code&gt;~150 Mbps&lt;/code&gt;, this is still acceptable for my needs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Switch
&lt;/h3&gt;

&lt;p&gt;For the switch, I reused a 5-port model I had lying around (&lt;a href="https://www.amazon.com/TP-Link-Ethernet-Splitter-Unmanaged-TL-SF1005D/dp/B000FNFSPY?th=1" rel="noopener noreferrer"&gt;TP-Link TL-SF1005D&lt;/a&gt;). It’s quiet, compact, and fits my current requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Nodes
&lt;/h3&gt;

&lt;p&gt;I wanted the nodes to be quiet, compact, and performant. I found 3 mini PCs with the following specs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ryzen 7 5700U (8 cores / 16 threads)&lt;/li&gt;
&lt;li&gt;16GB RAM (upgradable to 32GB)&lt;/li&gt;
&lt;li&gt;512GB SSD&lt;/li&gt;
&lt;li&gt;Passive cooling&lt;/li&gt;
&lt;li&gt;Good build quality&lt;/li&gt;
&lt;li&gt;~250€&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They came with two M.2 slots, and I wanted to separate the OS from the data. So, I added three Kingston DataTraveler Exodia thumb drives (one per node) to boot the OS while keeping the M.2 SSDs for local node storage.&lt;/p&gt;

&lt;h3&gt;
  
  
  NAS
&lt;/h3&gt;

&lt;p&gt;Long-term storage has always been something I needed. If you just want reliability, I’d recommend a cloud provider like Google Drive or a prebuilt NAS from Synology or QNAP. But I’m here to learn, so I went DIY.&lt;/p&gt;

&lt;p&gt;I found a second-hand custom-built NAS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;5 bays (no drives included)&lt;/li&gt;
&lt;li&gt;Intel N5000 (4 cores / 4 threads)&lt;/li&gt;
&lt;li&gt;64GB ECC RAM&lt;/li&gt;
&lt;li&gt;Passive cooling&lt;/li&gt;
&lt;li&gt;Good build quality&lt;/li&gt;
&lt;li&gt;400€&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Software
&lt;/h2&gt;

&lt;p&gt;With the hardware ready, it was time to look at the software stack.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wireguard
&lt;/h3&gt;

&lt;p&gt;I wanted to access the DMZ network remotely, just like I would with a cloud provider. I chose WireGuard as the VPN solution. It’s lightweight, fast, and secure.&lt;/p&gt;

&lt;p&gt;I'll be hosting it on a &lt;a href="https://www.raspberrypi.com/products/raspberry-pi-4-model-b/" rel="noopener noreferrer"&gt;Raspberry Pi 4B&lt;/a&gt;.&lt;/p&gt;
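&lt;p&gt;For reference, a WireGuard server boils down to one small config file plus a key pair. A minimal sketch of what &lt;code&gt;/etc/wireguard/wg0.conf&lt;/code&gt; might look like on the Pi (the keys, addresses, and port below are placeholders, not my real values):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Interface]
# Server side of the VPN subnet
Address = 10.10.0.1/24
ListenPort = 51820
PrivateKey = SERVER_PRIVATE_KEY_HERE

[Peer]
# A roaming client (laptop / phone)
PublicKey = CLIENT_PUBLIC_KEY_HERE
AllowedIPs = 10.10.0.2/32
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The interface comes up with &lt;code&gt;wg-quick up wg0&lt;/code&gt;, and &lt;code&gt;systemctl enable wg-quick@wg0&lt;/code&gt; makes it survive reboots.&lt;/p&gt;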

&lt;h3&gt;
  
  
  OPNSense
&lt;/h3&gt;

&lt;p&gt;OPNsense is not just a router - it’s a flexible firewall platform with plugin support. I’ll use it alongside Unbound DNS to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Assign hostnames to devices&lt;/li&gt;
&lt;li&gt;Set up a local DNS server&lt;/li&gt;
&lt;li&gt;Define static IPs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup offers much more flexibility than the router provided by my ISP.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes
&lt;/h3&gt;

&lt;p&gt;Kubernetes is the orchestrator of choice. To run it, I'm using Talos - a minimal OS built specifically for Kubernetes. Talos aligns perfectly with my goals: it's secure, immutable, and easy to manage.&lt;/p&gt;

&lt;p&gt;Kubernetes alone isn’t enough, so I’ll be adding these tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Networking&lt;/strong&gt;: Cilium for pod-to-pod communication and network policies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ingress&lt;/strong&gt;: NGINX Ingress Controller for external access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage&lt;/strong&gt;: Ceph for persistent volumes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Certificates&lt;/strong&gt;: Cert Manager to automate certificate management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metrics &amp;amp; Logs&lt;/strong&gt;: Talos has built-in metrics support, which I’ll use for monitoring.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD&lt;/strong&gt;: ArgoCD for GitOps-based deployments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This will be the base stack. More tools will be added as the platform evolves.&lt;/p&gt;
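&lt;p&gt;As a preview of what running Talos looks like in practice, everything is driven by &lt;code&gt;talosctl&lt;/code&gt;. A rough sketch of the bootstrap flow, with the node and endpoint IPs as placeholders (the real walkthrough comes in a later chapter):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Generate machine configs for the cluster
talosctl gen config homelab https://192.168.2.10:6443

# Apply the config to a node booted from the Talos installer
talosctl apply-config --insecure --nodes 192.168.2.11 --file controlplane.yaml

# Bootstrap etcd on the first control-plane node
talosctl bootstrap --nodes 192.168.2.11 --endpoints 192.168.2.11

# Fetch a kubeconfig once the cluster is up
talosctl kubeconfig --nodes 192.168.2.11 --endpoints 192.168.2.11
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;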

&lt;h2&gt;
  
  
  Homelab Overview
&lt;/h2&gt;

&lt;p&gt;To better understand the setup, here is a diagram of the components we'll be using:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jf4u03p7dbmpj0wkyoy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jf4u03p7dbmpj0wkyoy.png" alt="Home Lab diagram" width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;Here’s a rough overview of the network setup (read right to left):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ISP&lt;/strong&gt; connects to the &lt;strong&gt;Home Router&lt;/strong&gt; (standard consumer-grade router).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Powerline Adapter&lt;/strong&gt; connects the &lt;strong&gt;Home Router&lt;/strong&gt; to the &lt;strong&gt;Custom Router&lt;/strong&gt; in my office.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Router&lt;/strong&gt; manages the &lt;strong&gt;DMZ network&lt;/strong&gt;, where all critical infrastructure lives.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Raspberry Pi&lt;/strong&gt; runs the &lt;strong&gt;VPN server&lt;/strong&gt; to allow remote access to the &lt;strong&gt;DMZ&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Network Breakdown
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Wi-Fi Network&lt;/strong&gt;: Main home network. Used by everyday devices not part of the DMZ.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WAN Network&lt;/strong&gt;: Connects the &lt;strong&gt;Home Router&lt;/strong&gt; to the Custom Router. Provides internet access to all custom networks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DMZ Network&lt;/strong&gt;: Hosts the Kubernetes nodes, NAS, and future services. Managed by the Custom Router.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VPN Network&lt;/strong&gt;: Contains the Raspberry Pi and allows external access into the DMZ.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LAN Network&lt;/strong&gt;: Contains the desktop PC. It didn’t make sense to have it on Wi-Fi, so it gets its own segment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the initial setup. It’s designed to be modular and expandable over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This wraps up the first chapter of my Home Lab series. We covered the motivations and requirements behind the project, explored the hardware and software choices, and reviewed the overall architecture of the platform I’m aiming to build.&lt;/p&gt;

&lt;p&gt;I'm really excited to start building this, as I feel it will be a great learning experience - not just in terms of configuration, but also in exploration, architecture, planning, implementation, and of course, documenting and sharing.&lt;/p&gt;

&lt;p&gt;I hope you’ll enjoy this journey as much as I will. Stay tuned for the next chapter! And if you have any suggestions, feel free to reach out.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://techquests.dev" rel="noopener noreferrer"&gt;https://techquests.dev&lt;/a&gt; on April 11, 2025.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>homelab</category>
      <category>selfhost</category>
      <category>cloud</category>
      <category>platform</category>
    </item>
  </channel>
</rss>
