<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Otobong Edoho</title>
    <description>The latest articles on Forem by Otobong Edoho (@otobong_edoho_7796fec1f41).</description>
    <link>https://forem.com/otobong_edoho_7796fec1f41</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1751708%2Fc993aeaa-1edc-4b96-96bd-d474899942ea.jpg</url>
      <title>Forem: Otobong Edoho</title>
      <link>https://forem.com/otobong_edoho_7796fec1f41</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/otobong_edoho_7796fec1f41"/>
    <language>en</language>
    <item>
      <title>Kubernetes Networking Deep Dive — Part 2: MetalLB, Nginx Ingress, and NetworkPolicy</title>
      <dc:creator>Otobong Edoho</dc:creator>
      <pubDate>Wed, 13 May 2026 10:50:46 +0000</pubDate>
      <link>https://forem.com/otobong_edoho_7796fec1f41/kubernetes-networking-deep-dive-part-2-metallb-nginx-ingress-and-networkpolicy-1lfo</link>
      <guid>https://forem.com/otobong_edoho_7796fec1f41/kubernetes-networking-deep-dive-part-2-metallb-nginx-ingress-and-networkpolicy-1lfo</guid>
      <description>&lt;p&gt;&lt;em&gt;Part 2 of 2 — Replacing NodePort with real LoadBalancer IPs, routing traffic through an Ingress controller with clean hostnames, and locking down pod-to-pod communication with NetworkPolicy. All on bare metal. All verified with real tests.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Series navigation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Part 1:&lt;/strong&gt; Cluster setup with kubeadm, foundational workloads, deploying a full-stack app with ConfigMaps, Secrets, StatefulSets, and CI pipelines → Read Part 1
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Part 2 (you are here):&lt;/strong&gt; MetalLB, Nginx Ingress Controller, clean hostnames, and NetworkPolicy enforcement with real tests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Full source code:&lt;/strong&gt; All Kubernetes manifests referenced in this article are available at&lt;br&gt;
&lt;strong&gt;&lt;a href="https://github.com/otie16/k8s-homelab-vm-project.git" rel="noopener noreferrer"&gt;github.com/otie16/k8s-homelab-vm-project&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;If you followed Part 1, you have a working two-node Kubernetes cluster running a full-stack application — Next.js frontend, Django REST API, and PostgreSQL. You can reach it at &lt;code&gt;192.168.1.100:30001&lt;/code&gt; and &lt;code&gt;192.168.1.100:30000&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;That works. But it's not how production Kubernetes is meant to work.&lt;/p&gt;

&lt;p&gt;In production, you don't tell people "hit port 30247 on any node IP." You give them &lt;code&gt;https://app.yourcompany.com&lt;/code&gt;. Traffic enters through a single controlled gateway, gets routed to the right service, and never exposes internal cluster topology.&lt;/p&gt;

&lt;p&gt;This is what Part 2 builds. By the end you'll have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MetalLB&lt;/strong&gt; giving your bare metal cluster real LoadBalancer IPs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nginx Ingress Controller&lt;/strong&gt; as the single entry point for all HTTP traffic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clean hostnames&lt;/strong&gt; — &lt;code&gt;app.oty-k8s.local&lt;/code&gt; and &lt;code&gt;api.oty-k8s.local&lt;/code&gt; on port 80&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NetworkPolicy&lt;/strong&gt; enforcing that only the right pods can talk to each other — verified with real tests&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why NodePort Is Not Enough
&lt;/h2&gt;

&lt;p&gt;NodePort binds a random high port (30000-32767) on every node in the cluster and forwards traffic to your service. It works, but:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Random high ports are ugly and hard to remember&lt;/li&gt;
&lt;li&gt;You're exposing every node's IP — if a node changes, external config breaks&lt;/li&gt;
&lt;li&gt;There's no hostname-based routing — you can't serve two services on port 80&lt;/li&gt;
&lt;li&gt;No TLS termination&lt;/li&gt;
&lt;li&gt;No single entry point to apply rate limiting, auth, or observability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The production pattern is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Internet / Local Network
        ↓
   LoadBalancer IP (single IP, port 80/443)
        ↓
   Ingress Controller (Nginx)
        ↓ routes by hostname
   ┌────────────────────────────────┐
   │ app.oty-k8s.local → frontend   │
   │ api.oty-k8s.local → backend    │
   └────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One IP. One entry point. Clean hostnames. That's what we're building.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem with LoadBalancer on Bare Metal
&lt;/h2&gt;

&lt;p&gt;On cloud Kubernetes (EKS, GKE, AKS), when you create a Service with &lt;code&gt;type: LoadBalancer&lt;/code&gt;, the cloud provider automatically provisions a real load balancer and assigns it an external IP.&lt;/p&gt;

&lt;p&gt;On bare metal with kubeadm, there's no cloud provider. Create a LoadBalancer service and you'll see this forever:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;NAME                      TYPE           EXTERNAL-IP
&lt;/span&gt;&lt;span class="gp"&gt;ingress-nginx-controller  LoadBalancer   &amp;lt;pending&amp;gt;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;&amp;lt;pending&amp;gt;&lt;/code&gt; means Kubernetes is waiting for an external controller to assign the IP. On bare metal, nothing does that by default.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MetalLB&lt;/strong&gt; solves this. It's a load balancer implementation designed specifically for bare metal clusters. It watches for &lt;code&gt;LoadBalancer&lt;/code&gt; services and assigns real IPs from a pool you define. Your network learns about these IPs via ARP (Layer 2 mode) and routes traffic to the right node automatically.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 1 — Install MetalLB
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/metallb/metallb/v0.14.5/config/manifests/metallb-native.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait for MetalLB pods to be ready:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ready pod &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;metallb &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-n&lt;/span&gt; metallb-system &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;90s

kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; metallb-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;NAME                          READY   STATUS
controller-xxx                1/1     Running
speaker-xxx (on master)       1/1     Running
speaker-yyy (on worker)       1/1     Running
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;controller&lt;/code&gt; manages IP address assignments. The &lt;code&gt;speaker&lt;/code&gt; pods run on each node and announce IP ownership via ARP — when your laptop asks "who has 192.168.1.200?", the MetalLB speaker on the node holding that service responds.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure the IP Address Pool
&lt;/h3&gt;

&lt;p&gt;Pick a range on your local network that's &lt;strong&gt;outside your DHCP range&lt;/strong&gt; so your router doesn't assign those IPs to other devices. Most home routers use &lt;code&gt;.100-.199&lt;/code&gt; for DHCP, so &lt;code&gt;.200-.220&lt;/code&gt; is typically safe:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: local-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.200-192.168.1.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: local-advert
  namespace: metallb-system
spec:
  ipAddressPools:
  - local-pool
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;L2Advertisement&lt;/code&gt; tells MetalLB to use Layer 2 mode — simple ARP-based announcement that works on any network without router configuration. No BGP setup needed.&lt;/p&gt;
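&lt;p&gt;If you're unsure whether that range is actually free, one low-tech check is to enumerate the candidate addresses and compare them against your router's DHCP lease table. A quick sketch (the range is the one assumed above; adjust for your network):&lt;/p&gt;

```shell
# Print every candidate IP in the MetalLB pool so you can cross-check it
# against your router's DHCP lease table before committing the range.
for i in $(seq 200 220); do
  echo "192.168.1.$i"
  # Optional on your own network: ping -c1 -W1 "192.168.1.$i" >/dev/null
  # (exit code 0 there means the address is already in use)
done
```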

&lt;p&gt;Verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get ipaddresspool &lt;span class="nt"&gt;-n&lt;/span&gt; metallb-system
kubectl get l2advertisement &lt;span class="nt"&gt;-n&lt;/span&gt; metallb-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 2 — Install Nginx Ingress Controller
&lt;/h2&gt;

&lt;p&gt;The Ingress Controller is what actually reads your Ingress resources and configures itself to route traffic. We're using the community Nginx Ingress Controller; its &lt;code&gt;cloud&lt;/code&gt; provider manifest creates a &lt;code&gt;LoadBalancer&lt;/code&gt; Service, which MetalLB can now satisfy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.5/deploy/static/provider/cloud/deploy.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait for it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ready pod &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-l&lt;/span&gt; app.kubernetes.io/component&lt;span class="o"&gt;=&lt;/span&gt;controller &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-n&lt;/span&gt; ingress-nginx &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;90s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check if MetalLB assigned a real IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; ingress-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)
ingress-nginx-controller  LoadBalancer   10.96.x.x      192.168.1.200   80:3xxxx/TCP,443:3xxxx/TCP
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That &lt;code&gt;192.168.1.200&lt;/code&gt; in &lt;code&gt;EXTERNAL-IP&lt;/code&gt; is MetalLB working. That single IP is now your cluster's front door for all HTTP/HTTPS traffic.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 3 — Update Services to ClusterIP
&lt;/h2&gt;

&lt;p&gt;Your Django and Next.js services are currently NodePort. With Ingress in place, external traffic flows: &lt;code&gt;client → MetalLB IP → Ingress Controller → ClusterIP Service → Pod&lt;/code&gt;. NodePort is no longer needed.&lt;/p&gt;

&lt;p&gt;Update both services to &lt;code&gt;type: ClusterIP&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; /home/oty-k8s/k8s/backend-service.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; /home/oty-k8s/k8s/frontend-service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See → &lt;a href="https://github.com/otie16/k8s-homelab-vm-project/blob/master/k8s/backend-service.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;k8s/backend-service.yaml&lt;/code&gt;&lt;/a&gt; | &lt;a href="https://github.com/otie16/k8s-homelab-vm-project/blob/master/k8s/frontend-service.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;k8s/frontend-service.yaml&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;
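&lt;p&gt;For reference, here is a minimal sketch of what the backend Service could look like as a ClusterIP (names and ports taken from this article; the repo manifests are authoritative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: django-backend
  namespace: k8s-vm-app
spec:
  type: ClusterIP          # reachable only inside the cluster
  selector:
    app: django-backend    # must match the Deployment's pod labels
  ports:
  - port: 8000             # port the Ingress routes to
    targetPort: 8000       # container port
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;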




&lt;h2&gt;
  
  
  Step 4 — Create the Ingress Resource
&lt;/h2&gt;

&lt;p&gt;An Ingress resource is a routing table — it tells the Ingress controller which hostnames map to which services.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; /home/oty-k8s/k8s/ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See the full manifest → &lt;a href="https://github.com/otie16/k8s-homelab-vm-project/blob/master/k8s/ingress.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;k8s/ingress.yaml&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The manifest routes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;app.oty-k8s.local&lt;/code&gt; → &lt;code&gt;nextjs-frontend&lt;/code&gt; service on port 3000&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;api.oty-k8s.local&lt;/code&gt; → &lt;code&gt;django-backend&lt;/code&gt; service on port 8000&lt;/li&gt;
&lt;/ul&gt;
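&lt;p&gt;As a sketch, the core of such an Ingress could look like this (hostnames, service names, and ports taken from the routes above; the repo manifest is authoritative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1   # note the "s" in k8s
kind: Ingress
metadata:
  name: k8s-vm-app-ingress
  namespace: k8s-vm-app
spec:
  ingressClassName: nginx
  rules:
  - host: app.oty-k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nextjs-frontend
            port:
              number: 3000
  - host: api.oty-k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: django-backend
            port:
              number: 8000
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;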

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Common mistake worth knowing about:&lt;/strong&gt; The &lt;code&gt;apiVersion&lt;/code&gt; must be &lt;code&gt;networking.k8s.io/v1&lt;/code&gt; — note the &lt;code&gt;s&lt;/code&gt; in &lt;code&gt;k8s&lt;/code&gt;. &lt;code&gt;networking.k8.io/v1&lt;/code&gt; (missing the s) causes &lt;code&gt;no matches for kind "Ingress" in version "networking.k8.io/v1"&lt;/code&gt;. Kubernetes error messages don't highlight the typo, so this one wastes time.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Verify the Ingress has the MetalLB IP assigned:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get ingress &lt;span class="nt"&gt;-n&lt;/span&gt; k8s-vm-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;NAME                 CLASS   HOSTS                                    ADDRESS         PORT(S)
k8s-vm-app-ingress   nginx   app.oty-k8s.local,api.oty-k8s.local  192.168.1.200   80
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 5 — Configure Local DNS
&lt;/h2&gt;

&lt;p&gt;Since &lt;code&gt;oty-k8s.local&lt;/code&gt; isn't a real domain, you need to tell your machine to resolve it to the MetalLB IP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On Windows&lt;/strong&gt; (open Notepad as Administrator):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight batchfile"&gt;&lt;code&gt;&lt;span class="kd"&gt;C&lt;/span&gt;:\Windows\System32\drivers\etc\hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;On macOS/Linux:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;nano /etc/hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="m"&gt;192&lt;/span&gt;.&lt;span class="m"&gt;168&lt;/span&gt;.&lt;span class="m"&gt;1&lt;/span&gt;.&lt;span class="m"&gt;200&lt;/span&gt;   &lt;span class="n"&gt;app&lt;/span&gt;.&lt;span class="n"&gt;oty&lt;/span&gt;-&lt;span class="n"&gt;k8s&lt;/span&gt;.&lt;span class="n"&gt;local&lt;/span&gt;
&lt;span class="m"&gt;192&lt;/span&gt;.&lt;span class="m"&gt;168&lt;/span&gt;.&lt;span class="m"&gt;1&lt;/span&gt;.&lt;span class="m"&gt;200&lt;/span&gt;   &lt;span class="n"&gt;api&lt;/span&gt;.&lt;span class="n"&gt;oty&lt;/span&gt;-&lt;span class="n"&gt;k8s&lt;/span&gt;.&lt;span class="n"&gt;local&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
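&lt;p&gt;If you'd rather script the edit than open an editor, here is a sketch that appends the entries idempotently. It's demonstrated on a scratch file; point &lt;code&gt;HOSTS&lt;/code&gt; at &lt;code&gt;/etc/hosts&lt;/code&gt; (with sudo) for real use:&lt;/p&gt;

```shell
# Append each hostname only if it isn't already present, so re-running
# the script never duplicates entries. HOSTS is a scratch copy here.
HOSTS=$(mktemp)
for h in app.oty-k8s.local api.oty-k8s.local; do
  grep -q "$h" "$HOSTS" || printf '192.168.1.200   %s\n' "$h" >> "$HOSTS"
done
cat "$HOSTS"
```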



&lt;p&gt;Now open your browser:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;http://app.oty-k8s.local              → Next.js task manager
http://api.oty-k8s.local/api/tasks/   → Django REST API
http://api.oty-k8s.local/health/      → Health check
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No port numbers. Clean hostnames. Traffic flows through the Ingress controller on port 80.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 6 — NetworkPolicy
&lt;/h2&gt;

&lt;p&gt;Your application is accessible via clean URLs. But at this point &lt;strong&gt;any pod in the cluster can reach any other pod&lt;/strong&gt; — including your PostgreSQL database directly. That's not acceptable.&lt;/p&gt;

&lt;p&gt;NetworkPolicy is Kubernetes' built-in firewall for pod-to-pod traffic. By default, all pods communicate freely. Once a NetworkPolicy selects a pod, that pod becomes deny-all for the policy's traffic direction, and only connections you explicitly allow get through. (Enforcement depends on your CNI plugin: Calico and Cilium enforce NetworkPolicy, but plain Flannel ignores it.)&lt;/p&gt;
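&lt;p&gt;For reference, that deny-all baseline can be written out explicitly as a policy with an empty pod selector and no allow rules (a sketch, with the namespace assumed from this article):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: k8s-vm-app
spec:
  podSelector: {}    # empty selector = every pod in the namespace
  policyTypes:
  - Ingress          # no ingress rules listed = all inbound traffic denied
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;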

&lt;h3&gt;
  
  
  The Rules We Want
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Ingress Controller → Frontend  (port 3000) ✅
Ingress Controller → Backend   (port 8000) ✅
Frontend           → Backend   (port 8000) ✅
Backend            → Postgres  (port 5432) ✅
Anything else      → Postgres  (port 5432) ❌
Anything else      → Backend   (port 8000) ❌
Anything else      → Frontend  (port 3000) ❌
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the policies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; /home/oty-k8s/k8s/networkpolicy.yaml

kubectl get networkpolicy &lt;span class="nt"&gt;-n&lt;/span&gt; k8s-vm-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;NAME              POD-SELECTOR&lt;/span&gt;
&lt;span class="s"&gt;postgres-policy   app=postgres&lt;/span&gt;
&lt;span class="s"&gt;backend-policy    app=django-backend&lt;/span&gt;
&lt;span class="s"&gt;frontend-policy   app=nextjs-frontend&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See the full manifest → &lt;a href="https://github.com/otie16/k8s-homelab-vm-project/blob/master/k8s/networkpolicy.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;k8s/networkpolicy.yaml&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;
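&lt;p&gt;As a sketch, the postgres policy could look like this (labels and ports assumed from the rules above; the repo manifest is authoritative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgres-policy
  namespace: k8s-vm-app
spec:
  podSelector:
    matchLabels:
      app: postgres           # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: django-backend # only backend pods may connect
    ports:
    - protocol: TCP
      port: 5432
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;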

&lt;h3&gt;
  
  
  Critical YAML Detail — AND vs OR Logic
&lt;/h3&gt;

&lt;p&gt;NetworkPolicy has a subtle YAML structure that's easy to get wrong. The indentation determines whether multiple conditions are ANDed or ORed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# AND — pod must match BOTH selectors (namespace AND pod label)&lt;/span&gt;
&lt;span class="na"&gt;ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;namespaceSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;kubernetes.io/metadata.name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
    &lt;span class="na"&gt;podSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;controller&lt;/span&gt;

&lt;span class="c1"&gt;# OR — pod matches EITHER selector (namespace OR pod label)&lt;/span&gt;
&lt;span class="na"&gt;ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;namespaceSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;kubernetes.io/metadata.name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;podSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nextjs-frontend&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Whether the selectors sit in the same &lt;code&gt;- from:&lt;/code&gt; list element (AND) or in separate &lt;code&gt;- from:&lt;/code&gt; elements (OR) comes down to the list structure the indentation creates. Getting it wrong produces no error; it just silently blocks (or over-permits) legitimate traffic, which makes this the most common NetworkPolicy mistake.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 7 — Testing That NetworkPolicy Actually Works
&lt;/h2&gt;

&lt;p&gt;This is the most important step. Applying the manifests without errors doesn't mean they do what you intended. You need to verify both directions — what's blocked and what's allowed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a Test Pod
&lt;/h3&gt;

&lt;p&gt;Spin up a busybox pod with no labels matching any policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl run nettest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;busybox &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Never &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-n&lt;/span&gt; k8s-vm-app &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;sleep &lt;/span&gt;3600
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pod represents anything that shouldn't have access — a compromised container, a misconfigured service, or an attacker who somehow got a shell inside the cluster.&lt;/p&gt;




&lt;h3&gt;
  
  
  Test 1 — Postgres BLOCKED from random pod ❌
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; k8s-vm-app nettest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--&lt;/span&gt; nc &lt;span class="nt"&gt;-zv&lt;/span&gt; postgres 5432 &lt;span class="nt"&gt;-w&lt;/span&gt; 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;nc: postgres (10.x.x.x:5432): Connection timed out
command terminated with exit code 1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The postgres NetworkPolicy denies all ingress except from &lt;code&gt;app=django-backend&lt;/code&gt;. The nettest pod has no such label — blocked.&lt;/p&gt;




&lt;h3&gt;
  
  
  Test 2 — Postgres REACHABLE from backend pod ✅
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; k8s-vm-app &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="si"&gt;$(&lt;/span&gt;kubectl get pod &lt;span class="nt"&gt;-n&lt;/span&gt; k8s-vm-app &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;django-backend &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.items[0].metadata.name}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--&lt;/span&gt; nc &lt;span class="nt"&gt;-zv&lt;/span&gt; postgres 5432 &lt;span class="nt"&gt;-w&lt;/span&gt; 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;postgres (10.x.x.x:5432) open
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The backend pod has label &lt;code&gt;app=django-backend&lt;/code&gt; which matches the postgres policy's allow rule.&lt;/p&gt;




&lt;h3&gt;
  
  
  Test 3 — Backend BLOCKED from random pod ❌
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; k8s-vm-app nettest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--&lt;/span&gt; nc &lt;span class="nt"&gt;-zv&lt;/span&gt; django-backend 8000 &lt;span class="nt"&gt;-w&lt;/span&gt; 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;nc: django-backend (10.x.x.x:8000): Connection timed out
command terminated with exit code 1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Test 4 — Backend REACHABLE from frontend pod ✅
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; k8s-vm-app &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="si"&gt;$(&lt;/span&gt;kubectl get pod &lt;span class="nt"&gt;-n&lt;/span&gt; k8s-vm-app &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nextjs-frontend &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.items[0].metadata.name}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--&lt;/span&gt; nc &lt;span class="nt"&gt;-zv&lt;/span&gt; django-backend 8000 &lt;span class="nt"&gt;-w&lt;/span&gt; 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;django-backend (10.x.x.x:8000) open
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The frontend pod has label &lt;code&gt;app=nextjs-frontend&lt;/code&gt; which matches the backend policy's allow rule.&lt;/p&gt;




&lt;h3&gt;
  
  
  Test 5 — End-to-end through Ingress ✅
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://api.oty-k8s.local/health/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ok"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://api.oty-k8s.local/api/tasks/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or open &lt;code&gt;http://app.oty-k8s.local&lt;/code&gt; in your browser — full task manager UI loading through the Ingress controller.&lt;/p&gt;




&lt;h3&gt;
  
  
  Clean Up
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete pod nettest &lt;span class="nt"&gt;-n&lt;/span&gt; k8s-vm-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Test Results Summary
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Test&lt;/th&gt;
&lt;th&gt;From&lt;/th&gt;
&lt;th&gt;To&lt;/th&gt;
&lt;th&gt;Port&lt;/th&gt;
&lt;th&gt;Expected&lt;/th&gt;
&lt;th&gt;What it proves&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;random pod&lt;/td&gt;
&lt;td&gt;postgres&lt;/td&gt;
&lt;td&gt;5432&lt;/td&gt;
&lt;td&gt;❌ timed out&lt;/td&gt;
&lt;td&gt;DB unreachable from untrusted sources&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;django-backend&lt;/td&gt;
&lt;td&gt;postgres&lt;/td&gt;
&lt;td&gt;5432&lt;/td&gt;
&lt;td&gt;✅ open&lt;/td&gt;
&lt;td&gt;App can reach its DB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;random pod&lt;/td&gt;
&lt;td&gt;django-backend&lt;/td&gt;
&lt;td&gt;8000&lt;/td&gt;
&lt;td&gt;❌ timed out&lt;/td&gt;
&lt;td&gt;API unreachable from untrusted sources&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;nextjs-frontend&lt;/td&gt;
&lt;td&gt;django-backend&lt;/td&gt;
&lt;td&gt;8000&lt;/td&gt;
&lt;td&gt;✅ open&lt;/td&gt;
&lt;td&gt;Frontend can call the API&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;laptop browser&lt;/td&gt;
&lt;td&gt;app.oty-k8s.local&lt;/td&gt;
&lt;td&gt;80&lt;/td&gt;
&lt;td&gt;✅ 200 OK&lt;/td&gt;
&lt;td&gt;Full stack works end to end&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;All five passing means your NetworkPolicy enforces exactly what you intended. The cluster is now network-isolated at the pod level.&lt;/p&gt;




&lt;h2&gt;
  
  
  Understanding the Full Traffic Flow
&lt;/h2&gt;

&lt;p&gt;With everything in place, here's what happens when you open &lt;code&gt;http://app.oty-k8s.local&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Browser (your laptop)
    │
    │ DNS resolves app.oty-k8s.local → 192.168.1.200
    │ HTTP GET / on port 80
    ↓
192.168.1.200 — MetalLB
(assigned to ingress-nginx LoadBalancer service,
 announced via ARP to your local network)
    │
    │ Host header: app.oty-k8s.local
    ↓
Nginx Ingress Controller Pod (ingress-nginx namespace)
    │
    │ Matches rule: app.oty-k8s.local → nextjs-frontend:3000
    │ NetworkPolicy allows ingress-nginx → nextjs-frontend
    ↓
Next.js Pod (k8s-vm-app namespace)
    │
    │ API call to django-backend:8000
    │ NetworkPolicy allows nextjs-frontend → django-backend
    ↓
Django Backend Pod
    │
    │ Query to postgres:5432
    │ NetworkPolicy allows django-backend → postgres
    ↓
PostgreSQL Pod (StatefulSet, PersistentVolume on node disk)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every hop is intentional. Every connection is explicitly allowed. Anything not on this list is blocked at the network level.&lt;/p&gt;




&lt;h2&gt;
  
  
  Common Issues and Fixes
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;no matches for kind "Ingress" in version "networking.k8.io/v1"&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
Typo — &lt;code&gt;networking.k8.io&lt;/code&gt; should be &lt;code&gt;networking.k8s.io&lt;/code&gt;. Fix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/networking.k8.io/networking.k8s.io/g'&lt;/span&gt; ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;MetalLB IP stays &lt;code&gt;&amp;lt;pending&amp;gt;&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
Check that the MetalLB pods are running and that an IPAddressPool is configured. Also verify no other service has already claimed the IP range.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; metallb-system
kubectl get ipaddresspool &lt;span class="nt"&gt;-n&lt;/span&gt; metallb-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
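&lt;p&gt;For reference, a minimal working layer-2 configuration looks roughly like this. The pool name and the &lt;code&gt;192.168.1.200-192.168.1.210&lt;/code&gt; range are assumptions; pick addresses that are free on your LAN and outside your DHCP range:&lt;/p&gt;

```yaml
# Sketch of a MetalLB layer-2 setup. Names and the address range
# are illustrative; adjust them to your network.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.200-192.168.1.210
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool
```

&lt;p&gt;Note that the pool alone isn't enough: without an &lt;code&gt;L2Advertisement&lt;/code&gt; referencing it, addresses get assigned but never announced via ARP, so they stay unreachable from the LAN.&lt;/p&gt;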



&lt;p&gt;&lt;strong&gt;Ingress returns 404 for all paths&lt;/strong&gt;&lt;br&gt;
Either the &lt;code&gt;ingressClassName&lt;/code&gt; in your Ingress doesn't match the installed controller's class, or the controller isn't ready yet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get ingressclass
&lt;span class="c"&gt;# Should show: nginx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
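&lt;p&gt;A minimal Ingress that matches the controller class might look like this. The host and backend follow this article's app; the key line is &lt;code&gt;ingressClassName&lt;/code&gt;:&lt;/p&gt;

```yaml
# Sketch: spec.ingressClassName must equal a class name listed by
# `kubectl get ingressclass`, or the controller ignores the resource.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: k8s-vm-app
spec:
  ingressClassName: nginx
  rules:
    - host: app.oty-k8s.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nextjs-frontend
                port:
                  number: 3000
```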



&lt;p&gt;&lt;strong&gt;NetworkPolicy blocking allowed connections&lt;/strong&gt;&lt;br&gt;
Check that pod labels match exactly what the policy expects:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; k8s-vm-app &lt;span class="nt"&gt;--show-labels&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then review the AND vs OR indentation in your NetworkPolicy YAML.&lt;/p&gt;
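&lt;p&gt;The difference is one hyphen. A sketch with illustrative labels:&lt;/p&gt;

```yaml
# OR: two items in the from list. A connection is allowed if it
# matches EITHER selector.
ingress:
  - from:
      - podSelector:
          matchLabels:
            app: django-backend
      - namespaceSelector:
          matchLabels:
            name: monitoring

# AND: one item combining both selectors. A connection must match
# BOTH (a pod with that label, inside a namespace with that label).
ingress:
  - from:
      - podSelector:
          matchLabels:
            app: django-backend
        namespaceSelector:
          matchLabels:
            name: monitoring
```

&lt;p&gt;If you accidentally turn the AND form into the OR form with a stray &lt;code&gt;-&lt;/code&gt;, the policy still applies without errors but allows more traffic than you intended, which is exactly why you test both blocked and allowed paths.&lt;/p&gt;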

&lt;p&gt;&lt;strong&gt;503 Service Temporarily Unavailable from Ingress&lt;/strong&gt;&lt;br&gt;
The Ingress routing works, but the target Service has no ready endpoints because no pods are passing their readiness probes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get endpoints &lt;span class="nt"&gt;-n&lt;/span&gt; k8s-vm-app
&lt;span class="c"&gt;# All services should show pod IPs, not &amp;lt;none&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
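&lt;p&gt;If a Service shows &lt;code&gt;&amp;lt;none&amp;gt;&lt;/code&gt;, check the deployment's readiness probe. A sketch of the kind of probe this article's backend would use; the &lt;code&gt;/ready/&lt;/code&gt; path follows the endpoints from Part 1, so adjust it to whatever your container actually serves:&lt;/p&gt;

```yaml
# Illustrative container-level probe. Until it passes, the pod's IP
# is withheld from the Service's endpoints and Ingress returns 503.
readinessProbe:
  httpGet:
    path: /ready/
    port: 8000
  initialDelaySeconds: 5
  periodSeconds: 10
```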






&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;You now have a production-style Kubernetes networking setup. Both services are accessible via clean hostnames through a single Ingress entry point, and NetworkPolicy enforces traffic isolation at the network level with verified tests.&lt;/p&gt;

&lt;p&gt;The remaining phases of this homelab roadmap:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 3 — Storage and Stateful Systems&lt;/strong&gt;&lt;br&gt;
PostgreSQL HA with streaming replication as a StatefulSet, backup strategies, and Longhorn for distributed storage across nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 4 — Observability and Security&lt;/strong&gt;&lt;br&gt;
Full kube-prometheus-stack deployment, custom Grafana dashboards for application metrics, RBAC, ServiceAccounts, PodSecurityAdmission, and encrypting Secrets at rest.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 5 — GitOps with ArgoCD&lt;/strong&gt;&lt;br&gt;
Helm charts for the application, ArgoCD for continuous deployment with Git as the single source of truth, and Horizontal Pod Autoscaler for automatic scaling under load.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;NodePort is for learning, not production.&lt;/strong&gt; It exposes every node's IP on high ports from the 30000–32767 range, with no hostname routing. The production pattern is LoadBalancer → Ingress → ClusterIP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MetalLB is essential for bare metal.&lt;/strong&gt; Without it, LoadBalancer services pend forever. With it, you get the same experience as cloud Kubernetes — services get real IPs automatically from a pool you control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NetworkPolicy is deny-by-default once applied.&lt;/strong&gt; As soon as a NetworkPolicy selects a pod, all traffic of the policy's declared types (ingress, egress, or both) that isn't explicitly allowed is blocked. This is exactly the right security posture — allowlist, not blocklist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Always test your NetworkPolicy — don't assume it works.&lt;/strong&gt; The manifest applying without errors doesn't mean it does what you intended. The AND vs OR YAML structure is subtle and easy to get wrong silently. Test both blocked and allowed paths every time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Ingress controller is just Nginx.&lt;/strong&gt; Demystify it: it's a regular nginx reverse proxy that watches the Kubernetes API and updates its config automatically. When routing breaks, you can &lt;code&gt;kubectl exec&lt;/code&gt; into the controller pod and inspect the nginx config directly.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Source code:&lt;/strong&gt; &lt;a href="https://github.com/otie16/k8s-homelab-vm-project.git" rel="noopener noreferrer"&gt;github.com/otie16/k8s-homelab-vm-project&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;← &lt;a href="https://dev.to/otobong_edoho_7796fec1f41/building-a-production-grade-kubernetes-cluster-from-scratch-part-1-cluster-setup-workloads-and-55gb"&gt;Part 1: Cluster Setup, Workloads, and a Full-Stack App&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Oty is a Lead DevOps/Cloud Engineer and DevOps mentor. Follow for more hands-on infrastructure content.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Tags:&lt;/strong&gt; &lt;code&gt;Kubernetes&lt;/code&gt; &lt;code&gt;DevOps&lt;/code&gt; &lt;code&gt;Networking&lt;/code&gt; &lt;code&gt;MetalLB&lt;/code&gt; &lt;code&gt;Nginx&lt;/code&gt; &lt;code&gt;Ingress&lt;/code&gt; &lt;code&gt;NetworkPolicy&lt;/code&gt; &lt;code&gt;Platform Engineering&lt;/code&gt; &lt;code&gt;Homelab&lt;/code&gt; &lt;code&gt;Cloud Native&lt;/code&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>networking</category>
      <category>security</category>
    </item>
    <item>
      <title>Building a Production-Grade Kubernetes Cluster from Scratch — Part 1: Cluster Setup, Workloads, and a Real App</title>
      <dc:creator>Otobong Edoho</dc:creator>
      <pubDate>Wed, 13 May 2026 10:45:57 +0000</pubDate>
      <link>https://forem.com/otobong_edoho_7796fec1f41/building-a-production-grade-kubernetes-cluster-from-scratch-part-1-cluster-setup-workloads-and-55gb</link>
      <guid>https://forem.com/otobong_edoho_7796fec1f41/building-a-production-grade-kubernetes-cluster-from-scratch-part-1-cluster-setup-workloads-and-55gb</guid>
      <description>&lt;p&gt;&lt;em&gt;Part 1 of 2 — From bare VMs to a fully running 3-service application on a self-managed Kubernetes cluster. No managed services. No shortcuts. Just raw kubeadm.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Series navigation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Part 1 (you are here):&lt;/strong&gt; Cluster setup, foundational workloads, deploying a full-stack app with ConfigMaps, Secrets, StatefulSets, and CI pipelines&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Part 2:&lt;/strong&gt; Networking deep dive — MetalLB, Nginx Ingress, clean hostnames, and NetworkPolicy enforcement&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Full source code:&lt;/strong&gt; All application code, Kubernetes manifests, and CI pipelines are available at&lt;br&gt;
&lt;strong&gt;&lt;a href="https://github.com/otie16/k8s-homelab-vm-project.git" rel="noopener noreferrer"&gt;github.com/otie16/k8s-homelab-vm-project&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;There are two kinds of Kubernetes engineers.&lt;/p&gt;

&lt;p&gt;The first kind provisions an EKS cluster, deploys a workload, and moves on. They know Kubernetes from the outside — the API, the manifests, the &lt;code&gt;kubectl&lt;/code&gt; commands.&lt;/p&gt;

&lt;p&gt;The second kind wants to know what's happening underneath. How does the scheduler actually decide where to place a pod? What does kubeadm actually do when you run &lt;code&gt;kubeadm init&lt;/code&gt;? Why does Calico need kernel modules? What breaks when your pod CIDR overlaps with your host network?&lt;/p&gt;

&lt;p&gt;This series is for the second kind.&lt;/p&gt;

&lt;p&gt;I built a two-node Kubernetes cluster from scratch on two Ubuntu VMs in my homelab, deployed a production-style full-stack application on top of it, built CI pipelines with SAST and vulnerability scanning, debugged every error the cluster threw at me, and documented every step. This is that documentation — written so you can follow along, break things, fix them, and walk away understanding &lt;em&gt;why&lt;/em&gt; Kubernetes works the way it does.&lt;/p&gt;

&lt;p&gt;By the end of Part 1, you'll have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A fully functional two-node kubeadm cluster&lt;/li&gt;
&lt;li&gt;A 3-service application running on it (Next.js + Django REST API + PostgreSQL)&lt;/li&gt;
&lt;li&gt;Proper use of ConfigMaps, Secrets, StatefulSets, Jobs, probes, and resource limits&lt;/li&gt;
&lt;li&gt;CI pipelines with SAST, dependency scanning, and Docker image vulnerability scanning&lt;/li&gt;
&lt;li&gt;A deep understanding of every concept you implemented&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Your Laptop
    │
    ├── k8s-master  (192.168.1.100) — Control Plane
    │       kube-apiserver, etcd, scheduler,
    │       controller-manager, CoreDNS, Calico
    │
    └── k8s-worker-node (192.168.1.101) — Worker
            kubelet, kube-proxy, Calico
            Your actual workloads run here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Node specs:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Node&lt;/th&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;th&gt;IP&lt;/th&gt;
&lt;th&gt;OS&lt;/th&gt;
&lt;th&gt;Specs&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;k8s-master&lt;/td&gt;
&lt;td&gt;Control Plane&lt;/td&gt;
&lt;td&gt;192.168.1.100&lt;/td&gt;
&lt;td&gt;Ubuntu 24.04 LTS&lt;/td&gt;
&lt;td&gt;2 vCPU, 4GB RAM&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;k8s-worker-node&lt;/td&gt;
&lt;td&gt;Worker&lt;/td&gt;
&lt;td&gt;192.168.1.101&lt;/td&gt;
&lt;td&gt;Ubuntu 24.04 LTS&lt;/td&gt;
&lt;td&gt;2 vCPU, 4GB RAM&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Both VMs run on VMware. You can use VirtualBox, Hyper-V, or any hypervisor — the Kubernetes setup is identical.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 1 — Bootstrapping the Cluster
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why kubeadm Instead of a Managed Service?
&lt;/h3&gt;

&lt;p&gt;Managed Kubernetes (EKS, GKE, AKS) hides the control plane from you. You never see the API server. You never touch etcd. You never configure a CNI plugin from scratch. That's great for production but terrible for learning.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubeadm&lt;/code&gt; is the official Kubernetes cluster bootstrapping tool. It handles the hard parts — generating certificates, writing control plane manifests, configuring etcd — while still giving you full access to everything. Running kubeadm once teaches you more about how Kubernetes actually works than months of using managed services.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1 — Set Hostnames
&lt;/h3&gt;

&lt;p&gt;On the &lt;strong&gt;control plane VM&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;hostnamectl set-hostname k8s-master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On the &lt;strong&gt;worker VM&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;hostnamectl set-hostname k8s-worker-node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On &lt;strong&gt;both VMs&lt;/strong&gt;, add entries to &lt;code&gt;/etc/hosts&lt;/code&gt; so nodes can resolve each other by name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;nano /etc/hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add at the bottom:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="m"&gt;192&lt;/span&gt;.&lt;span class="m"&gt;168&lt;/span&gt;.&lt;span class="m"&gt;1&lt;/span&gt;.&lt;span class="m"&gt;100&lt;/span&gt;  &lt;span class="n"&gt;k8s&lt;/span&gt;-&lt;span class="n"&gt;master&lt;/span&gt;
&lt;span class="m"&gt;192&lt;/span&gt;.&lt;span class="m"&gt;168&lt;/span&gt;.&lt;span class="m"&gt;1&lt;/span&gt;.&lt;span class="m"&gt;101&lt;/span&gt;  &lt;span class="n"&gt;k8s&lt;/span&gt;-&lt;span class="n"&gt;worker&lt;/span&gt;-&lt;span class="n"&gt;node&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2 — Disable Swap (Both VMs)
&lt;/h3&gt;

&lt;p&gt;Kubernetes requires swap to be off. The kubelet enforces this because swap causes unpredictable memory behaviour that breaks scheduling guarantees — if a container exceeds its memory limit, it should be OOMKilled immediately, not start swapping to disk and silently degrading.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Disable immediately&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;swapoff &lt;span class="nt"&gt;-a&lt;/span&gt;

&lt;span class="c"&gt;# Disable permanently across reboots&lt;/span&gt;
&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'/ swap / s/^\(.*\)$/#\1/g'&lt;/span&gt; /etc/fstab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;free &lt;span class="nt"&gt;-h&lt;/span&gt;
&lt;span class="c"&gt;# Swap row should show 0B&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
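&lt;p&gt;If you want to see what that &lt;code&gt;sed&lt;/code&gt; expression does before touching the real file, run it against a throwaway copy:&lt;/p&gt;

```shell
# Demo on a scratch file; the real command targets /etc/fstab.
printf '%s\n' \
  'UUID=abcd-1234 / ext4 defaults 0 1' \
  '/swap.img none swap sw 0 0' \
  > /tmp/fstab.demo

# Same expression as above: comment out every line containing " swap "
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo

cat /tmp/fstab.demo
# UUID=abcd-1234 / ext4 defaults 0 1
# #/swap.img none swap sw 0 0
```

&lt;p&gt;Only the swap entry gets commented out; the root filesystem line is untouched.&lt;/p&gt;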



&lt;h3&gt;
  
  
  Step 3 — Load Kernel Modules (Both VMs)
&lt;/h3&gt;

&lt;p&gt;Kubernetes networking needs two kernel modules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;overlay&lt;/code&gt; — handles the layered filesystem that containers use&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;br_netfilter&lt;/code&gt; — allows iptables to see traffic crossing network bridges (required for pod-to-pod networking)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;modprobe overlay
&lt;span class="nb"&gt;sudo &lt;/span&gt;modprobe br_netfilter

&lt;span class="c"&gt;# Make them persist across reboots&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4 — Set Kernel Networking Parameters (Both VMs)
&lt;/h3&gt;

&lt;p&gt;These sysctl settings tell the kernel to let iptables process bridged traffic and to forward IPv4 packets — both required for Kubernetes networking:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="c"&gt;# Apply without rebooting&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;sysctl &lt;span class="nt"&gt;--system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5 — Install containerd (Both VMs)
&lt;/h3&gt;

&lt;p&gt;Kubernetes needs a container runtime that implements the CRI (Container Runtime Interface). &lt;code&gt;containerd&lt;/code&gt; is the standard choice — it's what Docker uses under the hood.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; ca-certificates curl gnupg lsb-release

&lt;span class="c"&gt;# Add Docker's GPG key (containerd ships in Docker's repo)&lt;/span&gt;
&lt;span class="nb"&gt;sudo install&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 0755 &lt;span class="nt"&gt;-d&lt;/span&gt; /etc/apt/keyrings
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://download.docker.com/linux/ubuntu/gpg | &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/docker.gpg
&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;a+r /etc/apt/keyrings/docker.gpg

&lt;span class="c"&gt;# Add the repository&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"deb [arch=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;dpkg &lt;span class="nt"&gt;--print-architecture&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; signed-by=/etc/apt/keyrings/docker.gpg] &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  https://download.docker.com/linux/ubuntu &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;lsb_release &lt;span class="nt"&gt;-cs&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; stable"&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/docker.list &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null

&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; containerd.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Critical — configure containerd to use the systemd cgroup driver:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Both containerd and kubelet must agree on the cgroup driver. On modern Ubuntu that's &lt;code&gt;systemd&lt;/code&gt;. Mismatching them causes kubelet crashes that look completely unrelated to cgroups.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/containerd
containerd config default | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/containerd/config.toml

&lt;span class="c"&gt;# Set SystemdCgroup = true&lt;/span&gt;
&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/SystemdCgroup \= false/SystemdCgroup \= true/g'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  /etc/containerd/config.toml

&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart containerd
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;containerd
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
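&lt;p&gt;For reference, the line that &lt;code&gt;sed&lt;/code&gt; flips sits deep inside the generated file. After the edit, the relevant section reads roughly like this (the exact table path can shift between containerd versions):&lt;/p&gt;

```toml
# Fragment of /etc/containerd/config.toml after the edit
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```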



&lt;h3&gt;
  
  
  Step 6 — Install kubeadm, kubelet, kubectl (Both VMs)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; apt-transport-https ca-certificates curl gpg

&lt;span class="c"&gt;# Add Kubernetes apt repo — write as a single line, not multi-line&lt;/span&gt;
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/kubernetes-apt-keyring.gpg

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /"&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/kubernetes.list

&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; kubelet kubeadm kubectl

&lt;span class="c"&gt;# Pin versions — prevents accidental upgrades that break version skew rules&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-mark hold kubelet kubeadm kubectl
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;kubelet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; Write the Kubernetes apt repo entry as a single unbroken line. Multi-line &lt;code&gt;echo&lt;/code&gt; commands with backslashes cause malformed entries in the &lt;code&gt;.list&lt;/code&gt; file that break &lt;code&gt;apt-get update&lt;/code&gt; with &lt;code&gt;E: Malformed entry 1&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Part 2 — Initialising the Control Plane
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 7 — kubeadm init (Master Only)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm init &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--pod-network-cidr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.244.0.0/16 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--apiserver-advertise-address&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;192.168.1.100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why &lt;code&gt;10.244.0.0/16&lt;/code&gt; and not &lt;code&gt;192.168.0.0/16&lt;/code&gt;?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is a mistake I made the first time. My VMs are on &lt;code&gt;192.168.1.x&lt;/code&gt;. If I used &lt;code&gt;192.168.0.0/16&lt;/code&gt; as the pod CIDR, it would overlap with the host network. Calico would get confused about which interface belongs to the pod network and which belongs to the host, and every pod would fail to start with &lt;code&gt;stat /var/lib/calico/nodename: no such file or directory&lt;/code&gt;. Always choose a pod CIDR that doesn't overlap with your host network.&lt;/p&gt;

&lt;p&gt;What &lt;code&gt;kubeadm init&lt;/code&gt; does behind the scenes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Generates all TLS certificates for cluster components&lt;/li&gt;
&lt;li&gt;Writes static pod manifests for the API server, etcd, scheduler, and controller manager&lt;/li&gt;
&lt;li&gt;Starts the control plane components&lt;/li&gt;
&lt;li&gt;Installs CoreDNS&lt;/li&gt;
&lt;li&gt;Outputs a &lt;code&gt;kubeadm join&lt;/code&gt; command — &lt;strong&gt;copy this immediately&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 8 — Configure kubectl
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube
&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/kubernetes/admin.conf &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
&lt;span class="c"&gt;# k8s-master should show NotReady — normal, CNI isn't installed yet&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 9 — Install Calico CNI
&lt;/h3&gt;

&lt;p&gt;Without a CNI plugin, pods can't communicate; that's why the node still shows &lt;code&gt;NotReady&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Download the manifest so we can edit it&lt;/span&gt;
curl &lt;span class="nt"&gt;-O&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml

&lt;span class="c"&gt;# Set the correct pod CIDR to match what we used in kubeadm init&lt;/span&gt;
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|'&lt;/span&gt; calico.yaml
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|'&lt;/span&gt; calico.yaml

&lt;span class="c"&gt;# Verify the change&lt;/span&gt;
&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-A1&lt;/span&gt; &lt;span class="s2"&gt;"CALICO_IPV4POOL_CIDR"&lt;/span&gt; calico.yaml

kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; calico.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Watch Calico come up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;watch kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;calico-node&lt;/code&gt; pod goes through &lt;code&gt;Init:0/3&lt;/code&gt; → &lt;code&gt;Init:1/3&lt;/code&gt; → &lt;code&gt;Init:2/3&lt;/code&gt; → &lt;code&gt;Running&lt;/code&gt;. The init containers pull ~250MB of images so this takes a few minutes. Once &lt;code&gt;calico-node&lt;/code&gt; hits &lt;code&gt;Running&lt;/code&gt;, the master goes &lt;code&gt;Ready&lt;/code&gt; within 60 seconds.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 10 — Join the Worker Node
&lt;/h3&gt;

&lt;p&gt;Generate a fresh join command on the master:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubeadm token create &lt;span class="nt"&gt;--print-join-command&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the output on the &lt;strong&gt;worker node&lt;/strong&gt; with &lt;code&gt;sudo&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm &lt;span class="nb"&gt;join &lt;/span&gt;192.168.1.100:6443 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--token&lt;/span&gt; &amp;lt;token&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:&amp;lt;&lt;span class="nb"&gt;hash&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Watch the worker appear:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;watch kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME              STATUS   ROLES           AGE   VERSION
k8s-master        Ready    control-plane   10m   v1.29.15
k8s-worker-node   Ready    &amp;lt;none&amp;gt;          2m    v1.29.15
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 11 — Install the Storage Provisioner
&lt;/h3&gt;

&lt;p&gt;On bare metal, there's no cloud provider to fulfill PersistentVolumeClaim requests automatically. Without a storage provisioner, any pod that requests a PVC will be stuck with &lt;code&gt;FailedScheduling: pod has unbound immediate PersistentVolumeClaims&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Install Rancher's local-path provisioner:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.26/deploy/local-path-storage.yaml

&lt;span class="c"&gt;# Set it as the default StorageClass&lt;/span&gt;
kubectl patch storageclass local-path &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;'{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'&lt;/span&gt;

&lt;span class="c"&gt;# Verify&lt;/span&gt;
kubectl get storageclass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                   PROVISIONER             AGE
local-path (default)   rancher.io/local-path   1m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From this point every PVC in the cluster gets fulfilled automatically. No manual PV creation ever again.&lt;/p&gt;
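&lt;p&gt;To see it in action, any plain PVC now binds without a hand-written PV. This manifest is illustrative; note that &lt;code&gt;local-path&lt;/code&gt; uses &lt;code&gt;WaitForFirstConsumer&lt;/code&gt; binding, so the claim stays &lt;code&gt;Pending&lt;/code&gt; until a pod actually mounts it:&lt;/p&gt;

```yaml
# Illustrative claim: with local-path as the default StorageClass,
# no storageClassName is needed and no PV has to be pre-created.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```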




&lt;h2&gt;
  
  
  Part 3 — The Application
&lt;/h2&gt;

&lt;p&gt;We're deploying a real 3-service task manager:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Next.js frontend&lt;/strong&gt; — React UI for creating, completing, and deleting tasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Django REST API&lt;/strong&gt; — CRUD endpoints backed by PostgreSQL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt; — StatefulSet with persistent storage&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;All application source code is on GitHub:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;a href="https://github.com/otie16/k8s-homelab-vm-project.git" rel="noopener noreferrer"&gt;github.com/otie16/k8s-homelab-vm-project&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The repo contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;backend/&lt;/code&gt; — Django REST API (models, serializers, views, urls, wsgi, settings)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;frontend/&lt;/code&gt; — Next.js task manager UI with App Router&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;k8s/&lt;/code&gt; — All Kubernetes manifests&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.github/workflows/&lt;/code&gt; — CI pipelines for both services&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Project Structure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k8s-homelab-vm-project/
├── backend/
│   ├── Dockerfile
│   ├── requirements.txt
│   ├── manage.py
│   ├── core/
│   │   ├── settings.py       # DB config from env, INSTALLED_APPS, TEMPLATES
│   │   ├── urls.py           # health/, ready/, api/ endpoints
│   │   └── wsgi.py           # get_wsgi_application() with django.setup()
│   └── tasks/
│       ├── apps.py           # TasksConfig AppConfig
│       ├── models.py         # Task model
│       ├── serializers.py
│       ├── views.py          # ModelViewSet
│       └── migrations/
│           └── 0001_initial.py
├── frontend/
│   ├── Dockerfile            # Multi-stage with Next.js standalone output
│   ├── next.config.js        # output: 'standalone'
│   └── app/
│       ├── layout.js         # Required root layout for App Router
│       └── page.js           # Task manager UI
├── k8s/
│   ├── namespace.yaml
│   ├── configmap.yaml
│   ├── secret.yaml
│   ├── postgres-statefulset.yaml
│   ├── postgres-service.yaml
│   ├── migrate-job.yaml
│   ├── backend-deployment.yaml
│   ├── backend-service.yaml
│   ├── frontend-deployment.yaml
│   ├── frontend-service.yaml
│   └── deploy.sh
└── .github/
    └── workflows/
        ├── backend-ci.yml
        └── frontend-ci.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Key Implementation Decisions Worth Understanding
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;wsgi.py — the bug that cost me the most time&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The single most frustrating error in this entire project was &lt;code&gt;AppRegistryNotReady: Apps aren't loaded yet&lt;/code&gt;. The cause: Django's WSGI handler was instantiated at module import time, before &lt;code&gt;django.setup()&lt;/code&gt; had run.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Wrong — WSGIHandler() called at import time, app registry not ready
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;django.core.handlers.wsgi&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;WSGIHandler&lt;/span&gt;
&lt;span class="n"&gt;application&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;WSGIHandler&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Correct — handles initialisation order properly
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;django&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;django.core.wsgi&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;get_wsgi_application&lt;/span&gt;
&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setdefault&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;DJANGO_SETTINGS_MODULE&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;core.settings&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;django&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setup&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;application&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;get_wsgi_application&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This looks like a trivial difference but it determines whether Django's app registry is populated before URL patterns are loaded. The &lt;code&gt;get_wsgi_application()&lt;/code&gt; function is the correct public API for exactly this reason. Without it, every request returns 500 and the pod enters CrashLoopBackOff.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next.js standalone output — why it matters for image size&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Without standalone mode, copying &lt;code&gt;node_modules&lt;/code&gt; into the final Docker image produces ~800MB. With standalone mode enabled in &lt;code&gt;next.config.js&lt;/code&gt;, Next.js traces exactly which files the production server needs. The final image runs &lt;code&gt;node server.js&lt;/code&gt; directly — no npm, no Next.js CLI, no node_modules at runtime. Result: ~160MB instead of ~800MB.&lt;/p&gt;
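&lt;p&gt;The switch itself is a one-line setting in &lt;code&gt;next.config.js&lt;/code&gt; (a minimal sketch; the repo's file may carry more options):&lt;/p&gt;

```javascript
// next.config.js -- tells Next.js to trace and copy only the files
// the production server needs into .next/standalone
module.exports = {
  output: 'standalone',
};
```

&lt;p&gt;The Dockerfile's final stage then copies &lt;code&gt;.next/standalone&lt;/code&gt; and runs &lt;code&gt;node server.js&lt;/code&gt; directly.&lt;/p&gt;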

&lt;p&gt;&lt;strong&gt;The init container pattern — dependency ordering without hacks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Both the migration job and the backend deployment use an init container to wait for PostgreSQL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;initContainers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;wait-for-postgres&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
  &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sh'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-c'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
    &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;until&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;nc&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;postgres&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;5432;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;do&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;waiting;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;2;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;done'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This blocks the main container from starting until port 5432 responds. No sleep hacks, no retry logic in application code, no race conditions. The main container never starts until the dependency is ready.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Liveness vs Readiness probes — they're not the same thing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Both probes hit different endpoints for a reason:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/health/&lt;/code&gt; → &lt;strong&gt;liveness&lt;/strong&gt;: "Is this container alive?" Failure triggers a container restart.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/ready/&lt;/code&gt; → &lt;strong&gt;readiness&lt;/strong&gt;: "Is this container ready for traffic?" Failure removes the pod from the Service endpoint list &lt;em&gt;without restarting it&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your Django app might be alive but still warming up. Readiness handles that gracefully — the pod stays up but doesn't receive traffic until it signals it's ready.&lt;/p&gt;
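&lt;p&gt;In the deployment spec the two probes look roughly like this (the port and timings here are illustrative; the repo's manifests are authoritative):&lt;/p&gt;

```yaml
livenessProbe:
  httpGet:
    path: /health/   # failure -> kubelet restarts the container
    port: 8000
  initialDelaySeconds: 15
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready/    # failure -> pod removed from Service endpoints
    port: 8000
  initialDelaySeconds: 5
  periodSeconds: 5
```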




&lt;h2&gt;
  
  
  Part 4 — Kubernetes Manifests Deep Dive
&lt;/h2&gt;

&lt;p&gt;The full manifests are in &lt;a href="https://github.com/otie16/k8s-homelab-vm-project/tree/master/k8s" rel="noopener noreferrer"&gt;&lt;code&gt;k8s/&lt;/code&gt;&lt;/a&gt; in the GitHub repo. Here's what each one does and the important decisions behind them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Namespace
&lt;/h3&gt;

&lt;p&gt;Everything lives in &lt;code&gt;k8s-vm-app&lt;/code&gt;. Namespaces isolate resources, apply RBAC boundaries, and scope NetworkPolicy.&lt;/p&gt;

&lt;h3&gt;
  
  
  ConfigMap and Secret — The Right Separation
&lt;/h3&gt;

&lt;p&gt;This is one of the most important patterns to get right in Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ConfigMap&lt;/strong&gt; — anything you'd commit to a public git repo. Hostnames, ports, feature flags, non-sensitive config. In our case: &lt;code&gt;DB_HOST&lt;/code&gt;, &lt;code&gt;DB_PORT&lt;/code&gt;, &lt;code&gt;DB_NAME&lt;/code&gt;, &lt;code&gt;DEBUG&lt;/code&gt;, &lt;code&gt;ALLOWED_HOSTS&lt;/code&gt;, &lt;code&gt;NEXT_PUBLIC_API_URL&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secret&lt;/strong&gt; — anything you'd never commit. Passwords, API keys, tokens. In our case: &lt;code&gt;DB_USER&lt;/code&gt;, &lt;code&gt;DB_PASSWORD&lt;/code&gt;, &lt;code&gt;POSTGRES_PASSWORD&lt;/code&gt;, &lt;code&gt;DJANGO_SECRET_KEY&lt;/code&gt;.&lt;/p&gt;
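&lt;p&gt;One practical detail: values under a Secret's &lt;code&gt;data:&lt;/code&gt; field are base64-encoded, not encrypted. A tiny helper for producing those values (the credentials here are placeholders, not the project's):&lt;/p&gt;

```python
import base64

def encode_secret(value: str) -> str:
    """Base64-encode a string for a Secret manifest's data: field."""
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

# Placeholder credentials -- never commit real ones
print(encode_secret("taskuser"))      # -> dGFza3VzZXI=
print(encode_secret("s3cr3t-pass"))
```

&lt;p&gt;Alternatively, the &lt;code&gt;stringData:&lt;/code&gt; field accepts plain strings and Kubernetes encodes them for you on apply.&lt;/p&gt;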

&lt;p&gt;Both are consumed via &lt;code&gt;envFrom&lt;/code&gt; in the pod spec — containers get all keys as environment variables automatically without any hardcoded credentials touching the manifest files.&lt;/p&gt;
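&lt;p&gt;In the pod spec that consumption is only a few lines (resource names here are assumed for illustration):&lt;/p&gt;

```yaml
containers:
- name: django-backend
  image: YOUR_USERNAME/k8s-vm-app-backend:latest
  envFrom:
  - configMapRef:
      name: app-config    # assumed name
  - secretRef:
      name: app-secrets   # assumed name
```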

&lt;p&gt;See → &lt;a href="https://github.com/otie16/k8s-homelab-vm-project/blob/master/k8s/configmap.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;k8s/configmap.yaml&lt;/code&gt;&lt;/a&gt; | &lt;a href="https://github.com/otie16/k8s-homelab-vm-project/blob/master/k8s/secret.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;k8s/secret.yaml&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  PostgreSQL — Why StatefulSet and Not Deployment
&lt;/h3&gt;

&lt;p&gt;This is the most important architectural decision for databases on Kubernetes.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;Deployment&lt;/strong&gt; treats pods as interchangeable. Any pod can replace any other. No stable identity, no guaranteed ordering.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;StatefulSet&lt;/strong&gt; gives pods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stable identity&lt;/strong&gt; — pods are named &lt;code&gt;postgres-0&lt;/code&gt;, &lt;code&gt;postgres-1&lt;/code&gt;, not random hashes. &lt;code&gt;postgres-0.postgres.k8s-vm-app.svc.cluster.local&lt;/code&gt; is always that specific pod.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ordered startup/shutdown&lt;/strong&gt; — pods start in order and terminate in reverse. Critical for primary/replica database setups.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Per-pod PVCs&lt;/strong&gt; via &lt;code&gt;volumeClaimTemplates&lt;/code&gt; — each replica gets its own PersistentVolumeClaim that follows it even if rescheduled to a different node.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The headless service (&lt;code&gt;clusterIP: None&lt;/code&gt;) is required for StatefulSets — it allows DNS to resolve directly to individual pod IPs rather than a virtual cluster IP.&lt;/p&gt;
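&lt;p&gt;The headless service itself is tiny (a sketch; the selector label is assumed):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None     # headless: DNS returns individual pod IPs
  selector:
    app: postgres     # assumed label
  ports:
  - port: 5432
```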

&lt;p&gt;See → &lt;a href="https://github.com/otie16/k8s-homelab-vm-project/blob/master/k8s/postgres-statefulset.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;k8s/postgres-statefulset.yaml&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Migration Job
&lt;/h3&gt;

&lt;p&gt;Database migrations run once, must complete before the application starts, and should retry on failure. A Kubernetes Job is exactly the right primitive for this.&lt;/p&gt;

&lt;p&gt;The manifest uses an init container that blocks until &lt;code&gt;postgres:5432&lt;/code&gt; responds, then runs &lt;code&gt;python manage.py migrate --noinput&lt;/code&gt;. &lt;code&gt;backoffLimit: 3&lt;/code&gt; means Kubernetes retries up to 3 times on failure.&lt;/p&gt;

&lt;p&gt;See → &lt;a href="https://github.com/otie16/k8s-homelab-vm-project/blob/master/k8s/migrate-job.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;k8s/migrate-job.yaml&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Backend and Frontend Deployments
&lt;/h3&gt;

&lt;p&gt;Both deployments use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;RollingUpdate&lt;/code&gt; with &lt;code&gt;maxUnavailable: 0&lt;/code&gt; — zero downtime deploys, new pods must be ready before old ones are removed&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;imagePullPolicy: Always&lt;/code&gt; — ensures every rollout pulls the latest image from Docker Hub even if the tag hasn't changed&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;envFrom&lt;/code&gt; consuming both ConfigMap and Secret&lt;/li&gt;
&lt;li&gt;Liveness and readiness probes&lt;/li&gt;
&lt;li&gt;Resource requests and limits to prevent any single pod from starving others on the node&lt;/li&gt;
&lt;/ul&gt;
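&lt;p&gt;The zero-downtime rollout strategy, as a fragment (the &lt;code&gt;maxSurge&lt;/code&gt; value is illustrative):&lt;/p&gt;

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0   # never drop below the desired replica count
    maxSurge: 1         # allow one extra pod while new ones become ready
```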

&lt;p&gt;See → &lt;a href="https://github.com/otie16/k8s-homelab-vm-project/blob/master/k8s/backend-deployment.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;k8s/backend-deployment.yaml&lt;/code&gt;&lt;/a&gt; | &lt;a href="https://github.com/otie16/k8s-homelab-vm-project/blob/master/k8s/frontend-deployment.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;k8s/frontend-deployment.yaml&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Deploy Script
&lt;/h3&gt;

&lt;p&gt;Never apply manifests one by one manually. The deploy script handles ordering and waiting automatically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x /home/oty-k8s/k8s/deploy.sh
/home/oty-k8s/k8s/deploy.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It applies resources in the correct dependency order, waits for PostgreSQL readiness before migrations, waits for the migration job to complete before the application starts, and stops immediately on any failure (&lt;code&gt;set -e&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;See → &lt;a href="https://github.com/otie16/k8s-homelab-vm-project/blob/master/k8s/deploy.sh" rel="noopener noreferrer"&gt;&lt;code&gt;k8s/deploy.sh&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 5 — CI Pipelines with Real Security Gates
&lt;/h2&gt;

&lt;p&gt;Every image passes through a security pipeline before reaching Docker Hub. The pipeline architecture:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Push to main
    ↓
Lint + unit tests (flake8 / eslint)
    ↓
SAST: Bandit + pip-audit + Trivy filesystem scan
    ↓
Docker build + Trivy image scan (CRITICAL = fail hard)
    ↓
Push to Docker Hub (only on main, only if all gates pass)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  GitHub Secrets Required
&lt;/h3&gt;

&lt;p&gt;Go to your repo → Settings → Secrets and variables → Actions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="err"&gt;DOCKERHUB_USERNAME&lt;/span&gt;    &lt;span class="err"&gt;your&lt;/span&gt; &lt;span class="err"&gt;Docker&lt;/span&gt; &lt;span class="err"&gt;Hub&lt;/span&gt; &lt;span class="err"&gt;username&lt;/span&gt;
&lt;span class="err"&gt;DOCKERHUB_TOKEN&lt;/span&gt;       &lt;span class="err"&gt;Docker&lt;/span&gt; &lt;span class="err"&gt;Hub&lt;/span&gt; &lt;span class="err"&gt;access&lt;/span&gt; &lt;span class="err"&gt;token&lt;/span&gt; &lt;span class="err"&gt;(not&lt;/span&gt; &lt;span class="err"&gt;your&lt;/span&gt; &lt;span class="err"&gt;password)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What Each Security Tool Does
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Bandit&lt;/strong&gt; scans Python source code for security anti-patterns — hardcoded passwords, &lt;code&gt;subprocess&lt;/code&gt; with &lt;code&gt;shell=True&lt;/code&gt;, SQL string formatting, weak cryptography. Reads your code the way a security reviewer would, without executing it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pip-audit&lt;/strong&gt; cross-references every package in &lt;code&gt;requirements.txt&lt;/code&gt; against the Python Packaging Advisory Database for known CVEs. If your Django version has a known vulnerability, it fails before the image is built.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trivy filesystem scan&lt;/strong&gt; runs against the source directory before the Docker build. Catches secrets accidentally committed, misconfigured files, and dependency vulnerabilities through a different database than pip-audit — the overlap is intentional.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trivy image scan&lt;/strong&gt; runs against the final built image layers. This is the deepest scan — it catches OS-level vulnerabilities that no source-level tool would see. A vulnerable &lt;code&gt;libssl&lt;/code&gt; in the Alpine base image, for example. CRITICAL severity fails the pipeline hard. HIGH severity generates a report but doesn't block.&lt;/p&gt;

&lt;p&gt;The key gate: &lt;strong&gt;nothing reaches Docker Hub unless lint, SAST, and image scanning all pass, and only on pushes to main.&lt;/strong&gt;&lt;/p&gt;
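&lt;p&gt;As a sketch of how that hard gate can be wired in GitHub Actions (step names and action versions are illustrative, not copied from the repo's workflows):&lt;/p&gt;

```yaml
- name: Build image
  run: docker build -t app:ci .

- name: Trivy image scan, CRITICAL blocks the pipeline
  uses: aquasecurity/trivy-action@0.28.0
  with:
    image-ref: app:ci
    severity: CRITICAL
    exit-code: '1'    # any CRITICAL finding fails the job

- name: Trivy report for HIGH, non-blocking
  uses: aquasecurity/trivy-action@0.28.0
  with:
    image-ref: app:ci
    severity: HIGH
    exit-code: '0'    # report only
```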

&lt;p&gt;See the full pipeline files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/otie16/k8s-homelab-vm-project/blob/master/.github/workflows/backend-ci.yml" rel="noopener noreferrer"&gt;&lt;code&gt;backend-ci.yml&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/otie16/k8s-homelab-vm-project/blob/master/.github/workflows/frontend-ci.yml" rel="noopener noreferrer"&gt;&lt;code&gt;frontend-ci.yml&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Part 6 — Deploying to the Cluster
&lt;/h2&gt;

&lt;p&gt;Clone the repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/otie16/k8s-homelab-vm-project.git
&lt;span class="nb"&gt;cd &lt;/span&gt;k8s-homelab-vm-project
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the image names in the deployment manifests to your Docker Hub username, then build and push both images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Backend&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;backend
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; YOUR_USERNAME/k8s-vm-app-backend:latest &lt;span class="nb"&gt;.&lt;/span&gt;
docker push YOUR_USERNAME/k8s-vm-app-backend:latest

&lt;span class="c"&gt;# Frontend&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; ../frontend
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; YOUR_USERNAME/k8s-vm-app-frontend:latest &lt;span class="nb"&gt;.&lt;/span&gt;
docker push YOUR_USERNAME/k8s-vm-app-frontend:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy manifests to the master node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;scp &lt;span class="nt"&gt;-r&lt;/span&gt; k8s/ oty-k8s@192.168.1.100:/home/oty-k8s/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;SSH to the master and run the deploy script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh oty-k8s@192.168.1.100
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x /home/oty-k8s/k8s/deploy.sh
/home/oty-k8s/k8s/deploy.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Watch everything come up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get all &lt;span class="nt"&gt;-n&lt;/span&gt; k8s-vm-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected final state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                  READY   STATUS
pod/django-backend-xxx                1/1     Running
pod/django-backend-yyy                1/1     Running
pod/nextjs-frontend-xxx               1/1     Running
pod/nextjs-frontend-yyy               1/1     Running
pod/postgres-0                        1/1     Running
pod/django-migrate-job-xxx            0/1     Completed

NAME                      TYPE        PORT(S)
service/django-backend    NodePort    8000:30000/TCP
service/nextjs-frontend   NodePort    3000:30001/TCP
service/postgres          ClusterIP   None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access the app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Frontend:    http://192.168.1.100:30001
Backend API: http://192.168.1.100:30000/api/tasks/
Health:      http://192.168.1.100:30000/health/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  The Errors That Cost Me The Most Time
&lt;/h2&gt;

&lt;p&gt;No honest Kubernetes writeup skips the debugging. Here are the ones worth knowing about:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;stat /var/lib/calico/nodename: no such file or directory&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
Pod CIDR overlapping with the host network. Calico can't figure out which interface is for pods vs the host. Fix: use a CIDR that doesn't overlap — &lt;code&gt;10.244.0.0/16&lt;/code&gt; when hosts are on &lt;code&gt;192.168.x.x&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;AppRegistryNotReady: Apps aren't loaded yet&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
Django's WSGI handler was instantiated at module import time, before &lt;code&gt;django.setup()&lt;/code&gt; ran. Fix: use &lt;code&gt;get_wsgi_application()&lt;/code&gt; with an explicit &lt;code&gt;django.setup()&lt;/code&gt;. One line of difference, hours of debugging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;E: Malformed entry 1 in list file /etc/apt/sources.list.d/kubernetes.list&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
Multi-line &lt;code&gt;echo&lt;/code&gt; commands with backslashes write literal newlines into the apt sources file. Fix: always write the &lt;code&gt;deb&lt;/code&gt; entry as a single unbroken line.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;pod has unbound immediate PersistentVolumeClaims&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
No storage provisioner on bare metal. Fix: install &lt;code&gt;local-path-provisioner&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;secret "app-secret" not found&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
Secret name mismatch — created as &lt;code&gt;app-secrets&lt;/code&gt; (with an s) but manifests referenced &lt;code&gt;app-secret&lt;/code&gt;. Fix: audit all references with &lt;code&gt;grep -r "secretRef" k8s/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Calico token expiry — &lt;code&gt;Unauthorized&lt;/code&gt; on pod sandbox creation&lt;/strong&gt;&lt;br&gt;
Calico's CNI kubeconfig uses a projected ServiceAccount token with a 24-hour TTL. When it expires, new pod sandboxes fail. Workaround: delete the calico-node pod on the affected node — the daemonset recreates it with a fresh token.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Refresh the Calico token on the worker&lt;/span&gt;
kubectl delete pod &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="si"&gt;$(&lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;-o&lt;/span&gt; wide | &lt;span class="nb"&gt;grep &lt;/span&gt;calico-node | &lt;span class="nb"&gt;grep &lt;/span&gt;worker | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $1}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;You now have a production-style cluster running a real application with proper security patterns. But two things aren't production-ready yet:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Services exposed on ugly NodePort high ports (&lt;code&gt;30000&lt;/code&gt;, &lt;code&gt;30001&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;No network-level isolation between pods&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Part 2&lt;/strong&gt; fixes both — MetalLB for real LoadBalancer IPs, Nginx Ingress for clean hostname routing on port 80, and NetworkPolicy with real tests to verify traffic isolation works.&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;&lt;a href="https://dev.to/otobong_edoho_7796fec1f41/kubernetes-networking-deep-dive-part-2-metallb-nginx-ingress-and-networkpolicy-1lfo"&gt;Continue to Part 2: MetalLB, Nginx Ingress, and NetworkPolicy&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;kubeadm is the best learning tool for Kubernetes.&lt;/strong&gt; Managed services hide the control plane. kubeadm forces you to understand certificates, etcd, CNI plugins, and component communication at a level that makes you significantly better at operating any Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bare metal is harder and more educational.&lt;/strong&gt; No cloud LoadBalancer. No storage provisioner. No managed node groups. Every abstraction you take for granted in EKS has to be built manually — and every time you build it manually, you understand it better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The debugging process is the education.&lt;/strong&gt; Every error in this post was a lesson. The &lt;code&gt;AppRegistryNotReady&lt;/code&gt; error taught me how Django's WSGI initialisation works at a depth I never would have reached following a happy-path tutorial.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Version skew matters.&lt;/strong&gt; My master runs v1.28.15 and my worker joined at v1.29.15 — one minor version difference. Kubernetes tolerates this, but in production you manage it carefully.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Source code:&lt;/strong&gt; &lt;a href="https://github.com/otie16/k8s-homelab-vm-project.git" rel="noopener noreferrer"&gt;github.com/otie16/k8s-homelab-vm-project&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Follow for Part 2 — MetalLB, Nginx Ingress, and NetworkPolicy.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tags:&lt;/strong&gt; &lt;code&gt;Kubernetes&lt;/code&gt; &lt;code&gt;DevOps&lt;/code&gt; &lt;code&gt;Platform Engineering&lt;/code&gt; &lt;code&gt;kubeadm&lt;/code&gt; &lt;code&gt;Homelab&lt;/code&gt; &lt;code&gt;Cloud Native&lt;/code&gt; &lt;code&gt;Docker&lt;/code&gt; &lt;code&gt;Django&lt;/code&gt; &lt;code&gt;NextJS&lt;/code&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>infrastructure</category>
      <category>kubernetes</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Using Dockerfile and Docker Compose For Local Development with Node.js, MongoDB and MongoExpress</title>
      <dc:creator>Otobong Edoho</dc:creator>
      <pubDate>Tue, 15 Oct 2024 11:03:43 +0000</pubDate>
      <link>https://forem.com/otobong_edoho_7796fec1f41/using-dockerfile-and-docker-compose-for-local-development-with-nodejs-mongodb-and-mongoexpress-2ajg</link>
      <guid>https://forem.com/otobong_edoho_7796fec1f41/using-dockerfile-and-docker-compose-for-local-development-with-nodejs-mongodb-and-mongoexpress-2ajg</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this post, I'll show you how to set up a local development environment using Docker with Node.js, MongoDB, and MongoExpress. Docker is a powerful tool that makes it easy to package applications and their dependencies, ensuring consistency across different environments.&lt;/p&gt;

&lt;p&gt;The goal of this guide is to help you spin up a simple Node.js app connected to a MongoDB database. We'll also use MongoExpress as a lightweight web-based interface to manage the database, all running inside Docker containers. By the end of this post, you’ll have a fully functional environment that can be set up and torn down with just a few commands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
Before we dive in, please make sure you have the following installed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker: you can download and install it from the official Docker website.&lt;/li&gt;
&lt;li&gt;A basic understanding of Node.js and MongoDB.&lt;/li&gt;
&lt;li&gt;An EC2 cloud instance (for non-Ubuntu users).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re new to Docker, there's no need to worry! This guide will walk you through the essential commands you need to know to get your environment up and running.&lt;/p&gt;
&lt;h2&gt;
  
  
  Setting up the Project
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Setup for Docker&lt;/strong&gt;&lt;br&gt;
We'll start by setting up the project. The first thing we need to do is pull the MongoDB image and the Mongo Express UI image from Docker Hub.&lt;/p&gt;

&lt;p&gt;Let's install Docker. First, update the package lists:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt upgrade &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install Required Dependencies&lt;br&gt;
Install packages that allow apt to use repositories over HTTPS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; apt-transport-https ca-certificates curl software-properties-common
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add Docker’s Official GPG Key&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://download.docker.com/linux/ubuntu/gpg | &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /usr/share/keyrings/docker-archive-keyring.gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set Up the Docker Repository&lt;br&gt;
Add the Docker repository to apt sources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"deb [arch=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;dpkg &lt;span class="nt"&gt;--print-architecture&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;lsb_release &lt;span class="nt"&gt;-cs&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; stable"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/docker.list &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install Docker Engine&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Refresh the package index so apt can see the new repository&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
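&lt;p&gt;To confirm the installation worked, a typical quick check (not part of the original steps; &lt;code&gt;hello-world&lt;/code&gt; is Docker's standard smoke-test image):&lt;/p&gt;

```shell
# Print the installed client version
docker --version

# Run a throwaway container to confirm the daemon works end to end
sudo docker run --rm hello-world
```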



&lt;p&gt;Now that Docker is installed, it's time to pull the images.&lt;br&gt;
Let's pull the MongoDB image first.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpbt0fhreprpedi3lyvm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpbt0fhreprpedi3lyvm.png" alt="mongodb image" width="800" height="139"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;The &lt;code&gt;docker run&lt;/code&gt; command does two things: it pulls the image (if it isn't already present locally) and then runs it. A container is a running instance of an image.&lt;/p&gt;

&lt;p&gt;Docker containers run in isolated networks. If we are running two different containers and want them to communicate, we must put them in the same network.&lt;/p&gt;
&lt;h3&gt;
  
  
  Explanation of the Screenshot:
&lt;/h3&gt;
&lt;h4&gt;
  
  
  &lt;code&gt;docker run&lt;/code&gt;:
&lt;/h4&gt;

&lt;p&gt;This command starts a new container from a Docker image.&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;code&gt;-d&lt;/code&gt;:
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;-d&lt;/code&gt; flag tells Docker to run the container in "detached" mode (in the background), so it won't block the terminal.&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;code&gt;-p 27017:27017&lt;/code&gt;:
&lt;/h4&gt;

&lt;p&gt;This option maps port &lt;code&gt;27017&lt;/code&gt; on the host machine to port &lt;code&gt;27017&lt;/code&gt; in the container. MongoDB uses this port for communication.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The syntax is &lt;code&gt;host_port:container_port&lt;/code&gt;, which means MongoDB will be accessible via &lt;code&gt;localhost:27017&lt;/code&gt; on the host machine.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  &lt;code&gt;--network mongo-network&lt;/code&gt;:
&lt;/h4&gt;

&lt;p&gt;This option connects the container to a Docker network named &lt;code&gt;mongo-network&lt;/code&gt;. The network allows multiple containers to communicate with each other. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the network doesn't exist, create it with &lt;code&gt;docker network create mongo-network&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  &lt;code&gt;--name mongodb&lt;/code&gt;:
&lt;/h4&gt;

&lt;p&gt;This assigns a name (&lt;code&gt;mongodb&lt;/code&gt;) to the running container. It allows you to refer to the container by name rather than by its container ID.&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;code&gt;-e MONGO_INITDB_ROOT_USERNAME=admin&lt;/code&gt;:
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;-e&lt;/code&gt; flag sets environment variables inside the container. In this case, it sets &lt;code&gt;MONGO_INITDB_ROOT_USERNAME&lt;/code&gt; to &lt;code&gt;admin&lt;/code&gt;, which specifies the MongoDB root user's username.&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;code&gt;-e MONGO_INITDB_ROOT_PASSWORD=changethis123&lt;/code&gt;:
&lt;/h4&gt;

&lt;p&gt;Similar to the previous option, this sets the environment variable &lt;code&gt;MONGO_INITDB_ROOT_PASSWORD&lt;/code&gt; to &lt;code&gt;changethis123&lt;/code&gt;, defining the password for the MongoDB root user.&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;code&gt;mongo&lt;/code&gt;:
&lt;/h4&gt;

&lt;p&gt;This is the name of the image to use. In this case, it is the official MongoDB image from Docker Hub.&lt;/p&gt;
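&lt;p&gt;Putting the flags above together, the full command from the screenshot looks like this (the values are taken from the explanations above):&lt;/p&gt;

```shell
# Create the shared network first if it doesn't exist yet
docker network create mongo-network || true

# Start MongoDB in the background with root credentials set
docker run -d \
  -p 27017:27017 \
  --network mongo-network \
  --name mongodb \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=changethis123 \
  mongo
```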

&lt;p&gt;Now that we have some knowledge about Docker, let's pull the Mongo Express image and run it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hyk9zg5mzg0gcrhfhob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hyk9zg5mzg0gcrhfhob.png" alt="mongo express image" width="800" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To list the available networks, we can run &lt;br&gt;
&lt;code&gt;docker network ls&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The output includes the network we just created. Because both containers are attached to it, they can reach each other using just their container names.&lt;/p&gt;

&lt;p&gt;We can now access the Mongo Express server from our browser at &lt;code&gt;localhost:8081&lt;/code&gt;.&lt;/p&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgm423x2r4hj9ulzdp7cg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgm423x2r4hj9ulzdp7cg.png" alt="mongoexpress" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;There is a quicker and more reproducible setup: using a Dockerfile and Docker Compose rather than typing the commands into the terminal. To be clear about the distinction: a Docker Compose file uses YAML syntax and defines how to configure and run multi-container applications on Docker, while a Dockerfile is a text file containing the instructions for building a container image.&lt;/p&gt;
&lt;h3&gt;
  
  
  Setting up the Project
&lt;/h3&gt;

&lt;p&gt;First, let's create a simple Node.js application. If you don't already have Node.js installed, you can download it &lt;a href="https://nodejs.org/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start by creating a project folder:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;mkdir &lt;/span&gt;docker-node-mongo
   &lt;span class="nb"&gt;cd &lt;/span&gt;docker-node-mongo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Initialize a new Node.js project:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npm init &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Install the necessary dependencies. For this setup, we’ll need &lt;strong&gt;Express&lt;/strong&gt; for our web server and &lt;strong&gt;Mongoose&lt;/strong&gt; to interact with MongoDB:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npm &lt;span class="nb"&gt;install &lt;/span&gt;express mongoose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Create an &lt;code&gt;index.js&lt;/code&gt; file with a simple Express server and a MongoDB connection using Mongoose:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mongoose&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mongoose&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="c1"&gt;// MongoDB connection&lt;/span&gt;
   &lt;span class="nx"&gt;mongoose&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mongodb://mongo:27017/testdb&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="na"&gt;useNewUrlParser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="na"&gt;useUnifiedTopology&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Connected to MongoDB&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Failed to connect to MongoDB&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;

   &lt;span class="c1"&gt;// Routes&lt;/span&gt;
   &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Hello from Node.js and MongoDB&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;

   &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`App running at http://localhost:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This sets up a basic Express server and connects it to a MongoDB instance running on &lt;code&gt;mongodb://mongo:27017/testdb&lt;/code&gt;. Now let's Dockerize it.&lt;/p&gt;


&lt;h3&gt;
  
  
  Creating a Dockerfile for Node.js
&lt;/h3&gt;

&lt;p&gt;Next, we need to create a &lt;code&gt;Dockerfile&lt;/code&gt; that will define the environment for our Node.js app. A &lt;code&gt;Dockerfile&lt;/code&gt; is essentially a blueprint for building the Docker image that will contain your application.&lt;/p&gt;

&lt;p&gt;Create a file called &lt;code&gt;Dockerfile&lt;/code&gt; in the root of your project directory and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Use the official Node.js image from Docker Hub&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:16&lt;/span&gt;

&lt;span class="c"&gt;# Set the working directory inside the container&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# Copy package.json and install dependencies&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="c"&gt;# Copy the rest of the app files&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;

&lt;span class="c"&gt;# Expose the port the app runs on&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 3000&lt;/span&gt;

&lt;span class="c"&gt;# Command to run the app&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["node", "index.js"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;code&gt;Dockerfile&lt;/code&gt; will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use the official Node.js image.&lt;/li&gt;
&lt;li&gt;Set the working directory to &lt;code&gt;/app&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Copy the &lt;code&gt;package.json&lt;/code&gt; and install the necessary dependencies.&lt;/li&gt;
&lt;li&gt;Copy the rest of the files and set the entry point to run the Node.js app.&lt;/li&gt;
&lt;/ol&gt;
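&lt;p&gt;Because &lt;code&gt;COPY . .&lt;/code&gt; copies everything in the project directory, it is common (though optional) to add a &lt;code&gt;.dockerignore&lt;/code&gt; file so host artifacts such as &lt;code&gt;node_modules&lt;/code&gt; don't end up in the image:&lt;/p&gt;

```
node_modules
npm-debug.log
.git
```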




&lt;h3&gt;
  
  
  Setting up MongoDB and MongoExpress with Docker Compose
&lt;/h3&gt;

&lt;p&gt;Instead of running all services separately, we'll use &lt;strong&gt;Docker Compose&lt;/strong&gt; to define and manage our multi-container environment. Docker Compose lets us define services, networks, and volumes in a &lt;code&gt;docker-compose.yml&lt;/code&gt; file, making it easy to orchestrate the entire stack. One thing to remember: with Docker Compose we don't need to create a network ourselves; it automatically creates one for the containers defined in the YAML file.&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;docker-compose.yml&lt;/code&gt; file in the project root:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;nodeapp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3000:3000'&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.:/app&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mongo&lt;/span&gt;
  &lt;span class="na"&gt;mongo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongo&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;27017:27017'&lt;/span&gt;
  &lt;span class="na"&gt;mongo-express&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongo-express&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;8081:8081'&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;ME_CONFIG_MONGODB_ADMINUSERNAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root&lt;/span&gt;
      &lt;span class="na"&gt;ME_CONFIG_MONGODB_ADMINPASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example&lt;/span&gt;
      &lt;span class="na"&gt;ME_CONFIG_MONGODB_SERVER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongo&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration defines three services:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;nodeapp&lt;/strong&gt;: Our Node.js application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mongo&lt;/strong&gt;: A MongoDB instance running on port &lt;code&gt;27017&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mongo-express&lt;/strong&gt;: A web-based interface to manage MongoDB, accessible on port &lt;code&gt;8081&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  Running the Application
&lt;/h3&gt;

&lt;p&gt;With everything set up, let’s run the app using Docker Compose.&lt;/p&gt;

&lt;p&gt;Run the following command in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To stop and remove the containers that Docker Compose started, run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docker Compose will pull the necessary images, build the Node.js app, and start all services. After the process completes, you should see logs from MongoDB, Node.js, and MongoExpress.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visit &lt;code&gt;http://localhost:3000&lt;/code&gt; to see the Node.js app running.&lt;/li&gt;
&lt;li&gt;Visit &lt;code&gt;http://localhost:8081&lt;/code&gt; to access MongoExpress and manage your database.&lt;/li&gt;
&lt;/ul&gt;
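&lt;p&gt;Once the stack is up, both endpoints can also be checked from the terminal:&lt;/p&gt;

```shell
# Hit the Node.js app's root route
curl http://localhost:3000

# List the running Compose services and their port mappings
docker-compose ps
```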




&lt;h3&gt;
  
  
  Connecting Node.js to MongoDB
&lt;/h3&gt;

&lt;p&gt;Our Node.js app is already set up to connect to MongoDB with the following connection string inside &lt;code&gt;index.js&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;mongoose&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mongodb://mongo:27017/testdb&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;useNewUrlParser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;useUnifiedTopology&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;mongo&lt;/code&gt; hostname refers to the MongoDB service defined in our &lt;code&gt;docker-compose.yml&lt;/code&gt;. Docker Compose automatically creates a network for the services, allowing them to communicate by their service names.&lt;/p&gt;
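&lt;p&gt;You can see this name resolution in action by resolving the &lt;code&gt;mongo&lt;/code&gt; hostname from inside the app container; a quick sketch (assuming &lt;code&gt;getent&lt;/code&gt; is available in the base image, as it is in the Debian-based Node images):&lt;/p&gt;

```shell
# Resolve the "mongo" service name from inside the nodeapp container
docker-compose exec nodeapp getent hosts mongo
```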




&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In this post, we’ve successfully set up a local development environment using Docker for &lt;strong&gt;Node.js&lt;/strong&gt;, &lt;strong&gt;MongoDB&lt;/strong&gt;, and &lt;strong&gt;MongoExpress&lt;/strong&gt;. Using Docker Compose, we orchestrated multiple containers to work together seamlessly, making it easier to spin up a fully functional stack for development.&lt;/p&gt;

&lt;p&gt;With this setup, you can easily add more services, manage your databases with MongoExpress, and have an isolated environment without needing to install MongoDB or other dependencies locally.&lt;/p&gt;

&lt;p&gt;Happy Reading!!! Please Like, save, share and follow!!!&lt;/p&gt;




</description>
      <category>docker</category>
      <category>mongodb</category>
      <category>containers</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to Publish a Java Artifact Built with Gradle to a Nexus Repository part 1</title>
      <dc:creator>Otobong Edoho</dc:creator>
      <pubDate>Wed, 28 Aug 2024 12:41:29 +0000</pubDate>
      <link>https://forem.com/otobong_edoho_7796fec1f41/how-to-publish-a-java-artifact-built-with-gradle-to-a-nexus-repository-part-1-17p6</link>
      <guid>https://forem.com/otobong_edoho_7796fec1f41/how-to-publish-a-java-artifact-built-with-gradle-to-a-nexus-repository-part-1-17p6</guid>
      <description>&lt;p&gt;&lt;strong&gt;What is a Nexus repository?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nexus Repository is a repository manager from Sonatype that organizes, stores, and helps distribute the artifacts used in software development.&lt;/p&gt;

&lt;p&gt;It has built-in support for many artifact formats, such as Docker, Java, Go, PHP, and Python.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An artifact, simply put, is the output of the software build and packaging process.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Importance of Publishing Java Artifacts to a Central Repository&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Publishing Java artifacts to a central repository, like Nexus, is an important practice in modern software development for several reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Team Collaboration: In a team environment, a central repository allows developers to share libraries, frameworks, or any reusable code easily.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integration with CI/CD Pipelines: Publishing to a central repository is often a key step in CI/CD pipelines. Artifacts can be automatically deployed to a repository after a successful build, making them immediately available for testing or production deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Version Control: Central repositories allow developers to manage multiple versions of an artifact. This enables the use of specific versions, ensuring compatibility and stability across different project development.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Java Development Kit 17 installed.&lt;/li&gt;
&lt;li&gt;Gradle installed and configured.&lt;/li&gt;
&lt;li&gt;Access to a Nexus Repository (public or private).&lt;/li&gt;
&lt;li&gt;Basic understanding of Gradle build scripts (build.gradle).&lt;/li&gt;
&lt;li&gt;An EC2 Instance or any other Cloud Service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Install Nexus&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before we install Nexus, we will set up an EC2 instance where we can install and run it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx46mg954eeyc648601a5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx46mg954eeyc648601a5.png" alt="This is an EC2 instance"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To install Java on the instance (run as root, or prefix with &lt;code&gt;sudo&lt;/code&gt;):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apt update&lt;/code&gt; followed by &lt;code&gt;apt install -y openjdk-17-jre-headless&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;When that is done, download Nexus with the following commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cd /opt
wget https://download.sonatype.com/nexus/3/nexus-3.71.0-06-unix.tar.gz
ls


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It comes as a tarball, so we need to extract it:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tar -xvzf nexus-3.71.0-06-unix.tar.gz&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;ls&lt;/code&gt; and you should see this&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk46jf8bnsg16xdwgoy8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk46jf8bnsg16xdwgoy8b.png" alt="terminal"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;ls -l&lt;/code&gt; to see the files and their owners. Right now root owns everything, so we need to hand ownership to a dedicated nexus user, which we will create next. It is best practice to create a separate user for any service we run.&lt;/p&gt;

&lt;p&gt;Let's create a nexus user and add it to the nexus group:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

adduser nexus
usermod -aG nexus nexus


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;while still in the /opt directory run this command&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

chown -R nexus:nexus nexus-3.71.0-06
chown -R nexus:nexus sonatype-work


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftaj5kwey339s8s8xzly7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftaj5kwey339s8s8xzly7.png" alt="prompt terminal"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set the Nexus config so it runs as the nexus user:&lt;br&gt;
&lt;code&gt;vim nexus-3.71.0-06/bin/nexus.rc&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This opens the file in vim. Uncomment and set &lt;code&gt;run_as_user="nexus"&lt;/code&gt;, then save the file.&lt;/p&gt;
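&lt;p&gt;After the edit, the relevant line in &lt;code&gt;nexus.rc&lt;/code&gt; should read:&lt;/p&gt;

```shell
run_as_user="nexus"
```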

&lt;p&gt;Now switch to the nexus user and run the nexus service&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

su - nexus
/opt/nexus-3.71.0-06/bin/nexus start


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Use &lt;code&gt;netstat&lt;/code&gt; to check that the service is up and see which port it is listening on.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqhfqsjxipv80fvp9fs0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqhfqsjxipv80fvp9fs0.png" alt="terminal netstat"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that Nexus is running on port 8081. If the port is blocked on your EC2 instance, configure the security group to allow inbound traffic on that port.&lt;/p&gt;

&lt;p&gt;Next, combine the public IP address of your instance with the Nexus port, like this:&lt;br&gt;
&lt;code&gt;18.232.173.21:8081&lt;/code&gt;&lt;br&gt;
and open it in your browser to access the Nexus interface.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj65as6zlgedj0z2ijxy6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj65as6zlgedj0z2ijxy6.png" alt="nexus interface"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To get the default password for the admin interface, change into the Nexus data directory and read the password file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cd /opt/sonatype-work/nexus-3.71.0-06/
cat admin.password


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Use that password to log in; the username is &lt;code&gt;admin&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;After logging in, the admin interface should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjn0j2s859drgpa5562zc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjn0j2s859drgpa5562zc.png" alt="repository nexus"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;maven-releases repository is where we store artifacts that have been tested and are ready to be deployed to production.&lt;/li&gt;
&lt;li&gt;maven-snapshots repository is where we store artifacts that are still in the development and test phase.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Suppose a developer in our company wants to publish an artifact to Nexus. We can't let them use the admin account; we must create an account for that user.&lt;/p&gt;

&lt;p&gt;Navigate to the &lt;code&gt;Users&lt;/code&gt; tab and &lt;code&gt;create local user&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fachr87obf9zecnwrueqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fachr87obf9zecnwrueqn.png" alt="Nexus users tab"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbnth0m28qwslktw6voo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbnth0m28qwslktw6voo.png" alt="Nexus user form"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to the Roles tab and create a role for the user we just created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1b5a4grud2ocxi3sqb5m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1b5a4grud2ocxi3sqb5m.png" alt="User role"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Best practice: we don't want our user to have too many privileges, so we assign only the roles the user needs to carry out the task (the principle of least privilege).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8v3aiert3xazaryontl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8v3aiert3xazaryontl.png" alt="role form"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are building with Gradle in this case, but because Gradle publishes to Maven-format repositories, the Maven roles apply unchanged, so we set the following role&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31izgjpnzjfocqjixj2x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31izgjpnzjfocqjixj2x.png" alt="set role"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we assign the role we just created to the user&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmogxfbyo2bcyqus84lju.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmogxfbyo2bcyqus84lju.png" alt="assign role"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configure Gradle with Nexus&lt;/strong&gt;&lt;br&gt;
Add the following configuration and the &lt;code&gt;maven-publish&lt;/code&gt; plugin to the &lt;code&gt;build.gradle&lt;/code&gt; file&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

group = 'com.example'
version = '1.0.0-SNAPSHOT'
sourceCompatibility = '17'
targetCompatibility = '17'

apply plugin: 'maven-publish'


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This code block declares the artifact that we have built as the publication to upload&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

publishing {
//  The Artifacts we are going to upload
    publications {
        maven(MavenPublication) {
            artifact("build/libs/java-react-example-${version}.jar") {
                extension 'jar'
            }
        }
    }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This code block defines the repository address that we publish to.&lt;br&gt;
&lt;code&gt;allowInsecureProtocol = true&lt;/code&gt; allows publishing to proceed even though we are using plain &lt;code&gt;http&lt;/code&gt; rather than &lt;code&gt;https&lt;/code&gt;.&lt;br&gt;
The &lt;code&gt;credentials&lt;/code&gt; block references the user account that we created on our Nexus repository, which gives our user access to publish to it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

// The Nexus repo that we will upload the Jar file to
    repositories {
        maven {
            name 'nexus'
            url "http://[Your Public IP]:8081/repository/maven-snapshots/"
//          Allow plain HTTP instead of HTTPS
            allowInsecureProtocol = true
            credentials {
                username project.repoUser
                password project.repoPassword
            }
        }
    }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's create a &lt;code&gt;gradle.properties&lt;/code&gt; file in our project directory to store the user's credentials&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

repoUser = oty
repoPassword = xxxxxxxxxxx


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
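
&lt;p&gt;Since &lt;code&gt;gradle.properties&lt;/code&gt; now holds real credentials, it should never be committed; a quick way to keep it out of version control (run from the project directory, shown here in a throwaway directory) is:&lt;/p&gt;

```shell
# Add gradle.properties to .gitignore so credentials are never committed
cd "$(mktemp -d)"                     # stand-in for the project directory
echo "gradle.properties" >> .gitignore
cat .gitignore
```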

&lt;p&gt;Create a &lt;code&gt;settings.gradle&lt;/code&gt; file if it doesn't exist and add the project name&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

rootProject.name = 'java-react-example'


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4g6bzhddl2df1hm2ddad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4g6bzhddl2df1hm2ddad.png" alt="gradle settings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you haven't already built the project, run the build and then publish&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

gradle build
gradle publish


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsh17r48jxarz3t6ejuur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsh17r48jxarz3t6ejuur.png" alt="gradle build"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9pk9uvskq8s34rvud0b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9pk9uvskq8s34rvud0b.png" alt="build jar"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww4apha7mn1txxsfuumb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww4apha7mn1txxsfuumb.png" alt="gradle publish"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can go to our Nexus repository and view our published artifact&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43p6eqgpczbbdm1163ei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43p6eqgpczbbdm1163ei.png" alt="repo publish"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy Reading!!!&lt;br&gt;
Please like and follow if you enjoyed the article.&lt;br&gt;
Thank you.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>operations</category>
      <category>development</category>
      <category>devsec</category>
    </item>
    <item>
      <title>Git Commit Hacks Every Developer Should Know</title>
      <dc:creator>Otobong Edoho</dc:creator>
      <pubDate>Tue, 20 Aug 2024 11:23:52 +0000</pubDate>
      <link>https://forem.com/otobong_edoho_7796fec1f41/git-commit-hacks-every-developer-should-know-249i</link>
      <guid>https://forem.com/otobong_edoho_7796fec1f41/git-commit-hacks-every-developer-should-know-249i</guid>
      <description>&lt;p&gt;&lt;strong&gt;1. Mastering Git Commit: The Foundation of Version Control&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The git commit command is the bedrock of Git. Each commit represents a snapshot of your project at a particular point in time. Here’s how to make the most of it:&lt;/p&gt;

&lt;p&gt;Atomic Commits&lt;br&gt;
Atomic commits are all about keeping each commit focused and self-contained. When you make small, focused commits, it becomes easier to track down the source of bugs and understand the evolution of the project. Each commit should ideally address one specific issue or feature.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git commit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
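
&lt;p&gt;As a runnable sketch (a throwaway repository with illustrative file names and messages), two atomic commits look like this:&lt;/p&gt;

```shell
set -e
cd "$(mktemp -d)"                 # throwaway repository
git init -q
git config user.email demo@example.com
git config user.name Demo

echo "fix parser" > parser.txt
git add parser.txt
git commit -q -m "fix: handle empty input in parser"   # one focused change

echo "project docs" > README.md
git add README.md
git commit -q -m "docs: describe parser behaviour"     # a separate concern

git log --oneline                 # two small, self-contained commits
```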



&lt;p&gt;&lt;strong&gt;2. Undoing Mistakes: Git Revert vs. Git Reset&lt;/strong&gt;&lt;br&gt;
Mistakes happen. When they do, Git provides powerful tools to help you undo them.&lt;/p&gt;

&lt;p&gt;Safe Reversal with git revert&lt;br&gt;
If you need to undo a commit without altering the commit history, git revert is your best friend. Unlike git reset, which changes your history, git revert creates a new commit that undoes the changes introduced by a previous commit.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git revert &amp;lt;id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is especially useful in shared repositories where altering history can cause issues for others.&lt;/p&gt;

&lt;p&gt;Hard Reset with &lt;code&gt;git reset --hard &amp;lt;id&amp;gt;&lt;/code&gt;&lt;br&gt;
On the other hand, &lt;code&gt;git reset --hard&lt;/code&gt; is a more drastic measure. It resets your current branch to the specified commit, discarding all changes in the working directory and the index. Use it with care: the discarded changes are not easy to recover.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git reset --hard &amp;lt;id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
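
&lt;p&gt;The difference is easy to see in a throwaway repository (file names and messages are illustrative):&lt;/p&gt;

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo one > f.txt; git add f.txt; git commit -q -m "first"
echo two > f.txt; git commit -a -q -m "second (bad change)"

# Safe undo: revert adds a NEW commit that reverses "second"
git revert --no-edit HEAD
cat f.txt                         # back to "one"; all three commits remain

# Destructive undo: reset moves the branch pointer and discards history
git reset --hard HEAD~2
cat f.txt                         # "one" again, but only "first" remains
```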



&lt;p&gt;&lt;strong&gt;3. Navigating History: Leveraging Git Checkout&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;git checkout &amp;lt;id&amp;gt;&lt;/code&gt; is a versatile command that allows you to switch between branches or revisit specific commits.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout &amp;lt;id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Exploring Old Versions with &lt;code&gt;git checkout &amp;lt;id&amp;gt;&lt;/code&gt;&lt;br&gt;
Sometimes, you need to look at an older version of your code to understand how a feature was implemented or to test an earlier state of the project. You can temporarily switch to an older commit using &lt;code&gt;git checkout &amp;lt;id&amp;gt;&lt;/code&gt;, which puts you in a detached HEAD state; check out your branch again to return to the latest commit.&lt;/p&gt;
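
&lt;p&gt;A sketch of visiting an old commit and coming back (throwaway repository, illustrative names):&lt;/p&gt;

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo v1 > app.txt; git add app.txt; git commit -q -m "v1"
echo v2 > app.txt; git commit -a -q -m "v2"

branch=$(git symbolic-ref --short HEAD)   # remember the current branch
old=$(git rev-parse HEAD~1)               # id of the older commit

git checkout -q "$old"            # detached HEAD: inspect the old state
cat app.txt                       # shows "v1"

git checkout -q "$branch"         # return to the branch tip
cat app.txt                       # shows "v2" again
```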

&lt;p&gt;&lt;strong&gt;4. Understanding Git Log: Navigating Your Project’s History&lt;/strong&gt;&lt;br&gt;
The git log command is an essential tool for any developer working with Git. It allows you to view the history of commits in a repository, providing a detailed log of all the changes made over time. Here's how you can make the most out of &lt;code&gt;git log&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
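
&lt;p&gt;A few commonly used &lt;code&gt;git log&lt;/code&gt; variations, shown in a throwaway repository:&lt;/p&gt;

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo a > a.txt; git add a.txt; git commit -q -m "add a"
echo b > b.txt; git add b.txt; git commit -q -m "add b"

git log --oneline                 # compact, one line per commit
git log --graph --decorate        # ASCII graph with branch/tag labels
git log -p -1                     # diff introduced by the most recent commit
git log --author="Demo"           # filter commits by author
```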



&lt;p&gt;By understanding these commands and when to use them, you can wield Git more effectively and maintain a clean, understandable project history.&lt;/p&gt;

&lt;p&gt;Happy Reading!!!!!!!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>webdev</category>
      <category>git</category>
      <category>github</category>
    </item>
    <item>
      <title>Setting Up Postgresql Database for Servers</title>
      <dc:creator>Otobong Edoho</dc:creator>
      <pubDate>Thu, 25 Jul 2024 21:23:53 +0000</pubDate>
      <link>https://forem.com/otobong_edoho_7796fec1f41/setting-up-postgresql-database-for-servers-3ao1</link>
      <guid>https://forem.com/otobong_edoho_7796fec1f41/setting-up-postgresql-database-for-servers-3ao1</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8n6rpxysxy2wn0dhiht.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8n6rpxysxy2wn0dhiht.png" alt="posgresql image" width="800" height="888"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;PostgreSQL is an advanced object-relational database system that uses and also extends the SQL language. A further good thing about PostgreSQL is that it is open source.&lt;/p&gt;

&lt;p&gt;Developers and database administrators alike use PostgreSQL because of its strong data consistency and data integrity, which make it more reliable than many other SQL databases.&lt;/p&gt;

&lt;p&gt;This is a stage 4 task that I worked on as an HNG11 intern. It was a team-based task; the team consisted of six members.&lt;/p&gt;

&lt;p&gt;This post will cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Installation of postgreSQL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Creation of the Database and user&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configuring PostgreSQL&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Required tools and software: PostgreSQL and an SSH client&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Basic knowledge: familiarity with terminal commands and server access&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Install PostgreSQL along with its additional features (the &lt;code&gt;contrib&lt;/code&gt; package)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install postgresql postgresql-contrib
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Switch to the PostgreSQL user to perform the needed administrative tasks&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo -i -u postgres

# open the psql prompt
psql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Within the PostgreSQL prompt, create the required databases for the production, staging and development environments&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE DATABASE langlearnai_be_staging_db;
CREATE DATABASE langlearnai_be_main_db;
CREATE DATABASE langlearnai_be_dev_db;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create users and assign passwords for each environment&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE USER langlearnai_be_staging_user WITH ENCRYPTED PASSWORD 'staging_password';
CREATE USER langlearnai_be_main_user WITH ENCRYPTED PASSWORD 'main_password';
CREATE USER langlearnai_be_dev_user WITH ENCRYPTED PASSWORD 'dev_password';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Grant the necessary Privileges to Users&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GRANT ALL PRIVILEGES ON DATABASE langlearnai_be_staging_db TO langlearnai_be_staging_user;
GRANT ALL PRIVILEGES ON DATABASE langlearnai_be_main_db TO langlearnai_be_main_user;
GRANT ALL PRIVILEGES ON DATABASE langlearnai_be_dev_db TO langlearnai_be_dev_user;
-- Schema privileges apply per database: connect to each database (\c dbname)
-- before running its schema grant below.
GRANT ALL PRIVILEGES ON SCHEMA public TO langlearnai_be_staging_user;
GRANT ALL PRIVILEGES ON SCHEMA public TO langlearnai_be_main_user;
GRANT ALL PRIVILEGES ON SCHEMA public TO langlearnai_be_dev_user;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Exit the PostgreSQL prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;\q
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Modify the PostgreSQL configuration file so that PostgreSQL listens on external IP addresses&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vim /etc/postgresql/13/main/postgresql.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Locate the &lt;code&gt;listen_addresses&lt;/code&gt; line and change it to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;listen_addresses = "*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open the pg_hba.conf file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vim /etc/postgresql/13/main/pg_hba.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following lines under the IPv4 local connections section to allow external access; they configure PostgreSQL to accept connections from the specified addresses. Note that &lt;code&gt;0.0.0.0/0&lt;/code&gt; accepts connections from anywhere, so restrict it on production systems&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# IPv4 local connections
host    all             all             0.0.0.0/0               md5
host    postgres        postgres        &amp;lt;server-ip-address&amp;gt;/32      md5
host    langlearnai_be_dev_db  langlearnai_be_dev_user  &amp;lt;server-ip-address&amp;gt;/32    md5
host    langlearnai_be_main_db  langlearnai_be_main_user  &amp;lt;server-ip-address&amp;gt;/32    md5
host    langlearnai_be_staging_db  langlearnai_be_staging_user  &amp;lt;server-ip-address&amp;gt;/32    md5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
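
&lt;p&gt;If you know where connections will come from, a tighter alternative to &lt;code&gt;0.0.0.0/0&lt;/code&gt; is to allow only that network; the subnet below is a placeholder:&lt;/p&gt;

```
# pg_hba.conf: allow one application subnet instead of the whole internet
host    langlearnai_be_main_db    langlearnai_be_main_user    10.0.1.0/24    md5
```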



&lt;p&gt;Restart to apply changes and enable the PostgreSQL service to start on system boot&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart postgresql
sudo systemctl enable postgresql 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Allow connections on PostgreSQL's port (5432) through the firewall&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ufw allow 5432/tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Accessing the Database
&lt;/h2&gt;

&lt;p&gt;To connect to the PostgreSQL database remotely, use the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql -h your_server_ip -U your_database_username -d your_database_name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
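
&lt;p&gt;To avoid typing the password on every connection, libpq reads a &lt;code&gt;~/.pgpass&lt;/code&gt; file with one &lt;code&gt;hostname:port:database:username:password&lt;/code&gt; entry per line (the file must have &lt;code&gt;0600&lt;/code&gt; permissions; the values below are placeholders):&lt;/p&gt;

```
# ~/.pgpass
your_server_ip:5432:langlearnai_be_dev_db:langlearnai_be_dev_user:dev_password
```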



&lt;p&gt;Happy Reading and Learning&lt;/p&gt;

</description>
      <category>devops</category>
      <category>webdev</category>
      <category>backend</category>
      <category>database</category>
    </item>
  </channel>
</rss>
