<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Arata</title>
    <description>The latest articles on Forem by Arata (@aratax).</description>
    <link>https://forem.com/aratax</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3409523%2Fe3cd41ec-b172-43b7-b9f9-ffc2a5165393.png</url>
      <title>Forem: Arata</title>
      <link>https://forem.com/aratax</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/aratax"/>
    <language>en</language>
    <item>
      <title>Building a Multi-Node Kubernetes Cluster with Vagrant</title>
      <dc:creator>Arata</dc:creator>
      <pubDate>Mon, 10 Nov 2025 07:49:56 +0000</pubDate>
      <link>https://forem.com/aratax/building-a-multi-node-kubernetes-cluster-with-vagrant-o0h</link>
      <guid>https://forem.com/aratax/building-a-multi-node-kubernetes-cluster-with-vagrant-o0h</guid>
      <description>&lt;p&gt;"In distributed systems, consistency isn’t just a property — it’s a promise."&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Article
&lt;/h2&gt;

&lt;p&gt;Imagine you’re building a small banking application. Users can deposit and withdraw money, check their balances, and expect data accuracy every single time — even if multiple requests hit the system simultaneously. But the moment you deploy it across containers, networks, and replicas, one question starts haunting every architect:&lt;/p&gt;

&lt;p&gt;How do we keep data consistent when everything is happening everywhere?&lt;/p&gt;

&lt;p&gt;In this tutorial, we’ll explore that question through a hands-on story — from concept to infrastructure — and deploy a Spring Boot + PostgreSQL banking demo across a five-node Kubernetes lab, fully automated with Vagrant. Our goal isn’t to ship production code, but to understand the design thinking behind consistency, locking, and automation.&lt;/p&gt;

&lt;p&gt;Most tutorials use &lt;strong&gt;Minikube&lt;/strong&gt; or &lt;strong&gt;kind&lt;/strong&gt;, which are great for learning but typically simulate only a single node.&lt;br&gt;&lt;br&gt;
What if you could spin up a &lt;strong&gt;full Kubernetes cluster&lt;/strong&gt; — control plane, multiple worker nodes, real networking, storage, and ingress — entirely automated and reproducible?&lt;/p&gt;

&lt;p&gt;It’s a perfect local lab for experimenting with deployments, storage, and load testing — without relying on cloud services.&lt;/p&gt;


&lt;h2&gt;
  
  
  What You’ll Learn
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Build a &lt;strong&gt;5-node Kubernetes cluster&lt;/strong&gt; using Vagrant and VirtualBox&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automate provisioning with &lt;strong&gt;Bash scripts&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy a real &lt;strong&gt;Spring Boot + Postgres&lt;/strong&gt; application&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test the REST endpoints using &lt;strong&gt;k6&lt;/strong&gt; load testing&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Build, deploy, and test a real multi-node Kubernetes cluster from scratch — all on your local machine.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Design
&lt;/h2&gt;

&lt;p&gt;The system provides RESTful endpoints for withdrawal and deposit operations, served by a Spring Boot–based API backend. When the API receives a client request, it updates the account balance in a PostgreSQL relational database. To ensure data consistency under concurrent transactions, the system supports both optimistic and pessimistic locking mechanisms.&lt;/p&gt;
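&lt;p&gt;The two strategies can be illustrated with a minimal in-memory sketch (plain Ruby for brevity, not the actual Spring Boot code): under optimistic locking, a writer records the version it read, and the update commits only if that version is still current; a stale writer gets a conflict and retries.&lt;/p&gt;

```ruby
# Illustrative in-memory model of optimistic locking. A real Spring Boot
# app would typically use a JPA @Version column; the names here are made up.
Account = Struct.new(:balance, :version)

# Commit the withdrawal only if the version we read is still current.
# Returns true on success, false if a concurrent write got there first.
def try_withdraw(account, amount, seen_version)
  return false unless account.version == seen_version
  return false if amount > account.balance
  account.balance -= amount
  account.version += 1
  true
end

account = Account.new(100, 0)
seen = account.version
try_withdraw(account, 30, seen)  # => true  (balance 70, version bumped to 1)
try_withdraw(account, 30, seen)  # => false (stale version: caller must retry)
```

&lt;p&gt;Pessimistic locking takes the opposite bet: the row is locked up front (e.g. &lt;code&gt;SELECT ... FOR UPDATE&lt;/code&gt; in PostgreSQL), so conflicting writers wait instead of retrying.&lt;/p&gt;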

&lt;p&gt;Both the API backend and the PostgreSQL database are deployed on a Kubernetes cluster comprising five virtual machines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;One control plane node for cluster management&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;One edge node for network routing and ingress&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Two worker nodes hosting the Spring Boot web applications&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;One database node running PostgreSQL&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
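&lt;p&gt;The tier split can be captured as a simple mapping (an illustrative sketch based on the architecture diagram; in the cluster itself the tiers would be applied as node labels and matched by selectors in the deployment manifests):&lt;/p&gt;

```ruby
# Node-to-tier mapping taken from the architecture diagram (illustrative).
TIERS = {
  "k8s-node-01" => "edge",      # ingress, storage provisioner, MetalLB
  "k8s-node-02" => "backend",   # Spring Boot pods
  "k8s-node-03" => "backend",   # Spring Boot pods
  "k8s-node-04" => "database",  # PostgreSQL
}

# The Spring Boot replicas land on the backend tier:
backend_nodes = TIERS.select { |_, tier| tier == "backend" }.keys
# => ["k8s-node-02", "k8s-node-03"]
```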

&lt;p&gt;This is a visual representation of the cluster setup:&lt;/p&gt;
&lt;h3&gt;
  
  
  K8S Architecture Overview
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────────────────────────────┐
│                    Kubernetes Cluster                       │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  Control Plane (k8s-cp-01)                                  │
│  └─ IP: 192.168.56.10                                       │
│  └─ Role: Master node, API server, scheduler, controller    │
│                                                             │
│  Worker Nodes:                                              │
│  ├─ k8s-node-01 (192.168.56.11) - tier: edge                │
│  │  └─ Ingress Controller, Local Path Provisioner, MetalLB  │
│  ├─ k8s-node-02 (192.168.56.12) - tier: backend             │
│  │  └─ Spring Boot Application Pods                         │
│  ├─ k8s-node-03 (192.168.56.13) - tier: backend             │
│  │  └─ Spring Boot Application Pods                         │
│  └─ k8s-node-04 (192.168.56.14) - tier: database            │
│     └─ PostgreSQL Database                                  │
│                                                             │
│  LoadBalancer IP Pool: 192.168.56.240-250                   │
└─────────────────────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Building the Complete Environment
&lt;/h2&gt;

&lt;p&gt;The environment will be provisioned using Vagrant, which automates the creation of virtual machines and the setup of the Kubernetes cluster. Once the infrastructure is ready, it will deploy the prebuilt cloud-native Spring Boot web application and the fully configured PostgreSQL database, assembling the complete application service environment.&lt;/p&gt;
&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;The following tools are required on your host machine:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Version&lt;/th&gt;
&lt;th&gt;Install&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;VirtualBox&lt;/td&gt;
&lt;td&gt;≥ 7.1.6&lt;/td&gt;
&lt;td&gt;See install checklist&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vagrant&lt;/td&gt;
&lt;td&gt;≥ 2.4.9&lt;/td&gt;
&lt;td&gt;See install checklist&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RAM&lt;/td&gt;
&lt;td&gt;≥ 13 GB&lt;/td&gt;
&lt;td&gt;3 GB for control plane + 2 GB per worker node, plus host overhead&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;≥ 4 cores&lt;/td&gt;
&lt;td&gt;Recommended&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Network&lt;/td&gt;
&lt;td&gt;192.168.56.0/24&lt;/td&gt;
&lt;td&gt;VirtualBox Host-Only&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
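&lt;p&gt;The RAM figure is easy to sanity-check against the per-VM allocations used later in the Vagrantfile (3072 MB for the control plane, 2048 MB for each of the four other nodes):&lt;/p&gt;

```ruby
# Sum the VM memory allocations from the Vagrantfile's node definitions.
memory_mb = [3072, 2048, 2048, 2048, 2048]
total_mb  = memory_mb.sum
puts format("%d MB (~%.1f GB) for the VMs alone", total_mb, total_mb / 1024.0)
# prints "11264 MB (~11.0 GB) for the VMs alone"
```

&lt;p&gt;The ≥ 13 GB guideline leaves roughly 2 GB of headroom for the host OS and VirtualBox itself.&lt;/p&gt;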

&lt;p&gt;Below are the installation commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Windows (Chocolatey, run admin PowerShell)&lt;/span&gt;
choco &lt;span class="nb"&gt;install &lt;/span&gt;virtualbox vagrant

&lt;span class="c"&gt;# Ubuntu/Debian&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; virtualbox
&lt;span class="c"&gt;# Get latest vagrant from HashiCorp website or apt repo&lt;/span&gt;
&lt;span class="c"&gt;# Install HashiCorp GPG key&lt;/span&gt;
wget &lt;span class="nt"&gt;-O-&lt;/span&gt; https://apt.releases.hashicorp.com/gpg | &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /usr/share/keyrings/hashicorp-archive-keyring.gpg
&lt;span class="c"&gt;# Add HashiCorp repository&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
https://apt.releases.hashicorp.com &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;lsb_release &lt;span class="nt"&gt;-cs&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; main"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/hashicorp.list
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; vagrant
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 1: Vagrant
&lt;/h2&gt;

&lt;p&gt;Clone the project&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/arata-x/vagrant-k8s-bank-demo.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Project Structure
&lt;/h3&gt;

&lt;p&gt;The outline of the project structure is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Root:
│  Vagrantfile
│
└─provision
    ├─deployment
    │  │
    │  └─standard
    │      ├─app
    │      │      10-config.yml
    │      │      20-rbac.yml
    │      │      30-db-deploy.yml
    │      │      40-app-deploy.yml
    │      │      50-services.yml
    │      │      60-network-policy.yml
    │      │      70-utilities.yml
    │      │
    │      └─infra
    │              10-storage-class.yml
    │              20-metallb.yaml
    │
    └─foundation
            10-common.sh
            20-node-network.sh
            30-control-panel.sh
            40-join-node.sh
            50-after-vagrant-setup.sh
            join-command.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A &lt;strong&gt;Vagrantfile&lt;/strong&gt; is a configuration file written in Ruby syntax that defines how Vagrant should provision and manage a virtual machine (VM). It’s the heart of any Vagrant project—used to automate the setup of reproducible development environments.&lt;/p&gt;

&lt;p&gt;The Vagrantfile specifies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Base OS image (e.g., ubuntu/jammy64)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Resources (CPU, memory, disk)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Networking (port forwarding, private/public networks)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provisioning scripts (e.g., install Java, Maven, Docker)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Shared folders between host and VM&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below is the content of the Vagrantfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="no"&gt;Vagrant&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"2"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;box&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu/jammy64"&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synced_folder&lt;/span&gt; &lt;span class="s2"&gt;"."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"/vagrant"&lt;/span&gt;
  &lt;span class="n"&gt;root_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"provision/foundation/"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="c1"&gt;# Common setup for all nodes&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="s2"&gt;"shell"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;path: &lt;/span&gt;&lt;span class="n"&gt;root_path&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"10-common.sh"&lt;/span&gt;
  &lt;span class="c1"&gt;# Node definitions&lt;/span&gt;
  &lt;span class="n"&gt;nodes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="ss"&gt;name: &lt;/span&gt;&lt;span class="s2"&gt;"k8s-cp-01"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="s2"&gt;"192.168.56.10"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;script: &lt;/span&gt;&lt;span class="s2"&gt;"30-control-panel.sh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;memory: &lt;/span&gt;&lt;span class="mi"&gt;3072&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="ss"&gt;name: &lt;/span&gt;&lt;span class="s2"&gt;"k8s-node-01"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="s2"&gt;"192.168.56.11"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;script: &lt;/span&gt;&lt;span class="s2"&gt;"40-join-node.sh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="ss"&gt;memory: &lt;/span&gt;&lt;span class="mi"&gt;2048&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="ss"&gt;name: &lt;/span&gt;&lt;span class="s2"&gt;"k8s-node-02"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="s2"&gt;"192.168.56.12"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;script: &lt;/span&gt;&lt;span class="s2"&gt;"40-join-node.sh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="ss"&gt;memory: &lt;/span&gt;&lt;span class="mi"&gt;2048&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="ss"&gt;name: &lt;/span&gt;&lt;span class="s2"&gt;"k8s-node-03"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="s2"&gt;"192.168.56.13"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;script: &lt;/span&gt;&lt;span class="s2"&gt;"40-join-node.sh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="ss"&gt;memory: &lt;/span&gt;&lt;span class="mi"&gt;2048&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="ss"&gt;name: &lt;/span&gt;&lt;span class="s2"&gt;"k8s-node-04"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="s2"&gt;"192.168.56.14"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;script: &lt;/span&gt;&lt;span class="s2"&gt;"40-join-node.sh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="ss"&gt;memory: &lt;/span&gt;&lt;span class="mi"&gt;2048&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="c1"&gt;# Create VMs&lt;/span&gt;
  &lt;span class="n"&gt;nodes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;define&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:name&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;node_vm&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
      &lt;span class="n"&gt;node_vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"virtualbox"&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
        &lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cpus&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
        &lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:memory&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;
      &lt;span class="n"&gt;node_vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hostname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:name&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="n"&gt;node_vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;network&lt;/span&gt; &lt;span class="s2"&gt;"private_network"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:ip&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="n"&gt;node_vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="s2"&gt;"shell"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;path: &lt;/span&gt;&lt;span class="n"&gt;root_path&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"20-node-network.sh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;args: &lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:ip&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;      
      &lt;span class="n"&gt;node_vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="s2"&gt;"shell"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;path: &lt;/span&gt;&lt;span class="n"&gt;root_path&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:script&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  1️⃣ Overview
&lt;/h3&gt;

&lt;p&gt;This Vagrantfile:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Creates 5 Ubuntu 22.04 VMs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Installs containerd, kubeadm, kubelet, and kubectl&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Initializes Kubernetes control plane&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Joins 4 worker nodes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configures Calico CNI networking&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2️⃣ Global Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="no"&gt;Vagrant&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"2"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;box&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu/jammy64"&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synced_folder&lt;/span&gt; &lt;span class="s2"&gt;"."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"/vagrant"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Vagrant.configure("2")&lt;/code&gt; → Uses configuration syntax version 2.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;config.vm.box&lt;/code&gt; → Every VM uses Ubuntu 22.04 LTS (“jammy64”).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;config.vm.synced_folder&lt;/code&gt; → Shares your project folder on the host with each guest VM at &lt;code&gt;/vagrant&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3️⃣ Common Provisioning
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;  &lt;span class="n"&gt;root_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"provision/foundation/"&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="s2"&gt;"shell"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;path: &lt;/span&gt;&lt;span class="n"&gt;root_path&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"10-common.sh"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Runs on &lt;strong&gt;every node&lt;/strong&gt; to install baseline packages, set up host files, and apply common kernel settings before the role-specific scripts run.&lt;/p&gt;

&lt;h3&gt;
  
  
  4️⃣ Cluster node setup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;nodes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="ss"&gt;name: &lt;/span&gt;&lt;span class="s2"&gt;"k8s-cp-01"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="s2"&gt;"192.168.56.10"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;script: &lt;/span&gt;&lt;span class="s2"&gt;"30-control-panel.sh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;memory: &lt;/span&gt;&lt;span class="mi"&gt;3072&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="ss"&gt;name: &lt;/span&gt;&lt;span class="s2"&gt;"k8s-node-01"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="s2"&gt;"192.168.56.11"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;script: &lt;/span&gt;&lt;span class="s2"&gt;"40-join-node.sh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="ss"&gt;memory: &lt;/span&gt;&lt;span class="mi"&gt;2048&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="ss"&gt;name: &lt;/span&gt;&lt;span class="s2"&gt;"k8s-node-02"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="s2"&gt;"192.168.56.12"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;script: &lt;/span&gt;&lt;span class="s2"&gt;"40-join-node.sh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="ss"&gt;memory: &lt;/span&gt;&lt;span class="mi"&gt;2048&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="ss"&gt;name: &lt;/span&gt;&lt;span class="s2"&gt;"k8s-node-03"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="s2"&gt;"192.168.56.13"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;script: &lt;/span&gt;&lt;span class="s2"&gt;"40-join-node.sh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="ss"&gt;memory: &lt;/span&gt;&lt;span class="mi"&gt;2048&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="ss"&gt;name: &lt;/span&gt;&lt;span class="s2"&gt;"k8s-node-04"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="s2"&gt;"192.168.56.14"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;script: &lt;/span&gt;&lt;span class="s2"&gt;"40-join-node.sh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="ss"&gt;memory: &lt;/span&gt;&lt;span class="mi"&gt;2048&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Defines the five nodes consumed by the node creation loop: each entry carries a name, a private IP, a role-specific provisioning script, and a memory allocation.&lt;/p&gt;
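&lt;p&gt;Keeping names and IPs in one array means other generated config can be derived from the same source of truth. For example (a sketch, not code from the repo), the &lt;code&gt;/etc/hosts&lt;/code&gt; entries that the foundation scripts set up could be produced like this:&lt;/p&gt;

```ruby
# Sketch: derive /etc/hosts lines from the same nodes array the
# Vagrantfile iterates over (illustrative, not taken from the repo).
nodes = [
  { name: "k8s-cp-01",   ip: "192.168.56.10" },
  { name: "k8s-node-01", ip: "192.168.56.11" },
  { name: "k8s-node-02", ip: "192.168.56.12" },
  { name: "k8s-node-03", ip: "192.168.56.13" },
  { name: "k8s-node-04", ip: "192.168.56.14" },
]

hosts_lines = nodes.map { |n| "#{n[:ip]} #{n[:name]}" }
puts hosts_lines
# prints "192.168.56.10 k8s-cp-01" first, one line per node
```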

&lt;h3&gt;
  
  
  5️⃣ Node Creation Loop
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;nodes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;define&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:name&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;node_vm&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="n"&gt;node_vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"virtualbox"&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
      &lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cpus&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
      &lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:memory&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="n"&gt;node_vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hostname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:name&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;node_vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;network&lt;/span&gt; &lt;span class="s2"&gt;"private_network"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:ip&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;node_vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="s2"&gt;"shell"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;path: &lt;/span&gt;&lt;span class="n"&gt;root_path&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"20-node-network.sh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;args: &lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:ip&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;      
    &lt;span class="n"&gt;node_vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="s2"&gt;"shell"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;path: &lt;/span&gt;&lt;span class="n"&gt;root_path&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:script&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For each node:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Defines&lt;/strong&gt; a named VM.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Allocates&lt;/strong&gt; 2 CPUs and the per-node memory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Sets the hostname&lt;/strong&gt; inside the guest.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configures a private network&lt;/strong&gt; on &lt;code&gt;192.168.56.0/24&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Runs&lt;/strong&gt; &lt;code&gt;20-node-network.sh&lt;/code&gt; to configure IPs, &lt;code&gt;/etc/hosts&lt;/code&gt;, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Runs the role-specific script&lt;/strong&gt; (&lt;code&gt;30-control-panel.sh&lt;/code&gt; for the control plane, &lt;code&gt;40-join-node.sh&lt;/code&gt; for the other nodes).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  6️⃣ Build the Cluster
&lt;/h3&gt;

&lt;p&gt;Run &lt;code&gt;vagrant up&lt;/code&gt; to start provisioning.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;k8s
vagrant up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🕒 &lt;strong&gt;Expected duration:&lt;/strong&gt; 10–15 minutes.&lt;/p&gt;

&lt;p&gt;Verify all VMs are running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vagrant status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  7️⃣ &lt;strong&gt;Provisioning Scripts Deep Dive&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let's take a closer look at the shell scripts used during provisioning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10-common.sh&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The script disables swap, installs the required Kubernetes components, and prepares the kernel for Kubernetes networking.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Disable Swap&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;swapoff &lt;span class="nt"&gt;-a&lt;/span&gt;
&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'/ swap / s/^/#/'&lt;/span&gt; /etc/fstab

&lt;span class="c"&gt;# Install Core Dependencies&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; kubelet kubeadm containerd

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;modprobe overlay
&lt;span class="nb"&gt;sudo &lt;/span&gt;modprobe br_netfilter

&lt;span class="nb"&gt;sudo cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;sysctl &lt;span class="nt"&gt;--system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Purpose&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Disable Swap&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensures the kubelet can accurately account for memory; by default, kubeadm’s preflight checks fail when swap is enabled.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Add Kubernetes Repository and Install Components&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Configures the official Kubernetes APT repo and installs kubelet, kubeadm, and containerd.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Setup Kernel Modules&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Enables overlay and br_netfilter modules required for container networking and storage layers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Set Kernel Parameters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Adjusts sysctl settings to enable IP forwarding and proper packet handling between bridged interfaces.&lt;/p&gt;
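&lt;p&gt;To confirm these settings took effect, you can inspect them on any node. The commands below are an illustrative check, not part of the provisioning scripts:&lt;/p&gt;

```shell
# Verify the kernel modules are loaded
lsmod | grep -E 'overlay|br_netfilter'

# Verify the sysctl values written to 99-kubernetes-cri.conf
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# Each key should report a value of 1
```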

&lt;p&gt;&lt;strong&gt;20-common.sh&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This script configures the Kubernetes node’s network identity by explicitly assigning its IP address to the kubelet service, ensuring proper communication and cluster registration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;NODE_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;ip &lt;span class="nt"&gt;-4&lt;/span&gt; addr show enp0s8 | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-oP&lt;/span&gt; &lt;span class="s1"&gt;'(?&amp;lt;=inet\s)\d+(\.\d+){3}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="nv"&gt;DROPIN_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="s2"&gt;"--node-ip=&lt;/span&gt;&lt;span class="nv"&gt;$NODE_IP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DROPIN_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"0,/^Environment=&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;KUBELET_KUBECONFIG_ARGS=/s|&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$|&lt;/span&gt;&lt;span class="s2"&gt; --node-ip=&lt;/span&gt;&lt;span class="nv"&gt;$NODE_IP&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;|"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DROPIN_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;fi

&lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reexec
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart kubelet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When using Vagrant, a default NAT interface (enp0s3) is created for outbound network access. A second, user-defined network interface (enp0s8) is typically added for internal cluster communication. However, Kubernetes may fail to correctly resolve the node’s IP address in this setup, requiring manual configuration.&lt;/p&gt;

&lt;p&gt;After testing, the following approach proves effective: explicitly assign the node IP to the enp0s8 interface and configure the kubelet to use this IP. Once applied, the kubelet service starts with the correct node IP address, ensuring reliable communication between cluster components and accurate node registration within the Kubernetes control plane.&lt;/p&gt;
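&lt;p&gt;The IP extraction in the script relies on a Perl-compatible lookbehind in GNU grep. As a quick illustration with a sample line in the format printed by &lt;code&gt;ip -4 addr show&lt;/code&gt; (the addresses shown are made up):&lt;/p&gt;

```shell
# A sample line as printed by `ip -4 addr show enp0s8` (illustrative values)
sample='    inet 192.168.56.11/24 brd 192.168.56.255 scope global enp0s8'

# The lookbehind (?<=inet\s) anchors on "inet " without including it in the match,
# so the broadcast address after "brd" is not matched
echo "$sample" | grep -oP '(?<=inet\s)\d+(\.\d+){3}'
# Prints: 192.168.56.11
```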

&lt;p&gt;&lt;strong&gt;30-control-plane.sh&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This script automates the setup of a Kubernetes control plane node in a virtualized environment. It also takes care of installing and configuring essential tools like kubectl, the Container Network Interface (CNI), and a monitoring stack to give you full visibility into your cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; kubectl

&lt;span class="c"&gt;# Initialize cluster&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm init &lt;span class="nt"&gt;--apiserver-advertise-address&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;192.168.56.10 &lt;span class="nt"&gt;--pod-network-cidr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.224.0.0/16

&lt;span class="c"&gt;# Setup kubeconfig&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /home/vagrant/.kube
&lt;span class="nb"&gt;cp&lt;/span&gt; /etc/kubernetes/admin.conf /home/vagrant/.kube/config
&lt;span class="nb"&gt;chown &lt;/span&gt;vagrant:vagrant /home/vagrant/.kube/config
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; ~/.kube
&lt;span class="nb"&gt;cp&lt;/span&gt; /etc/kubernetes/admin.conf ~/.kube/config

&lt;span class="c"&gt;# Create join command&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm token create &lt;span class="nt"&gt;--print-join-command&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /vagrant/provision/foundation/join-command.sh

&lt;span class="c"&gt;# Install Calico network plugin&lt;/span&gt;
&lt;span class="nv"&gt;AUTO_METHOD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"cidr=192.168.56.0/24"&lt;/span&gt;
curl &lt;span class="nt"&gt;-O&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.30.3/manifests/calico.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; calico.yaml
kubectl &lt;span class="nb"&gt;set env &lt;/span&gt;daemonset/calico-node &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nv"&gt;IP_AUTODETECTION_METHOD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AUTO_METHOD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart kubelet

&lt;span class="c"&gt;# Install K9s&lt;/span&gt;
wget https://github.com/derailed/k9s/releases/latest/download/k9s_linux_amd64.deb
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; ./k9s_linux_amd64.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;1. kubectl Installation and Cluster Initialization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Installs kubectl, then initializes the Kubernetes cluster with kubeadm, specifying the API server advertise address (192.168.56.10) and the Pod network CIDR (10.224.0.0/16).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Kubeconfig Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sets up the Kubernetes configuration (admin.conf) for both the vagrant user and the root user, enabling access to cluster management commands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Node Join Command Generation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Creates and stores the cluster join command in /vagrant/provision/foundation/join-command.sh for worker nodes to join the cluster.&lt;/p&gt;
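&lt;p&gt;The generated file holds a single command in the following general form; the token and CA certificate hash are placeholders here, since the real values are minted by kubeadm at init time:&lt;/p&gt;

```shell
# Illustrative contents of join-command.sh; <token> and <hash> are placeholders
kubeadm join 192.168.56.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```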

&lt;p&gt;&lt;strong&gt;4. Calico Network Plugin Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Downloads and applies the Calico manifest to enable networking between pods. Configures Calico’s IP autodetection method to use the local network (cidr=192.168.56.0/24).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Kubernetes Management Tool Installation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Installs &lt;code&gt;k9s&lt;/code&gt;, a terminal-based Kubernetes cluster management tool.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 2: Post-Init Setup
&lt;/h2&gt;

&lt;p&gt;SSH into the control plane:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vagrant ssh k8s-cp-01
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the post-initialization script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; /vagrant/provision/foundation/50-after-vagrant-setup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;50-after-vagrant-setup&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This script applies functional labels that define node roles (edge, backend, database), then deploys essential Kubernetes infrastructure components for targeted scheduling.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;NODES&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;k8s-node-01 k8s-node-02 k8s-node-03 k8s-node-04&lt;span class="o"&gt;)&lt;/span&gt;
kubectl label node &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODES&lt;/span&gt;&lt;span class="p"&gt;[0]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; node-role.kubernetes.io/worker-node&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt; &lt;span class="nv"&gt;tier&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;edge &lt;span class="nt"&gt;--overwrite&lt;/span&gt;
kubectl label node &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODES&lt;/span&gt;&lt;span class="p"&gt;[1]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; node-role.kubernetes.io/worker-node&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt; &lt;span class="nv"&gt;tier&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;backend &lt;span class="nt"&gt;--overwrite&lt;/span&gt;
kubectl label node &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODES&lt;/span&gt;&lt;span class="p"&gt;[2]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; node-role.kubernetes.io/worker-node&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt; &lt;span class="nv"&gt;tier&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;backend &lt;span class="nt"&gt;--overwrite&lt;/span&gt;
kubectl label node &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODES&lt;/span&gt;&lt;span class="p"&gt;[3]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; node-role.kubernetes.io/worker-node&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt; &lt;span class="nv"&gt;tier&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;database &lt;span class="nt"&gt;--overwrite&lt;/span&gt;
&lt;span class="c"&gt;# Install Local Path Provisioner for dynamic storage provisioning&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
&lt;span class="c"&gt;# Install NGINX Ingress Controller&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
&lt;span class="c"&gt;# Install MetalLB&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/metallb/metallb/v0.15.2/config/manifests/metallb-native.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After labeling, it deploys several core infrastructure components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Local Path Provisioner – Enables dynamic storage provisioning&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;NGINX Ingress Controller – Provides ingress routing&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;MetalLB – Implements Layer 2 load balancing with controller deployment&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
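&lt;p&gt;The ingress controller and MetalLB pods can take a minute or two to come up, so it helps to wait for them before deploying anything that depends on them. These commands are illustrative; the label selectors match the upstream manifests:&lt;/p&gt;

```shell
# Wait until the NGINX ingress controller pod is ready
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s

# Wait until the MetalLB controller and speaker pods are ready
kubectl wait --namespace metallb-system \
  --for=condition=ready pod \
  --selector=app=metallb \
  --timeout=90s
```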

&lt;p&gt;Verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes &lt;span class="nt"&gt;--show-labels&lt;/span&gt;
kubectl get pods &lt;span class="nt"&gt;-A&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 3: Deploy Infrastructure
&lt;/h2&gt;

&lt;p&gt;Apply storage and networking configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run Deployment&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; /vagrant/provision/deployment/standard/infra
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What’s Inside
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;10-storage-class.yml&lt;/code&gt; — Local path dynamic PV provisioning&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;20-metallb.yaml&lt;/code&gt; — IP pool and L2Advertisement setup&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Configuration Review
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;10-storage-class.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;storage.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;StorageClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vm-storage&lt;/span&gt;
&lt;span class="na"&gt;provisioner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rancher.io/local-path&lt;/span&gt;
&lt;span class="na"&gt;volumeBindingMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WaitForFirstConsumer&lt;/span&gt;
&lt;span class="na"&gt;reclaimPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Delete&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This manifest defines a StorageClass named &lt;code&gt;vm-storage&lt;/code&gt; that uses the Rancher Local Path Provisioner to dynamically create node-local PersistentVolumes. It sets &lt;code&gt;volumeBindingMode: WaitForFirstConsumer&lt;/code&gt; so volume provisioning is deferred until a pod is scheduled, ensuring the PV is created on the same node as the workload. The &lt;code&gt;reclaimPolicy: Delete&lt;/code&gt; cleans up underlying storage when the PersistentVolumeClaim is removed.&lt;/p&gt;
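&lt;p&gt;As a quick sanity check, a PersistentVolumeClaim referencing the class could look like the following sketch (the claim name is hypothetical and not part of the repo). Because of &lt;code&gt;WaitForFirstConsumer&lt;/code&gt;, it stays &lt;code&gt;Pending&lt;/code&gt; until a pod mounts it:&lt;/p&gt;

```shell
# Illustrative PVC using the vm-storage class (the name "scratch-pvc" is made up)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-pvc
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: vm-storage
  resources:
    requests:
      storage: 1Gi
EOF
```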

&lt;p&gt;&lt;strong&gt;20-metallb.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metallb.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IPAddressPool&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default-address-pool&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metallb-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;192.168.56.240-192.168.56.250&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metallb.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;L2Advertisement&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default-l2-advert&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metallb-system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In cloud environments, services like AWS or GCP automatically provide load balancers to expose your applications to the outside world. But on bare-metal or virtualized Kubernetes clusters, you don’t get that luxury out of the box — and that’s where MetalLB steps in.&lt;/p&gt;

&lt;p&gt;The manifest configures &lt;code&gt;MetalLB&lt;/code&gt; to handle external traffic just like a cloud load balancer would. It defines an IPAddressPool that allocates IPs from &lt;code&gt;192.168.56.240–192.168.56.250&lt;/code&gt;, and an &lt;code&gt;L2Advertisement&lt;/code&gt; that announces those addresses at Layer 2 so other devices on the network can reach your services directly.&lt;/p&gt;

&lt;p&gt;The result is seamless, cloud-like load balancing for your on-premises or Vagrant-based Kubernetes setups — giving your local cluster the same networking power as a managed one.&lt;/p&gt;
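&lt;p&gt;With the pool in place, any Service of type &lt;code&gt;LoadBalancer&lt;/code&gt; automatically receives an external IP from that range. A minimal sketch (the service name and selector are hypothetical):&lt;/p&gt;

```shell
# Illustrative LoadBalancer service; MetalLB assigns an IP from the pool
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: demo-lb
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
EOF

kubectl get svc demo-lb   # EXTERNAL-IP should fall within 192.168.56.240-250
```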

&lt;p&gt;Verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get storageclass
kubectl get ipaddresspool &lt;span class="nt"&gt;-n&lt;/span&gt; metallb-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 4: Deploy the Application
&lt;/h2&gt;

&lt;p&gt;In this step, we bring everything together and deploy a complete multi-tier application stack on Kubernetes. These manifests set up a dedicated namespace, inject configuration data, and apply the necessary RBAC permissions for secure access control. They also provision a PostgreSQL database backed by persistent storage, then deploy a Spring Boot application with multiple replicas for scalability and resilience.&lt;/p&gt;

&lt;p&gt;To make the services accessible, it exposes them through ClusterIP and NodePort, and strengthens cluster security with NetworkPolicies that control how pods communicate. Optionally, it can also install monitoring and maintenance utilities, giving you full visibility and manageability of your application stack — all running seamlessly inside Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run Deployment&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; /vagrant/provision/deployment/standard/app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What’s Inside
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;10-config.yml&lt;/code&gt; — Namespaces, ConfigMaps, Secrets&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;20-rbac.yml&lt;/code&gt; — RBAC setup&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;30-db-deploy.yml&lt;/code&gt; — PostgreSQL with PVC&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;40-app-deploy.yml&lt;/code&gt; — Spring Boot app (2 replicas)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;50-services.yml&lt;/code&gt; — ClusterIP and NodePort&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;60-network-policy.yml&lt;/code&gt; — Secure traffic rules&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;70-utilities.yml&lt;/code&gt; — Optional utilities&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Configuration Review
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;10-config.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;demo&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creates a &lt;code&gt;demo&lt;/code&gt; namespace for isolating application resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;20-rbac.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;database-rolebinding&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres-sa&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Role&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;database-role&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-rolebinding&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-sa&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Role&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-role&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The manifest binds two service accounts to their respective roles: &lt;code&gt;postgres-sa&lt;/code&gt; for PostgreSQL and &lt;code&gt;app-sa&lt;/code&gt; for the Spring Boot service, enabling least-privilege access and a clear separation of duties.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;30-db-deploy.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db-cm&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appdb&lt;/span&gt;
  &lt;span class="na"&gt;APP_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appuser&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db-secret&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;strong-password&lt;/span&gt;
  &lt;span class="na"&gt;APP_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;strong-password&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db-init&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dem&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;00-roles.sql&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(skip)&lt;/span&gt;
  &lt;span class="na"&gt;01-db.sql&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(skip)&lt;/span&gt;
  &lt;span class="na"&gt;02-schema.sql&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(skip)&lt;/span&gt;
  &lt;span class="na"&gt;03-comments.sql&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(skip)&lt;/span&gt;
  &lt;span class="na"&gt;04-table.sql&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;(skip)&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres-headless&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;      
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;clusterIP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;None&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pg&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5432&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5432&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pg&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5432&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5432&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;StatefulSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres-headless&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;serviceAccountName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;database-sa&lt;/span&gt;
      &lt;span class="na"&gt;nodeSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;tier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;database&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres:18&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5432&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pg&lt;/span&gt;
          &lt;span class="na"&gt;envFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;secretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db-secret&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;configMapRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db-cm&lt;/span&gt;
          &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db-data&lt;/span&gt;
              &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/postgresql/&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;run-socket&lt;/span&gt;
              &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/run/postgresql&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db-init&lt;/span&gt;
              &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/docker-entrypoint-initdb.&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;run-socket&lt;/span&gt;
          &lt;span class="na"&gt;emptyDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db-init&lt;/span&gt;
          &lt;span class="na"&gt;configMap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db-init&lt;/span&gt;
  &lt;span class="na"&gt;volumeClaimTemplates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db-data&lt;/span&gt;
      &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ReadWriteOnce"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vm-storage&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5Gi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The manifest provisions a single-replica PostgreSQL database as a &lt;code&gt;StatefulSet&lt;/code&gt; pinned to the &lt;code&gt;database&lt;/code&gt; tier. It runs under the &lt;code&gt;database-sa&lt;/code&gt; service account, loads environment variables and credentials from the &lt;code&gt;db-cm&lt;/code&gt; ConfigMap and the &lt;code&gt;db-secret&lt;/code&gt; Secret, runs optional init SQL from the &lt;code&gt;db-init&lt;/code&gt; ConfigMap, and persists data via a PersistentVolumeClaim backed by the &lt;code&gt;vm-storage&lt;/code&gt; StorageClass. The database is exposed through two Kubernetes Services: a standard ClusterIP Service &lt;code&gt;postgres&lt;/code&gt; for in-cluster access on port &lt;code&gt;5432&lt;/code&gt;, and a headless Service &lt;code&gt;postgres-headless&lt;/code&gt; that gives the pod a stable DNS identity for direct pod-to-pod communication.&lt;/p&gt;
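&lt;p&gt;To see what the two Services actually resolve to, you can run a throwaway pod and query both DNS names (a quick sanity check, assuming the &lt;code&gt;demo&lt;/code&gt; namespace from this lab is deployed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# The ClusterIP Service resolves to a single virtual IP;
# the headless Service resolves directly to the backing pod IP(s).
kubectl -n demo run dns-check --rm -it --restart=Never --image=busybox -- \
  sh -c 'nslookup postgres.demo.svc.cluster.local; nslookup postgres-headless.demo.svc.cluster.local'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;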

&lt;p&gt;&lt;strong&gt;40-app-deploy.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;springboot-cm&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;BPL_JVM_THREAD_COUNT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;100"&lt;/span&gt;
  &lt;span class="na"&gt;JAVA_TOOL_OPTIONS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-XX:InitialRAMPercentage=25.0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-XX:MaxRAMPercentage=75.0"&lt;/span&gt;
  &lt;span class="na"&gt;LOGGING_LEVEL_ROOT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;INFO&lt;/span&gt;
  &lt;span class="na"&gt;SPRING_PROFILES_ACTIVE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prod&lt;/span&gt;
  &lt;span class="na"&gt;SPRING_DATASOURCE_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;jdbc:postgresql://postgres.demo.svc.cluster.local:5432/appdb"&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;springboot-secret&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;  
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;spring.datasource.username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appuser&lt;/span&gt;
  &lt;span class="na"&gt;spring.datasource.password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;strong-password&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-svc&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bank-account-demo&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;     
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;serviceAccountName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-sa&lt;/span&gt;
      &lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;nodeAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;nodeSelectorTerms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tier&lt;/span&gt;
                &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
                &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
      &lt;span class="na"&gt;topologySpreadConstraints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;maxSkew&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
        &lt;span class="na"&gt;topologyKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/hostname&lt;/span&gt;
        &lt;span class="na"&gt;whenUnsatisfiable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DoNotSchedule&lt;/span&gt;
        &lt;span class="na"&gt;labelSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bank-account-demo&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.io/aratax/bank-account-demo:1.0&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
          &lt;span class="na"&gt;envFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;configMapRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;springboot-cm&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;secretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;springboot-secret&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;wait-for-database&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sh'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-c'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;until&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;nc&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;postgres.demo.svc.cluster.local&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;5432;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;do&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;waiting;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;2;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;done;'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This manifest provisions a two-replica Spring Boot application Deployment on the &lt;code&gt;backend&lt;/code&gt; tier. It runs under the &lt;code&gt;app-sa&lt;/code&gt; service account, loads runtime configuration and credentials from the &lt;code&gt;springboot-cm&lt;/code&gt; ConfigMap and the &lt;code&gt;springboot-secret&lt;/code&gt; Secret, and connects to PostgreSQL via the internal DNS endpoint &lt;code&gt;postgres.demo.svc.cluster.local:5432&lt;/code&gt;. An init container, &lt;code&gt;wait-for-database&lt;/code&gt;, blocks application startup until the database is reachable. The application is exposed through a ClusterIP Service named &lt;code&gt;api-svc&lt;/code&gt;, which maps port &lt;code&gt;80&lt;/code&gt; to container port &lt;code&gt;8080&lt;/code&gt;. Node affinity restricts the pods to &lt;code&gt;backend&lt;/code&gt; nodes, and a topology spread constraint distributes them evenly across those nodes for better reliability and load balancing.&lt;/p&gt;
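&lt;p&gt;Once applied, the scheduling rules are easy to verify (assuming the cluster from this lab is up): the two replicas should land on two different &lt;code&gt;backend&lt;/code&gt; nodes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Which nodes carry the tier=backend label?
kubectl get nodes -l tier=backend

# The NODE column should show each replica on a different backend node,
# thanks to the topology spread constraint.
kubectl -n demo get pods -l app=api -o wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;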

&lt;p&gt;&lt;strong&gt;50-services.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;database-nodeport&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pg&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5432&lt;/span&gt;         
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5432&lt;/span&gt;    
      &lt;span class="na"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30000&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webapp-ingress&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ingressClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app.demo.local&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
            &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
            &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-svc&lt;/span&gt;
                &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The manifest makes both the application and the database reachable from outside the cluster. A NodePort Service named &lt;code&gt;database-nodeport&lt;/code&gt; exposes the PostgreSQL database on node port &lt;code&gt;30000&lt;/code&gt; for external access, typically used for development and debugging. An Ingress resource named &lt;code&gt;webapp-ingress&lt;/code&gt; routes web traffic for &lt;code&gt;app.demo.local&lt;/code&gt; to the internal &lt;code&gt;api-svc&lt;/code&gt; Service on port &lt;code&gt;80&lt;/code&gt;, which in turn forwards to the Spring Boot pods on port &lt;code&gt;8080&lt;/code&gt;.&lt;/p&gt;
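&lt;p&gt;Both access paths can be exercised from the host machine. The sketch below assumes &lt;code&gt;psql&lt;/code&gt; is installed locally and that &lt;code&gt;app.demo.local&lt;/code&gt; is mapped in &lt;code&gt;/etc/hosts&lt;/code&gt; to a node running the NGINX ingress controller:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Pick any node's InternalIP and connect to PostgreSQL through the NodePort
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
psql -h "$NODE_IP" -p 30000 -U appuser appdb

# Web traffic goes through the ingress
curl -i http://app.demo.local/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;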




&lt;h2&gt;
  
  
  Step 5: Review Application Design
&lt;/h2&gt;

&lt;p&gt;To implement a simple banking system, two tables were designed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;accounts&lt;/code&gt; — stores core account information (owner, currency, balance, etc.).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ledger_entries&lt;/code&gt; — records all debit/credit transactions linked to each account for auditing and reconciliation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This schema enforces data integrity, supports safe concurrent balance updates via the &lt;code&gt;version&lt;/code&gt; column, and provides an immutable transaction history.&lt;/p&gt;
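&lt;p&gt;The &lt;code&gt;version&lt;/code&gt; column is what drives optimistic locking: an update succeeds only if the row still carries the version the reader originally saw. Here is a minimal sketch of the pattern in SQL, using the columns from the table layout that follows (the &lt;code&gt;:account_id&lt;/code&gt; and &lt;code&gt;:expected_version&lt;/code&gt; names are bind-parameter placeholders, not literal values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Withdraw 100.00 only if nobody else has touched the row since we read it.
UPDATE accounts
   SET balance    = balance - 100.00,
       version    = version + 1,
       updated_at = NOW()
 WHERE id      = :account_id
   AND version = :expected_version;
-- 0 rows updated means a concurrent writer got there first;
-- the application re-reads the row and retries.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;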

&lt;h3&gt;
  
  
  Database Table Layout:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;------------------------------------------------------------+&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt;                         &lt;span class="n"&gt;accounts&lt;/span&gt;                           &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;------------------------------------------------------------+&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Column&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Type&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Constraints&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="k"&gt;Default&lt;/span&gt;      &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="c1"&gt;--------------|----------------|----------------------------|&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;UUID&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;PK&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;      &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="n"&gt;uuidv7&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;owner_name&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;                   &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;currency&lt;/span&gt;     &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;CHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;                   &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;balance&lt;/span&gt;      &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;NUMERIC&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;version&lt;/span&gt;      &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;BIGINT&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;updated_at&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;TIMESTAMPTZ&lt;/span&gt;    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="n"&gt;NOW&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;     &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;------------------------------------------------------------+&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;INDEX&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;idx_accounts_owner&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;owner_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;                     &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;------------------------------------------------------------+&lt;/span&gt;

                  &lt;span class="mi"&gt;1&lt;/span&gt;
     &lt;span class="n"&gt;accounts&lt;/span&gt; &lt;span class="err"&gt;─────────────┐&lt;/span&gt;
                           &lt;span class="err"&gt;│&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fk_ledger_account&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                           &lt;span class="err"&gt;▼&lt;/span&gt;

&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;------------------------------------------------------------+&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt;                     &lt;span class="n"&gt;ledger_entries&lt;/span&gt;                         &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;------------------------------------------------------------+&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Column&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Type&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Constraints&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="k"&gt;Default&lt;/span&gt;      &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="c1"&gt;--------------|----------------|----------------------------|&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;UUID&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;PK&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;      &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="n"&gt;uuidv7&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;account_id&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;UUID&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;FK&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;      &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;REFERENCES&lt;/span&gt; &lt;span class="n"&gt;accounts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;    &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;direction&lt;/span&gt;    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;                   &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;amount&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;NUMERIC&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt; &lt;span class="k"&gt;CHECK&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;amount&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt;                            &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;created_at&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;TIMESTAMPTZ&lt;/span&gt;    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="n"&gt;NOW&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;     &lt;span class="o"&gt;|&lt;/span&gt;
&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="c1"&gt;------------------------------------------------------------+&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Spring Boot
&lt;/h3&gt;

&lt;p&gt;The Java application exposes a unified transaction endpoint that processes both deposit and withdrawal operations, allowing clients to specify the locking strategy (OPTIMISTIC or PESSIMISTIC) per request.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;REST Endpoint&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@RestController&lt;/span&gt;
&lt;span class="nd"&gt;@RequestMapping&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/api/accounts"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AccountController&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;AccountService&lt;/span&gt; &lt;span class="n"&gt;accountService&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;AccountController&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;AccountService&lt;/span&gt; &lt;span class="n"&gt;accountService&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;accountService&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;accountService&lt;/span&gt;&lt;span class="o"&gt;;}&lt;/span&gt;

    &lt;span class="nd"&gt;@PostMapping&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"/{id}/transaction"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;produces&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MediaType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;APPLICATION_JSON_VALUE&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;ResponseEntity&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;TransactionResponse&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;transaction&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
            &lt;span class="nd"&gt;@PathVariable&lt;/span&gt; &lt;span class="no"&gt;UUID&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="nd"&gt;@Valid&lt;/span&gt; &lt;span class="nd"&gt;@RequestBody&lt;/span&gt; &lt;span class="nc"&gt;TransactionRequest&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;TransactionResponse&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;accountService&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;executeTransaction&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt; &lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;ResponseEntity&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ok&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;REST Example&lt;/strong&gt; POST /api/accounts/3f93c1c2-1c52-4df5-8c6a-9b0c6d7c5c11/transaction&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DEPOSIT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"amount"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"lockingMode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"OPTIMISTIC"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"reason"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"API_DEPOSIT"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;or&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"WITHDRAWAL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"amount"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"lockingMode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"PESSIMISTIC"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"reason"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"API_WITHDRAWAL"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Concurrency Control Strategy in JPA&lt;/strong&gt; The Java application uses JPA (Java Persistence API) — an ORM framework — to interact with a PostgreSQL database while maintaining data integrity during concurrent transactions. It also explores two different locking strategies, described below, to demonstrate how JPA handles concurrency in real-world scenarios.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Optimistic Locking Strategy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;@Version&lt;/code&gt; field provides optimistic concurrency control — each update automatically increments the version. When two transactions modify the same Account, the second commit detects a version mismatch and throws an OptimisticLockException, preventing lost updates without requiring database locks. A retry strategy with controlled backoff (&lt;code&gt;5&lt;/code&gt; attempts) can be applied to gracefully handle these transient conflicts.&lt;/p&gt;

&lt;p&gt;Entity&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Data&lt;/span&gt;
&lt;span class="nd"&gt;@Entity&lt;/span&gt;
&lt;span class="nd"&gt;@Table&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"accounts"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;schema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"app"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Account&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

  &lt;span class="nd"&gt;@Id&lt;/span&gt; &lt;span class="nd"&gt;@UuidGenerator&lt;/span&gt;
  &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="no"&gt;UUID&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

  &lt;span class="nd"&gt;@Column&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"owner_name"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nullable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;ownerName&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

  &lt;span class="nd"&gt;@Column&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;length&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nullable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;currency&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

  &lt;span class="nd"&gt;@Column&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;nullable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;precision&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scale&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;BigDecimal&lt;/span&gt; &lt;span class="n"&gt;balance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BigDecimal&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ZERO&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

  &lt;span class="nd"&gt;@Version&lt;/span&gt;
  &lt;span class="nd"&gt;@Column&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;nullable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;long&lt;/span&gt; &lt;span class="n"&gt;version&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

  &lt;span class="nd"&gt;@Column&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"updated_at"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;columnDefinition&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"timestamptz"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nullable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;Instant&lt;/span&gt; &lt;span class="n"&gt;updatedAt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Instant&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;now&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Transactional&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;isolation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Isolation&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;READ_COMMITTED&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;rollbackFor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Exception&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nd"&gt;@Override&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;TransactionResponse&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="no"&gt;UUID&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;TransactionType&lt;/span&gt; &lt;span class="n"&gt;type&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;BigDecimal&lt;/span&gt; &lt;span class="n"&gt;amt&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;account&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;accountRepo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;findById&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;orElseThrow&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TransactionType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;DEPOSIT&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;equals&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;type&lt;/span&gt;&lt;span class="o"&gt;))&lt;/span&gt; 
      &lt;span class="n"&gt;account&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;deposit&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;amt&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt; 
    &lt;span class="k"&gt;else&lt;/span&gt; 
      &lt;span class="n"&gt;account&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;withdraw&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;amt&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;ledgerEntry&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ledgerRepo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;save&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LedgerEntry&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;of&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;account&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;type&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;amt&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;TransactionResponse&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;success&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;account&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ledgerEntry&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
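&lt;p&gt;The retry with controlled backoff mentioned earlier can live in a thin wrapper around the transactional method. The sketch below is plain Java and purely illustrative (the class and constant names are not from the demo project): it re-runs the supplied operation up to 5 times, sleeping a little longer after each transient conflict.&lt;/p&gt;

```java
import java.util.function.Supplier;

// Illustrative retry wrapper for transient optimistic-lock conflicts.
// In the real service this would wrap the @Transactional execute(...) call
// and catch OptimisticLockException specifically.
public final class OptimisticRetry {

    private static final int MAX_ATTEMPTS = 5;      // retry budget from the article
    private static final long BASE_BACKOFF_MS = 50; // linear backoff: 50ms, 100ms, ...

    public static <T> T execute(Supplier<T> txn) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                return txn.get();              // attempt the transaction
            } catch (RuntimeException e) {     // stands in for OptimisticLockException
                last = e;
                sleepQuietly(BASE_BACKOFF_MS * attempt);
            }
        }
        throw last;                            // budget exhausted: surface the conflict
    }

    private static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted during backoff", ie);
        }
    }
}
```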



&lt;p&gt;&lt;strong&gt;Pessimistic Locking Strategy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;@Lock(LockModeType.PESSIMISTIC_WRITE)&lt;/code&gt; annotation enforces pessimistic locking by issuing a database-level &lt;code&gt;SELECT ... FOR UPDATE&lt;/code&gt; query. This explicitly locks the selected Account row until the current transaction completes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;AccountRepository&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;JpaRepository&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Account&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="no"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

  &lt;span class="c1"&gt;// Pessimistic row lock (SELECT ... FOR UPDATE)&lt;/span&gt;
  &lt;span class="nd"&gt;@Lock&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LockModeType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;PESSIMISTIC_WRITE&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="nd"&gt;@Query&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"select a from Account a where a.id = :id"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="nc"&gt;Optional&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Account&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;findForUpdate&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nd"&gt;@Param&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="no"&gt;UUID&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
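&lt;p&gt;What the row lock buys you is serialization: while one transaction holds the lock, competing writers simply wait instead of failing. The in-memory sketch below is an analogy only (no database involved; the class is illustrative): a &lt;code&gt;ReentrantLock&lt;/code&gt; stands in for the locked row, so concurrent deposits never lose an update.&lt;/p&gt;

```java
import java.math.BigDecimal;
import java.util.concurrent.locks.ReentrantLock;

// In-memory analogy of PESSIMISTIC_WRITE: the lock plays the role of the
// row lock taken by SELECT ... FOR UPDATE, held until "commit" (unlock).
class LockedAccount {

    private final ReentrantLock rowLock = new ReentrantLock();
    private BigDecimal balance = BigDecimal.ZERO;

    void deposit(BigDecimal amount) {
        rowLock.lock();                       // competing writers block here
        try {
            balance = balance.add(amount);    // read-modify-write is now safe
        } finally {
            rowLock.unlock();                 // "transaction commits", lock released
        }
    }

    BigDecimal balance() {
        rowLock.lock();
        try {
            return balance;
        } finally {
            rowLock.unlock();
        }
    }
}
```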






&lt;h2&gt;
  
  
  Step 6: Access the App
&lt;/h2&gt;

&lt;p&gt;Get the Ingress IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get ingress &lt;span class="nt"&gt;-n&lt;/span&gt; demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME             CLASS   HOSTS            ADDRESS          PORTS   AGE
webapp-ingress   nginx   app.demo.local   192.168.56.240   80      21h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then test endpoints from your host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Host: app.demo.local"&lt;/span&gt;  http://192.168.56.240/actuator/health
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 7: Testing with k6
&lt;/h2&gt;

&lt;p&gt;In this step, the environment is fully prepared to simulate concurrent bank account transactions and observe how the locking mechanisms behave under load.&lt;/p&gt;

&lt;p&gt;You can use the built-in k6 test script to run the simulation. For each test run, you’ll choose an account and a specific locking strategy (for example, optimistic or pessimistic locking). The script then launches &lt;code&gt;50&lt;/code&gt; virtual users (VUs) running concurrently, using the shared-iterations executor — a total of &lt;code&gt;100&lt;/code&gt; iterations distributed across all VUs. This setup effectively mimics concurrent access to the same account, allowing you to verify how data integrity is preserved during simultaneous transactions.&lt;/p&gt;
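&lt;p&gt;Before running it, it helps to picture what the optimistic run is exercising. The minimal in-memory model below (purely illustrative; no HTTP, JPA, or database) shows two "transactions" reading the same version: the first commit wins, the second fails its version check and must retry — exactly the conflict the load test provokes at scale.&lt;/p&gt;

```java
// Minimal model of optimistic version checking: a commit succeeds only if
// the version observed at read time is still the current one.
class VersionedBalance {

    private long version = 0;
    private long balanceCents = 0;

    // Snapshot of {version, balance}, like loading the entity.
    synchronized long[] read() {
        return new long[] { version, balanceCents };
    }

    // The optimistic check: reject the write if someone committed in between.
    synchronized boolean tryCommit(long expectedVersion, long newBalanceCents) {
        if (version != expectedVersion) {
            return false;                  // same role as OptimisticLockException
        }
        balanceCents = newBalanceCents;
        version++;                         // @Version increments on every update
        return true;
    }
}
```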

&lt;p&gt;To get started, install k6 on your host system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Debian/Ubuntu&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;-k&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--no-default-keyring&lt;/span&gt; &lt;span class="nt"&gt;--keyring&lt;/span&gt; /usr/share/keyrings/k6-archive-keyring.gpg &lt;span class="nt"&gt;--keyserver&lt;/span&gt; hkp://keyserver.ubuntu.com:80 &lt;span class="nt"&gt;--recv-keys&lt;/span&gt; C5AD17C747E3415A3642D57D77C6C491D6AC1D69
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/k6.list
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;k6
&lt;span class="c"&gt;# Windows&lt;/span&gt;
choco &lt;span class="nb"&gt;install &lt;/span&gt;k6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the load test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# host&lt;/span&gt;
k6 run &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;BASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://192.168.56.240 &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;ACCOUNT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3f93c1c2-1c52-4df5-8c6a-9b0c6d7c5c11 &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;MODE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;OPTIMISTIC &lt;span class="nb"&gt;test&lt;/span&gt;/k6-load-test.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
         &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;      &lt;span class="nx"&gt;Grafana&lt;/span&gt;   &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="err"&gt;‾‾&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;  
    &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="se"&gt;\ &lt;/span&gt;&lt;span class="sr"&gt; /&lt;/span&gt;  &lt;span class="err"&gt;\&lt;/span&gt;     &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;  &lt;span class="nx"&gt;__&lt;/span&gt;   &lt;span class="o"&gt;/&lt;/span&gt;  &lt;span class="sr"&gt;/  &lt;/span&gt;&lt;span class="err"&gt; 
&lt;/span&gt;   &lt;span class="o"&gt;/&lt;/span&gt;  &lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;    &lt;span class="err"&gt;\&lt;/span&gt;    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="sr"&gt;/ /&lt;/span&gt;  &lt;span class="o"&gt;/&lt;/span&gt;   &lt;span class="err"&gt;‾‾\&lt;/span&gt; 
  &lt;span class="o"&gt;/&lt;/span&gt;          &lt;span class="err"&gt;\&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt;   &lt;span class="p"&gt;(&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="err"&gt;‾&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt;
 &lt;span class="sr"&gt;/ __________ &lt;/span&gt;&lt;span class="se"&gt;\ &lt;/span&gt;&lt;span class="sr"&gt; |_|&lt;/span&gt;&lt;span class="se"&gt;\_\ &lt;/span&gt;&lt;span class="sr"&gt; &lt;/span&gt;&lt;span class="se"&gt;\_&lt;/span&gt;&lt;span class="sr"&gt;____/&lt;/span&gt; 

     &lt;span class="nx"&gt;execution&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;local&lt;/span&gt;
        &lt;span class="nx"&gt;script&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;k6&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;load&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;js&lt;/span&gt;
        &lt;span class="nx"&gt;output&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;

     &lt;span class="nx"&gt;scenarios&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;100.00&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="nx"&gt;scenario&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt; &lt;span class="nx"&gt;max&lt;/span&gt; &lt;span class="nx"&gt;VUs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="nx"&gt;m30s&lt;/span&gt; &lt;span class="nx"&gt;max&lt;/span&gt; &lt;span class="nf"&gt;duration &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;incl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;graceful&lt;/span&gt; &lt;span class="nx"&gt;stop&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
              &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;concurrent_load&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="nx"&gt;iterations&lt;/span&gt; &lt;span class="nx"&gt;shared&lt;/span&gt; &lt;span class="nx"&gt;among&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt; &lt;span class="nc"&gt;VUs &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;maxDuration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="nx"&gt;m0s&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;gracefulStop&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━&lt;/span&gt;  &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;▶&lt;/span&gt; &lt;span class="nx"&gt;K6&lt;/span&gt; &lt;span class="nx"&gt;Load&lt;/span&gt; &lt;span class="nx"&gt;Test&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;AccountController&lt;/span&gt;          &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━&lt;/span&gt;  &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;▶&lt;/span&gt; &lt;span class="nx"&gt;Target&lt;/span&gt; &lt;span class="nx"&gt;base&lt;/span&gt; &lt;span class="nx"&gt;URL&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//192.168.56.240:8080     source=console&lt;/span&gt;
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;▶&lt;/span&gt; &lt;span class="nx"&gt;Account&lt;/span&gt; &lt;span class="nx"&gt;ID&lt;/span&gt;      &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="nx"&gt;f93c1c2&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;c52&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="nx"&gt;df5&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="nx"&gt;c6a&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="nx"&gt;b0c6d7c5c11&lt;/span&gt;  &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;▶&lt;/span&gt; &lt;span class="nx"&gt;Locking&lt;/span&gt; &lt;span class="nx"&gt;Mode&lt;/span&gt;    &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;OPTIMISTIC&lt;/span&gt;                &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━&lt;/span&gt;  &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;                                               &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;📊&lt;/span&gt; &lt;span class="nx"&gt;Initial&lt;/span&gt; &lt;span class="nx"&gt;Account&lt;/span&gt; &lt;span class="nx"&gt;State&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;                      &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;    &lt;span class="nx"&gt;Balance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt; &lt;span class="nx"&gt;USD&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;Version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;494&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;Owner&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Alice&lt;/span&gt;  &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━&lt;/span&gt;  &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;                                               &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;
&lt;span class="nx"&gt;ERRO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;ERROR&lt;/span&gt; &lt;span class="mi"&gt;422&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="nx"&gt;WITHDRAWAL&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="nx"&gt;failed&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Business&lt;/span&gt; &lt;span class="nx"&gt;rule&lt;/span&gt; &lt;span class="nx"&gt;violation&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;insufficient&lt;/span&gt; &lt;span class="nx"&gt;funds&lt;/span&gt;  &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2025&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;02&lt;/span&gt;&lt;span class="nx"&gt;T13&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;44.705&lt;/span&gt;&lt;span class="nx"&gt;Z&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="nx"&gt;TX&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;a52d4700&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;e63a&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;463&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;808&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;66&lt;/span&gt;&lt;span class="nx"&gt;bf4132ba26&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;DEPOSIT&lt;/span&gt; &lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="nx"&gt;USD&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;Balance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;34&lt;/span&gt; &lt;span class="nc"&gt;USD &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;v494&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;                                                                                                                                               
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2025&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;02&lt;/span&gt;&lt;span class="nx"&gt;T13&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;44.709&lt;/span&gt;&lt;span class="nx"&gt;Z&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="nx"&gt;TX&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="nx"&gt;d001c33&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;c64a&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;44&lt;/span&gt;&lt;span class="nx"&gt;eb&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;b39f&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;93&lt;/span&gt;&lt;span class="nx"&gt;ed854c6e02&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;DEPOSIT&lt;/span&gt; &lt;span class="mi"&gt;46&lt;/span&gt; &lt;span class="nx"&gt;USD&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;Balance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;159&lt;/span&gt; &lt;span class="nc"&gt;USD &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;v496&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2025&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;02&lt;/span&gt;&lt;span class="nx"&gt;T13&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;44.709&lt;/span&gt;&lt;span class="nx"&gt;Z&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="nx"&gt;TX&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;ba848747&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;f5ea&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="nx"&gt;ebf&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;b1b2&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="nx"&gt;c501c44cd2b&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;DEPOSIT&lt;/span&gt; &lt;span class="mi"&gt;79&lt;/span&gt; &lt;span class="nx"&gt;USD&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;Balance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;113&lt;/span&gt; &lt;span class="nc"&gt;USD &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;v495&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;                                                                                                                                              
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2025&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;02&lt;/span&gt;&lt;span class="nx"&gt;T13&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;44.723&lt;/span&gt;&lt;span class="nx"&gt;Z&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="nx"&gt;TX&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;87&lt;/span&gt;&lt;span class="nx"&gt;d8874d&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;af6c&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;47&lt;/span&gt;&lt;span class="nx"&gt;a5&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;9901&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;823&lt;/span&gt;&lt;span class="nx"&gt;f616e8add&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;DEPOSIT&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt; &lt;span class="nx"&gt;USD&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;Balance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;169&lt;/span&gt; &lt;span class="nc"&gt;USD &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;v497&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="nx"&gt;skip&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0004&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━&lt;/span&gt;  
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0004&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;📊&lt;/span&gt; &lt;span class="nx"&gt;Final&lt;/span&gt; &lt;span class="nx"&gt;Account&lt;/span&gt; &lt;span class="nx"&gt;State&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;                        &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0004&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;    &lt;span class="nx"&gt;Balance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;501&lt;/span&gt; &lt;span class="nx"&gt;USD&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;Version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;582&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;Owner&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Alice&lt;/span&gt;                                                                                                                                                                                                              
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0004&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;📈&lt;/span&gt; &lt;span class="nx"&gt;Changes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;                                    &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;                                                                                                                                                                                                              
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0004&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;    &lt;span class="nx"&gt;Balance&lt;/span&gt; &lt;span class="nx"&gt;Change&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;491&lt;/span&gt; &lt;span class="nx"&gt;USD&lt;/span&gt;                   &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;                                                                                                                                                                                                               
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0004&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;    &lt;span class="nx"&gt;Version&lt;/span&gt; &lt;span class="nx"&gt;Change&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;88&lt;/span&gt;                        
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0004&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━&lt;/span&gt;  &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;                                                                                                                                                                                         
&lt;span class="nx"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0004&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;✅&lt;/span&gt; &lt;span class="nx"&gt;Test&lt;/span&gt; &lt;span class="nx"&gt;completed&lt;/span&gt; &lt;span class="nx"&gt;successfully&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;                &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;                                                                                                                                                                                                              


  &lt;span class="err"&gt;█&lt;/span&gt; &lt;span class="nx"&gt;THRESHOLDS&lt;/span&gt;

    &lt;span class="nx"&gt;http_req_duration&lt;/span&gt;
    &lt;span class="err"&gt;✓&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;p(95)&amp;lt;2000&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="nf"&gt;p&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;95&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;1.02&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;

    &lt;span class="nx"&gt;http_req_failed&lt;/span&gt;
    &lt;span class="err"&gt;✗&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;rate&amp;lt;0.1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="nx"&gt;rate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;11.76&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;

    &lt;span class="nx"&gt;version_conflicts&lt;/span&gt;
    &lt;span class="err"&gt;✓&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;rate&amp;lt;0.3&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="nx"&gt;rate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.00&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;


  &lt;span class="err"&gt;█&lt;/span&gt; &lt;span class="nx"&gt;TOTAL&lt;/span&gt; &lt;span class="nx"&gt;RESULTS&lt;/span&gt;

    &lt;span class="nx"&gt;checks_total&lt;/span&gt;&lt;span class="p"&gt;.......:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;    &lt;span class="mf"&gt;74.544329&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;
    &lt;span class="nx"&gt;checks_succeeded&lt;/span&gt;&lt;span class="p"&gt;...:&lt;/span&gt; &lt;span class="mf"&gt;88.00&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;264&lt;/span&gt; &lt;span class="nx"&gt;out&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;
    &lt;span class="nx"&gt;checks_failed&lt;/span&gt;&lt;span class="p"&gt;......:&lt;/span&gt; &lt;span class="mf"&gt;12.00&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;36&lt;/span&gt; &lt;span class="nx"&gt;out&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;

    &lt;span class="err"&gt;✗&lt;/span&gt; &lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="nx"&gt;is&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;
      &lt;span class="err"&gt;↳&lt;/span&gt;  &lt;span class="mi"&gt;88&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="err"&gt;—&lt;/span&gt; &lt;span class="err"&gt;✓&lt;/span&gt; &lt;span class="mi"&gt;88&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="err"&gt;✗&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;
    &lt;span class="err"&gt;✗&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="nx"&gt;has&lt;/span&gt; &lt;span class="nx"&gt;account&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;
      &lt;span class="err"&gt;↳&lt;/span&gt;  &lt;span class="mi"&gt;88&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="err"&gt;—&lt;/span&gt; &lt;span class="err"&gt;✓&lt;/span&gt; &lt;span class="mi"&gt;88&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="err"&gt;✗&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;
    &lt;span class="err"&gt;✗&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="nx"&gt;has&lt;/span&gt; &lt;span class="nx"&gt;transaction&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;
      &lt;span class="err"&gt;↳&lt;/span&gt;  &lt;span class="mi"&gt;88&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="err"&gt;—&lt;/span&gt; &lt;span class="err"&gt;✓&lt;/span&gt; &lt;span class="mi"&gt;88&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="err"&gt;✗&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;

    &lt;span class="nx"&gt;CUSTOM&lt;/span&gt;
    &lt;span class="nx"&gt;account_balance&lt;/span&gt;&lt;span class="p"&gt;................:&lt;/span&gt; &lt;span class="nx"&gt;avg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;499.056818&lt;/span&gt; &lt;span class="nx"&gt;min&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;       &lt;span class="nx"&gt;med&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;506&lt;/span&gt;     &lt;span class="nx"&gt;max&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;983&lt;/span&gt;   &lt;span class="nf"&gt;p&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;767.9&lt;/span&gt;    &lt;span class="nf"&gt;p&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;95&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;815.95&lt;/span&gt;
    &lt;span class="nx"&gt;deposits_total&lt;/span&gt;&lt;span class="p"&gt;.................:&lt;/span&gt; &lt;span class="mi"&gt;48&lt;/span&gt;     &lt;span class="mf"&gt;11.927093&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;
    &lt;span class="nx"&gt;other_errors&lt;/span&gt;&lt;span class="p"&gt;...................:&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;     &lt;span class="mf"&gt;2.981773&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;
    &lt;span class="nx"&gt;version_conflicts&lt;/span&gt;&lt;span class="p"&gt;..............:&lt;/span&gt; &lt;span class="mf"&gt;0.00&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;  &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;out&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;withdraws_total&lt;/span&gt;&lt;span class="p"&gt;................:&lt;/span&gt; &lt;span class="mi"&gt;52&lt;/span&gt;     &lt;span class="mf"&gt;12.921017&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;

    &lt;span class="nx"&gt;HTTP&lt;/span&gt;
    &lt;span class="nx"&gt;http_req_duration&lt;/span&gt;&lt;span class="p"&gt;..............:&lt;/span&gt; &lt;span class="nx"&gt;avg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;308.79&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt;   &lt;span class="nx"&gt;min&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;10.07&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt;  &lt;span class="nx"&gt;med&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;91.88&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt; &lt;span class="nx"&gt;max&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;2.11&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="nf"&gt;p&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;900.17&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt; &lt;span class="nf"&gt;p&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;95&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;1.02&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;expected_response&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;}...:&lt;/span&gt; &lt;span class="nx"&gt;avg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;332.89&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt;   &lt;span class="nx"&gt;min&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;10.07&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt;  &lt;span class="nx"&gt;med&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;99.69&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt; &lt;span class="nx"&gt;max&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;2.11&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="nf"&gt;p&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;932.24&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt; &lt;span class="nf"&gt;p&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;95&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;1.24&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;
    &lt;span class="nx"&gt;http_req_failed&lt;/span&gt;&lt;span class="p"&gt;................:&lt;/span&gt; &lt;span class="mf"&gt;11.76&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt; &lt;span class="nx"&gt;out&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="mi"&gt;102&lt;/span&gt;
    &lt;span class="nx"&gt;http_reqs&lt;/span&gt;&lt;span class="p"&gt;......................:&lt;/span&gt; &lt;span class="mi"&gt;102&lt;/span&gt;    &lt;span class="mf"&gt;25.345072&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;

    &lt;span class="nx"&gt;EXECUTION&lt;/span&gt;
    &lt;span class="nx"&gt;iteration_duration&lt;/span&gt;&lt;span class="p"&gt;.............:&lt;/span&gt; &lt;span class="nx"&gt;avg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;1.5&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;       &lt;span class="nx"&gt;min&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;578.54&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt; &lt;span class="nx"&gt;med&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;1.48&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;   &lt;span class="nx"&gt;max&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;3.79&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="nf"&gt;p&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;2.12&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;    &lt;span class="nf"&gt;p&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;95&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;2.51&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;
    &lt;span class="nx"&gt;iterations&lt;/span&gt;&lt;span class="p"&gt;.....................:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;    &lt;span class="mf"&gt;24.84811&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;
    &lt;span class="nx"&gt;vus&lt;/span&gt;&lt;span class="p"&gt;............................:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;      &lt;span class="nx"&gt;min&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;         &lt;span class="nx"&gt;max&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;
    &lt;span class="nx"&gt;vus_max&lt;/span&gt;&lt;span class="p"&gt;........................:&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;     &lt;span class="nx"&gt;min&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;        &lt;span class="nx"&gt;max&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;

    &lt;span class="nx"&gt;NETWORK&lt;/span&gt;
    &lt;span class="nx"&gt;data_received&lt;/span&gt;&lt;span class="p"&gt;..................:&lt;/span&gt; &lt;span class="mi"&gt;94&lt;/span&gt; &lt;span class="nx"&gt;kB&lt;/span&gt;  &lt;span class="mi"&gt;23&lt;/span&gt; &lt;span class="nx"&gt;kB&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;
    &lt;span class="nx"&gt;data_sent&lt;/span&gt;&lt;span class="p"&gt;......................:&lt;/span&gt; &lt;span class="mi"&gt;28&lt;/span&gt; &lt;span class="nx"&gt;kB&lt;/span&gt;  &lt;span class="mf"&gt;6.9&lt;/span&gt; &lt;span class="nx"&gt;kB&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;

&lt;span class="nf"&gt;running &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="nx"&gt;m04&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt; &lt;span class="nx"&gt;VUs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;interrupted&lt;/span&gt; &lt;span class="nx"&gt;iterations&lt;/span&gt;                                                                                                                                                                                                               
&lt;span class="nx"&gt;concurrent_load&lt;/span&gt; &lt;span class="err"&gt;✓&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;======================================&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt; &lt;span class="nx"&gt;VUs&lt;/span&gt;  &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="nx"&gt;m04&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="nx"&gt;m0s&lt;/span&gt;  &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="nx"&gt;shared&lt;/span&gt; &lt;span class="nx"&gt;iters&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 8: Monitoring Kubernetes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Using k9s
&lt;/h3&gt;

&lt;p&gt;k9s is a terminal-based UI for Kubernetes. Instead of typing dozens of &lt;code&gt;kubectl&lt;/code&gt; commands, you get a fast, interactive dashboard right inside your terminal — perfect for developers, DevOps engineers, and operators who live in the CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;k9s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Useful views:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;:node&lt;/code&gt; - View all nodes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;:pod&lt;/code&gt; - View all pods&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;:deployment&lt;/code&gt; - View deployments&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;:service&lt;/code&gt; - View services&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;:ingress&lt;/code&gt; - View ingress rules&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;:pv&lt;/code&gt; - View persistent volumes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;:pvc&lt;/code&gt; - View persistent volume claims&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;:event&lt;/code&gt; - View cluster events&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
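
&lt;p&gt;For reference, each k9s view maps roughly to a plain &lt;code&gt;kubectl&lt;/code&gt; command (a rough equivalence, not k9s output — all of these are read-only, and &lt;code&gt;-A&lt;/code&gt; lists across all namespaces):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes            # :node
kubectl get pods -A          # :pod
kubectl get deployments -A   # :deployment
kubectl get services -A      # :service
kubectl get ingress -A       # :ingress
kubectl get pv               # :pv (cluster-scoped, no namespace flag)
kubectl get pvc -A           # :pvc
kubectl get events -A        # :event
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;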




&lt;h2&gt;
  
  
  Step 9: Cleanup
&lt;/h2&gt;

&lt;p&gt;Stop all VMs but keep state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vagrant halt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Destroy everything (full reset):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vagrant destroy &lt;span class="nt"&gt;-f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
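
&lt;p&gt;Putting it together, a typical lab lifecycle looks like this (&lt;code&gt;vagrant status&lt;/code&gt; and &lt;code&gt;vagrant up&lt;/code&gt; are the standard commands to inspect and rebuild; provisioning time on a full rebuild depends on your machine):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vagrant status       # inspect current VM state
vagrant halt         # stop all VMs, keep disk state
vagrant destroy -f   # wipe everything
vagrant up           # re-provision the full cluster from scratch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;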






&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Infrastructure as Code made simple&lt;/strong&gt; — Spin up a complete multi-node Kubernetes cluster with a single &lt;code&gt;vagrant up&lt;/code&gt;. No cloud required.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Realistic local lab&lt;/strong&gt; — Simulate a production-like environment with control plane, workers, networking, storage, and ingress — all from your laptop.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Application + Infrastructure synergy&lt;/strong&gt; — Deploy a real Spring Boot + PostgreSQL system to understand how app logic and cluster behavior interact under load.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data consistency in action&lt;/strong&gt; — Experiment hands-on with JPA’s optimistic and pessimistic locking strategies to see how concurrency control works in practice.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance validation&lt;/strong&gt; — Use k6 to generate concurrent transactions and validate system reliability through real metrics and stress tests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Full observability from the CLI&lt;/strong&gt; — With k9s, monitor nodes, pods, and resources interactively — no GUI required.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reproducibility and cleanup&lt;/strong&gt; — Destroy and rebuild your environment anytime with &lt;code&gt;vagrant destroy -f&lt;/code&gt;, ensuring consistent test conditions for every run.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We’ve built more than just a demo — we’ve created a fully automated multi-node Kubernetes lab that runs a real Spring Boot + PostgreSQL banking system with live networking, storage, and load testing. From Vagrant provisioning to JPA locking strategies and k6 concurrency simulations, every layer demonstrates how consistency and automation come together in modern systems.&lt;/p&gt;

&lt;p&gt;This setup isn’t about production readiness — it’s about understanding. You now have a reproducible playground to experiment with distributed transactions, concurrency control, and cluster operations — all on your own machine. It’s a hands-on way to learn how reliability and scalability emerge when software, data, and infrastructure align.&lt;/p&gt;




&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://developer.hashicorp.com/vagrant/docs" rel="noopener noreferrer"&gt;Vagrant Docs&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://kubernetes.io/docs/home/" rel="noopener noreferrer"&gt;Kubernetes Docs&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://k9scli.io/" rel="noopener noreferrer"&gt;k9s&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://k6.io/docs/" rel="noopener noreferrer"&gt;k6 Load Testing&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/arata-x/vagrant-k8s-bank-demo.git" rel="noopener noreferrer"&gt;&lt;em&gt;Demo Project&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;blockquote&gt;
&lt;p&gt;🧡 &lt;em&gt;“Build it. Break it. Rebuild it — that’s how real engineering insight is forged.”&lt;/em&gt;&lt;br&gt;&lt;br&gt;
— ArataX&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>kubernetes</category>
      <category>vagrant</category>
      <category>springboot</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Mastering Kafka: Concept, Architecture, and Deployment</title>
      <dc:creator>Arata</dc:creator>
      <pubDate>Tue, 23 Sep 2025 13:03:09 +0000</pubDate>
      <link>https://forem.com/aratax/mastering-kafka-concept-architecture-design-and-deployment-4pm9</link>
      <guid>https://forem.com/aratax/mastering-kafka-concept-architecture-design-and-deployment-4pm9</guid>
      <description>&lt;h2&gt;
  
  
  Preface
&lt;/h2&gt;

&lt;p&gt;Before diving into this deep-dive, I encourage you first to read the article &lt;strong&gt;“&lt;a href="https://dev.to/aratax/kafka-made-simple-a-hands-on-quickstart-with-docker-and-spring-boot-180i"&gt;Kafka Made Simple: A Hands-On Quickstart with Docker and Spring Boot&lt;/a&gt;”&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
That piece serves as a practical gateway into the Kafka ecosystem, helping you set up a local cluster, publish your first events, and see how Kafka fits into a real Spring Boot project.  &lt;/p&gt;

&lt;p&gt;This article builds on that foundation. Instead of focusing only on the &lt;em&gt;how&lt;/em&gt;, here we unpack the &lt;em&gt;why&lt;/em&gt; and the &lt;em&gt;what&lt;/em&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;concepts&lt;/strong&gt; that make Kafka more than just a messaging system.
&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;architecture&lt;/strong&gt; that ensures durability, scalability, and fault tolerance.
&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;design principles&lt;/strong&gt; behind Kafka’s performance.
&lt;/li&gt;
&lt;li&gt;A systematic &lt;strong&gt;deep dive&lt;/strong&gt; into partitions, logs, replication, producers, consumers, transactions, and rebalancing.
&lt;/li&gt;
&lt;li&gt;Practical &lt;strong&gt;deployment insights&lt;/strong&gt; and configuration guidance.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 Think of this as the &lt;strong&gt;conceptual companion&lt;/strong&gt; to your hands-on quickstart—helping you see the big picture, design production-ready systems, and apply Kafka confidently in real-world projects.&lt;/p&gt;
&lt;h2&gt;
  
  
  Outline
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;1. Core Design Principles&lt;/li&gt;
&lt;li&gt;2. Partitions&lt;/li&gt;
&lt;li&gt;3. Log&lt;/li&gt;
&lt;li&gt;4. Key and Log Compaction&lt;/li&gt;
&lt;li&gt;5. Replication&lt;/li&gt;
&lt;li&gt;6. Controller&lt;/li&gt;
&lt;li&gt;7. Producer&lt;/li&gt;
&lt;li&gt;8. Consumer&lt;/li&gt;
&lt;li&gt;9. Offset Tracking&lt;/li&gt;
&lt;li&gt;10. Rebalance&lt;/li&gt;
&lt;li&gt;11. Exactly-Once and Transactions&lt;/li&gt;
&lt;li&gt;12. Deployment&lt;/li&gt;
&lt;li&gt;13. Key Takeaways&lt;/li&gt;
&lt;li&gt;14. Conclusion&lt;/li&gt;
&lt;li&gt;Appendix: Demo Project&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  1. Core Design Principles
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Distributed and Scalable Architecture
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kafka runs as a cluster of brokers, enabling horizontal scalability.&lt;/li&gt;
&lt;li&gt;Topics are partitioned across brokers to support parallelism and high throughput.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Immutable, Append-Only Log
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Each partition is a structured commit log with sequential message appends.&lt;/li&gt;
&lt;li&gt;Simplifies replication, recovery, and stream processing.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Decoupled Producers and Consumers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kafka uses a publish-subscribe model with loose coupling.&lt;/li&gt;
&lt;li&gt;Consumers read independently without affecting producers.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Message Durability and Fault Tolerance
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Messages are persisted to disk and replicated across brokers.&lt;/li&gt;
&lt;li&gt;Leader-follower replication ensures durability during broker failures.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  High Throughput and Low Latency
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kafka handles millions of messages per second with minimal latency.&lt;/li&gt;
&lt;li&gt;Batching, compression, and efficient I/O optimize performance.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Stream-Oriented Processing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kafka Streams and integrations (e.g., Flink, Spark) support real-time processing.&lt;/li&gt;
&lt;li&gt;Enables event-driven architectures and stateful computations.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Consumer-Controlled Offset Management
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Consumers manage their own offsets for replayability and fault recovery.&lt;/li&gt;
&lt;li&gt;Supports exactly-once or at-least-once semantics based on configuration.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Pluggable and Extensible APIs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kafka provides Producer, Consumer, Streams, and Connect APIs.&lt;/li&gt;
&lt;li&gt;Kafka Connect simplifies integration with external systems like databases and Hadoop.&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  2. Partitions
&lt;/h2&gt;

&lt;p&gt;Partitions are fundamental to Kafka’s ability to scale horizontally and maintain high availability across distributed systems. &lt;br&gt;
Each topic is split into one or more partitions, which serve as independent, ordered logs.&lt;/p&gt;
&lt;h3&gt;
  
  
  What is a Partition?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;An &lt;strong&gt;ordered, immutable log&lt;/strong&gt; of records.
&lt;/li&gt;
&lt;li&gt;Each record has a unique &lt;strong&gt;offset&lt;/strong&gt; (like a line number).
&lt;/li&gt;
&lt;li&gt;Ordering is &lt;strong&gt;guaranteed within a partition&lt;/strong&gt;, but not across partitions.
&lt;/li&gt;
&lt;li&gt;Producers append sequentially, consumers read sequentially.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Think of a partition as a “mini-log” that can be processed independently.&lt;/p&gt;
&lt;h3&gt;
  
  
  Partitioning Strategy
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Round-robin&lt;/strong&gt; → default if no key is provided; balances evenly.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key-based hashing&lt;/strong&gt; → same key always maps to the same partition; ensures per-key ordering.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom partitioner&lt;/strong&gt; → user-supplied logic for specialized routing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Use a meaningful key (e.g., customer ID) for predictable ordering.&lt;/p&gt;
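&lt;p&gt;As an illustration, key-based routing can be sketched in a few lines of Python. This is a conceptual stand-in: Kafka’s default partitioner hashes keys with murmur2, not md5, but the property it demonstrates is the same.&lt;/p&gt;

```python
# Conceptual sketch of key-based partition routing (not Kafka's actual
# murmur2 partitioner): a stable hash of the key picks the partition,
# so records with the same key always land in the same partition.
import hashlib

def pick_partition(key: str, num_partitions: int) -> int:
    # Use a stable hash (md5 here) so the mapping is deterministic
    # across processes, unlike Python's built-in hash().
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Same key always maps to the same partition, which is what
# gives Kafka its per-key ordering guarantee.
p1 = pick_partition("customer-42", 6)
p2 = pick_partition("customer-42", 6)
assert p1 == p2
```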
&lt;h3&gt;
  
  
  Ordering Guarantees
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Records with the same key always land in the same partition.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Per-key ordering is guaranteed.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Global ordering across partitions is &lt;strong&gt;not provided&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚠️ If you need total ordering, use a &lt;strong&gt;single partition&lt;/strong&gt; (but this limits throughput).&lt;/p&gt;
&lt;h3&gt;
  
  
  Parallelism &amp;amp; Consumer Scaling
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;One consumer in a group reads from one or more partitions.
&lt;/li&gt;
&lt;li&gt;More partitions → more consumers can share the workload.
&lt;/li&gt;
&lt;li&gt;This enables Kafka to scale horizontally with &lt;strong&gt;consumer groups&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Match partition count to expected parallelism (e.g., number of consumer instances).&lt;/p&gt;
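&lt;p&gt;A minimal sketch of how a consumer group shares partitions, similar in spirit to Kafka’s round-robin assignor (the real assignors also handle subscriptions and rebalances):&lt;/p&gt;

```python
# Conceptual sketch: spread partitions over the members of a consumer
# group so each partition has exactly one consumer within the group.
def assign(partitions, consumers):
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        # Round-robin: partition i goes to consumer i mod group size.
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

print(assign([0, 1, 2, 3, 4, 5], ["c1", "c2", "c3"]))
# {'c1': [0, 3], 'c2': [1, 4], 'c3': [2, 5]}
```

With more partitions than consumers, each consumer handles several; extra consumers beyond the partition count would simply sit idle.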
&lt;h3&gt;
  
  
  Trade-offs
&lt;/h3&gt;

&lt;p&gt;Adding partitions boosts throughput and enables horizontal scaling, but also increases metadata, file handles, and controller load—balance performance with operational overhead.&lt;/p&gt;

&lt;p&gt;⚠️ Too many partitions per broker can hurt stability (common pitfall in large clusters).&lt;/p&gt;
&lt;h3&gt;
  
  
  Partition Reassignment &amp;amp; Expansion
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kafka supports &lt;strong&gt;rebalancing partitions&lt;/strong&gt; across brokers for load balancing.
&lt;/li&gt;
&lt;li&gt;Adding partitions later increases capacity but may &lt;strong&gt;break key ordering&lt;/strong&gt; (keys may re-hash to new partitions).
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Plan partition counts in advance. Increase only when unavoidable.&lt;/p&gt;
&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Partitions = &lt;strong&gt;scaling + ordering + parallelism&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;They allow Kafka to distribute work across consumers and brokers.
&lt;/li&gt;
&lt;li&gt;The number of partitions directly impacts &lt;strong&gt;performance, cost, and design trade-offs&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Pick partition counts carefully: balance &lt;strong&gt;parallelism vs overhead&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  3. Log
&lt;/h2&gt;

&lt;p&gt;At the core of Kafka is the &lt;strong&gt;log&lt;/strong&gt; — an append-only data structure where each topic-partition maintains a sequential list of records. The log underpins durability, ordering, and replayability in Kafka.  &lt;/p&gt;
&lt;h3&gt;
  
  
  Log Fundamentals
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Append-only&lt;/strong&gt;: Producers write new records only at the end.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sequential reads&lt;/strong&gt;: Consumers read messages by offset in order.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Immutability&lt;/strong&gt;: Records are never modified once written.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ordering&lt;/strong&gt;: Within a partition, offsets guarantee strict ordering.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Durability&lt;/strong&gt;: Backed by disk with efficient sequential writes and OS page cache.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Simplifies recovery and replay by ensuring deterministic ordering.&lt;br&gt;
⚠️ Updates or deletes are handled via &lt;strong&gt;compaction&lt;/strong&gt; or &lt;strong&gt;tombstones&lt;/strong&gt;, not in-place mutation.  &lt;/p&gt;
&lt;h3&gt;
  
  
  Partition as a Folder
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Each partition maps to a &lt;strong&gt;directory&lt;/strong&gt; on disk (e.g., &lt;code&gt;/var/lib/kafka/volumes/kafka_data/_data/order-0&lt;/code&gt;).
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Keeps partition data isolated for replication and recovery.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inside a Partition Directory&lt;/strong&gt;  &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;File Name&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;*.log&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Stores Kafka records (key-value pairs).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;*.index&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Maps offsets to byte positions in the &lt;code&gt;.log&lt;/code&gt; file.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;*.timeindex&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Maps timestamps to offsets for time-based lookups.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;leader-epoch-checkpoint&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tracks leader epochs for replication consistency.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;partition.metadata&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Stores partition-level configuration or state.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h3&gt;
  
  
  Log Lifecycle
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;As data grows, Kafka rolls logs into &lt;strong&gt;segments&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Each segment has a &lt;code&gt;.log&lt;/code&gt;, &lt;code&gt;.index&lt;/code&gt;, and &lt;code&gt;.timeindex&lt;/code&gt; file.
&lt;/li&gt;
&lt;li&gt;New messages go into the &lt;strong&gt;active segment&lt;/strong&gt; (latest &lt;code&gt;.log&lt;/code&gt;).
&lt;/li&gt;
&lt;li&gt;Old segments can be safely deleted or compacted based on retention rules.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt; (partition &lt;code&gt;order-0&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;00000000000000000000.log        → Log segment storing the actual messages
00000000000000000000.index      → Offset index for fast lookup of records
00000000000000000000.timeindex  → Timestamp index for time-based queries
leader-epoch-checkpoint         → Tracks changes in partition leadership
partition.metadata              → Metadata about the partition configuration
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As more data arrives and the active segment grows beyond the configured segment size, Kafka rolls over to a new segment. Each segment file is named after the offset of its first record, zero-padded to 20 digits:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;00000000000000368769.log
00000000000000368769.index
00000000000000368769.timeindex
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
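&lt;p&gt;The naming convention itself is simple: a segment’s base name is the offset of its first record, zero-padded to 20 digits. A small sketch:&lt;/p&gt;

```python
# Conceptual sketch: derive a Kafka segment's file names from its base
# offset (the offset of the first record stored in that segment).
def segment_files(base_offset: int) -> list:
    name = f"{base_offset:020d}"  # zero-padded to 20 digits
    return [f"{name}.log", f"{name}.index", f"{name}.timeindex"]

print(segment_files(368769))
# ['00000000000000368769.log', '00000000000000368769.index', '00000000000000368769.timeindex']
```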



&lt;h3&gt;
  
  
  Retention and Compaction
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kafka does not keep logs forever → policies determine retention.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Retention Policies&lt;/strong&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Time-based&lt;/strong&gt;: Delete records older than &lt;code&gt;retention.ms&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Size-based&lt;/strong&gt;: Delete when total log size exceeds &lt;code&gt;retention.bytes&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compaction&lt;/strong&gt;: Retain only the latest value per key.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Retention prevents unbounded disk usage.&lt;br&gt;
⚠️ Aggressive retention can delete records needed for replay or lagging consumers.  &lt;/p&gt;
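&lt;p&gt;Time-based retention works at segment granularity: a whole segment becomes deletable once its newest record falls outside the retention window. A conceptual sketch:&lt;/p&gt;

```python
# Conceptual sketch of time-based retention: a segment is deletable
# when even its newest record is older than retention.ms.
def expired_segments(segments, now_ms, retention_ms):
    # segments: list of (base_offset, last_record_timestamp_ms)
    return [base for base, last_ts in segments
            if now_ms - last_ts > retention_ms]

segs = [(0, 1_000), (100, 30_000), (200, 99_000)]
print(expired_segments(segs, 100_000, 60_000))  # [0, 100]
```

Because deletion happens per segment, records can outlive `retention.ms` slightly: nothing is removed until the whole segment ages out.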
&lt;h3&gt;
  
  
  Performance Considerations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Segment size&lt;/strong&gt; and retention settings impact disk churn and log cleanup frequency.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disk throughput&lt;/strong&gt; and filesystem tuning (XFS recommended) directly affect performance.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consumer lag&lt;/strong&gt; → large replay windows may require higher retention to allow catch-up.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ SSDs improve latency, but sequential disk writes mean &lt;strong&gt;HDDs can still perform well&lt;/strong&gt;.&lt;br&gt;
⚠️ Misconfigured retention can either exhaust disk or delete needed data too quickly.  &lt;/p&gt;
&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;The Kafka log is:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Append-only&lt;/strong&gt; → simple and efficient for writes.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Segmented&lt;/strong&gt; → scalable and manageable on disk.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retained or compacted&lt;/strong&gt; → supports both replayability and bounded storage.
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Proper tuning of &lt;strong&gt;segment size, retention, and compaction&lt;/strong&gt; ensures Kafka logs remain durable, performant, and aligned with application needs.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  4. Key and Log Compaction
&lt;/h2&gt;

&lt;p&gt;Kafka topics allow multiple messages with the same &lt;strong&gt;key&lt;/strong&gt;, and Kafka provides &lt;strong&gt;log compaction&lt;/strong&gt; to keep only the latest value per key. This design supports stateful stream processing, caching, and event sourcing use cases.  &lt;/p&gt;
&lt;h3&gt;
  
  
  Keys in Kafka
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kafka does not enforce &lt;strong&gt;uniqueness&lt;/strong&gt; of keys.
&lt;/li&gt;
&lt;li&gt;The key determines &lt;strong&gt;partition placement&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Same key → always routed to the same partition.
&lt;/li&gt;
&lt;li&gt;Ensures &lt;strong&gt;per-key ordering&lt;/strong&gt; of events.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Common Use Cases:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Updates to the same entity (e.g., user profile changes).
&lt;/li&gt;
&lt;li&gt;Event streams per entity (e.g., customer actions).
&lt;/li&gt;
&lt;li&gt;Stateful stream processing (aggregates or reducers).
&lt;/li&gt;
&lt;li&gt;Materialized views (latest state per key).
&lt;/li&gt;
&lt;li&gt;Caching or event sourcing (replay per entity).
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚠️ Keys don’t guarantee global uniqueness — they only ensure ordering within a partition.  &lt;/p&gt;
&lt;h3&gt;
  
  
  Log Compaction
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Log compaction&lt;/strong&gt; removes older records for a given key, retaining only the most recent value.
&lt;/li&gt;
&lt;li&gt;Enabled via &lt;strong&gt;&lt;code&gt;cleanup.policy=compact&lt;/code&gt;&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Benefits:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keeps the latest value per key for &lt;strong&gt;stateful applications&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Reduces disk usage while preserving key-level history.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚠️ Considerations:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Compaction is &lt;strong&gt;asynchronous&lt;/strong&gt; → old versions may remain temporarily.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Offsets and order are preserved&lt;/strong&gt; even after compaction.
&lt;/li&gt;
&lt;li&gt;Not a replacement for time/size-based retention.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Configurations&lt;/strong&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;cleanup.policy=compact&lt;/code&gt; → enable compaction.
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;min.cleanable.dirty.ratio&lt;/code&gt; → fraction of the log that must be uncompacted (“dirty”) before cleaning triggers.
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;min.compaction.lag.ms&lt;/code&gt; / &lt;code&gt;max.compaction.lag.ms&lt;/code&gt; → control delay before segments are compacted.
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;delete.retention.ms&lt;/code&gt; → how long tombstones are retained.
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Tombstones
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;tombstone&lt;/strong&gt; is a message with a key and a &lt;code&gt;null&lt;/code&gt; value.
&lt;/li&gt;
&lt;li&gt;Signals that all previous values for that key should be deleted during compaction.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"user123"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;How Tombstones Work:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Marks the key for deletion → tells Kafka “forget this key.”
&lt;/li&gt;
&lt;li&gt;During compaction, Kafka removes earlier messages with that key.
&lt;/li&gt;
&lt;li&gt;The tombstone itself is later removed after &lt;code&gt;delete.retention.ms&lt;/code&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;✅ Enables explicit &lt;strong&gt;deletes&lt;/strong&gt; in a compacted topic.&lt;br&gt;
⚠️ Consumers must be designed to interpret null values correctly.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Keys&lt;/strong&gt; define partitioning and enable ordered per-entity streams.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log compaction&lt;/strong&gt; ensures only the latest record per key is retained, reducing log size while preserving correctness.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tombstones&lt;/strong&gt; provide a mechanism for deleting keys in compacted topics.
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 keys + compaction allow Kafka to serve as both a durable event log and a state store for real-time applications.  &lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  5. Replication
&lt;/h2&gt;

&lt;p&gt;Replication in Kafka ensures resilience and fault tolerance by distributing partitions across multiple brokers. Each partition has one &lt;strong&gt;leader&lt;/strong&gt; and one or more &lt;strong&gt;followers&lt;/strong&gt; that maintain synchronized copies.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Leader and Followers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Leader&lt;/strong&gt; → handles all reads and writes for the partition.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Followers&lt;/strong&gt; → replicate the leader’s log asynchronously to stay in sync.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Clients always interact with the leader, simplifying producer/consumer logic.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Replication Factor
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Defines the number of copies per partition.
&lt;/li&gt;
&lt;li&gt;Common default: &lt;strong&gt;3 (1 leader, 2 followers)&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Higher replication factor = stronger fault tolerance.&lt;br&gt;
⚠️ Increases storage and network overhead.  &lt;/p&gt;

&lt;h3&gt;
  
  
  In-Sync Replicas (ISR)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;ISRs are replicas fully caught up with the leader.
&lt;/li&gt;
&lt;li&gt;Only ISRs are eligible for promotion during failover.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Ensures safe and consistent recovery.&lt;br&gt;
⚠️ Too many out-of-sync replicas weaken durability guarantees.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Leader Election and Failover
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;If the leader fails, a new one is chosen from the ISR set.
&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Controller&lt;/strong&gt; (see Section 6) coordinates this election.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Enables fast recovery and high availability.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Consistency vs Latency Trade-offs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;acks=all&lt;/code&gt;&lt;/strong&gt; → strongest durability. Leader waits for all ISR acknowledgments.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;acks=1&lt;/code&gt;&lt;/strong&gt; → leader-only acknowledgment. Faster writes, but less durable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚠️ &lt;strong&gt;More replicas = More safety&lt;/strong&gt;, but also higher cost and latency.&lt;/p&gt;
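&lt;p&gt;The acknowledgment modes reduce to a simple decision rule, sketched here conceptually (this is not broker code, just the commit condition for each mode):&lt;/p&gt;

```python
# Conceptual sketch of Kafka's acks modes: when does the leader
# consider a write successful?
def is_committed(acks_received, isr, acks_mode):
    # acks=0: fire-and-forget, success is assumed immediately.
    if acks_mode == "0":
        return True
    # acks=1: success once the leader has written the record locally.
    if acks_mode == "1":
        return "leader" in acks_received
    # acks=all: success only once every in-sync replica has acked.
    return isr.issubset(acks_received)

isr = {"leader", "follower-1", "follower-2"}
print(is_committed({"leader"}, isr, "all"))  # False: followers pending
print(is_committed({"leader", "follower-1", "follower-2"}, isr, "all"))  # True
```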

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;Replication provides:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High availability&lt;/strong&gt; through leader/follower design.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Durability&lt;/strong&gt; via multiple replicas and ISRs.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fault tolerance&lt;/strong&gt; with automatic leader election.
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Balance &lt;strong&gt;safety&lt;/strong&gt; and &lt;strong&gt;performance&lt;/strong&gt; by adjusting replication and acknowledgments.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  6. Controller
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Kafka Controller&lt;/strong&gt; is a special broker role that manages &lt;strong&gt;cluster-wide metadata and coordination&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
In modern &lt;strong&gt;KRaft mode (Kafka Raft)&lt;/strong&gt;, controllers form a &lt;strong&gt;quorum&lt;/strong&gt; that replaces ZooKeeper, ensuring metadata consistency and high availability.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Metadata Management
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Tracks topics, partitions, broker registrations, and configurations.
&lt;/li&gt;
&lt;li&gt;Persists updates in the internal metadata log &lt;strong&gt;&lt;code&gt;__cluster_metadata&lt;/code&gt;&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Ensures all brokers share a consistent view of the cluster.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Leader Election
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Coordinates &lt;strong&gt;partition leader elections&lt;/strong&gt; when brokers fail or join.
&lt;/li&gt;
&lt;li&gt;Relies on the ISR set maintained by replication (see Section 5).
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Keeps partitions highly available with minimal downtime.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Partition Assignment
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Distributes partitions across brokers for load balancing.
&lt;/li&gt;
&lt;li&gt;Reassigns partitions during rebalances, broker failures, or cluster expansion.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚠️ Frequent reassignments add overhead; prefer stable membership.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Quorum Coordination (KRaft)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Controllers form a &lt;strong&gt;Raft quorum&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;One acts as the &lt;strong&gt;active leader&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Others are &lt;strong&gt;followers&lt;/strong&gt;, replicating metadata changes.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;✅ Provides fault tolerance without external ZooKeeper.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Cluster Health and Recovery
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Detects broker failures and updates cluster state.
&lt;/li&gt;
&lt;li&gt;Removes failed brokers from the ISR (in coordination with replication).
&lt;/li&gt;
&lt;li&gt;Triggers &lt;strong&gt;leader re-election&lt;/strong&gt; for affected partitions.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Enables rapid self-healing and resilience.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Active vs. Follower Controllers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Active Controller (Leader)&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Makes cluster-wide decisions:
&lt;/li&gt;
&lt;li&gt;Runs leader elections.
&lt;/li&gt;
&lt;li&gt;Updates ISR lists.
&lt;/li&gt;
&lt;li&gt;Tracks broker registrations and failures.
&lt;/li&gt;
&lt;li&gt;Applies config changes (topics, ACLs, quotas).
&lt;/li&gt;
&lt;li&gt;Persists changes in &lt;strong&gt;&lt;code&gt;__cluster_metadata&lt;/code&gt;&lt;/strong&gt;, replicated to followers.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;👉 Functions as the &lt;strong&gt;“cluster brain.”&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Follower Controllers&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Replicate metadata log entries from the active controller.
&lt;/li&gt;
&lt;li&gt;Do not make independent decisions.
&lt;/li&gt;
&lt;li&gt;Stay ready to take over if the active controller fails.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;👉 Serve as &lt;strong&gt;“standby brains.”&lt;/strong&gt;  &lt;/p&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;The Controller is the &lt;strong&gt;control plane&lt;/strong&gt; of Kafka:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintains &lt;strong&gt;metadata consistency&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Runs &lt;strong&gt;leader elections&lt;/strong&gt; based on ISR information.
&lt;/li&gt;
&lt;li&gt;Coordinates &lt;strong&gt;partition assignment&lt;/strong&gt; and cluster state changes.
&lt;/li&gt;
&lt;li&gt;In KRaft mode, controllers use Raft quorum replication, removing ZooKeeper.
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Together with &lt;strong&gt;Replication (Section 5)&lt;/strong&gt;, the Controller ensures Kafka remains highly available, consistent, and fault-tolerant.  &lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  7. Producer
&lt;/h2&gt;

&lt;p&gt;Producers are responsible for reliable, ordered, and efficient delivery of messages to Kafka topics. Their configuration balances durability, ordering, latency, and resource usage through several key mechanisms.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Durability and Acknowledgments (acks)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Producers control how many broker acknowledgments are required before a send is considered successful.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;acks=0&lt;/code&gt; → fire-and-forget, lowest latency, no durability.
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;acks=1&lt;/code&gt; → leader acknowledgment only, balances latency and durability.
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;acks=all&lt;/code&gt; → requires leader + ISR acknowledgment, strongest durability.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;✅ Use &lt;code&gt;acks=all&lt;/code&gt; for critical data.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Ordering and Retries
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kafka producers retry failed sends automatically.
&lt;/li&gt;
&lt;li&gt;Retries can break ordering if multiple requests are in flight.
&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;max.in.flight.requests.per.connection=1&lt;/code&gt; to strictly preserve order (with idempotence enabled, ordering is preserved with up to 5 in-flight requests).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Idempotence&lt;/strong&gt; (&lt;code&gt;enable.idempotence=true&lt;/code&gt;) ensures retries don’t produce duplicates.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Combine retries + idempotence for duplicate-free, ordered delivery within a partition; end-to-end exactly-once also requires transactions (Section 11).  &lt;/p&gt;
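&lt;p&gt;Conceptually, idempotence works because the broker tracks a sequence number per producer and partition and discards retried batches it has already written. A minimal sketch:&lt;/p&gt;

```python
# Conceptual sketch of idempotent producing: the broker remembers the
# last sequence number per (producer_id, partition) and drops retries
# of batches it has already appended.
class Broker:
    def __init__(self):
        self.last_seq = {}  # (producer_id, partition) -> last seq seen
        self.log = []

    def append(self, producer_id, partition, seq, record):
        key = (producer_id, partition)
        if self.last_seq.get(key, -1) >= seq:
            return "duplicate"  # retry of an already-written batch
        self.last_seq[key] = seq
        self.log.append(record)
        return "appended"

b = Broker()
print(b.append("p1", 0, 0, "order-created"))  # appended
print(b.append("p1", 0, 0, "order-created"))  # duplicate (retry ignored)
print(b.log)  # ['order-created']
```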

&lt;h3&gt;
  
  
  Batching and Latency Trade-offs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Producers buffer messages into batches before sending.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;batch.size&lt;/code&gt;&lt;/strong&gt; controls max size of a batch in bytes.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;linger.ms&lt;/code&gt;&lt;/strong&gt; sets how long to wait before sending a partially full batch.

&lt;ul&gt;
&lt;li&gt;Larger batches / higher linger → better throughput, higher latency.
&lt;/li&gt;
&lt;li&gt;Smaller batches / lower linger → lower latency, reduced throughput.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;✅ Tune for workload: real-time systems prefer low latency; batch pipelines prefer throughput.  &lt;/p&gt;
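&lt;p&gt;The trade-off can be modeled in a few lines. This is a rough sketch of the rule "a record waits until its batch fills or &lt;code&gt;linger.ms&lt;/code&gt; expires, whichever comes first", not the client's actual internals.&lt;/p&gt;

```python
def batch_send_delay_ms(batch_size_bytes, linger_ms, msg_bytes, msgs_per_sec):
    """Estimate how long a record waits before its batch is sent:
    either the batch fills up, or linger.ms expires, whichever is first."""
    bytes_per_ms = msg_bytes * msgs_per_sec / 1000.0
    fill_time_ms = batch_size_bytes / bytes_per_ms
    return min(fill_time_ms, linger_ms)

# Throughput-oriented: big batches, generous linger (batch fills in 64 ms,
# so the 50 ms linger is the binding limit).
print(batch_send_delay_ms(65536, 50, 1024, 1000))  # 50
# Latency-oriented: a tiny linger forces near-immediate sends.
print(batch_send_delay_ms(65536, 1, 1024, 1000))   # 1
```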

&lt;h3&gt;
  
  
  Compression
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Supported codecs: &lt;code&gt;gzip&lt;/code&gt;, &lt;code&gt;snappy&lt;/code&gt;, &lt;code&gt;lz4&lt;/code&gt;, &lt;code&gt;zstd&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;Compression applies per batch, saving bandwidth and storage. &lt;/li&gt;
&lt;li&gt;Default is &lt;code&gt;none&lt;/code&gt;. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;gzip&lt;/code&gt; achieves high compression ratios but costs more CPU for compression/decompression.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Use &lt;code&gt;lz4&lt;/code&gt; or &lt;code&gt;zstd&lt;/code&gt; for a good speed/ratio balance.  &lt;/p&gt;
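&lt;p&gt;The bandwidth saving is easy to demonstrate with Python's standard-library &lt;code&gt;gzip&lt;/code&gt; on a batch of similar JSON events. This is only an illustration of why per-batch compression pays off; in Kafka the producer applies the configured codec to each batch.&lt;/p&gt;

```python
import gzip
import json

# A "batch" of similar JSON events compresses well because field names
# and structure repeat across records.
batch = b"".join(
    json.dumps({"order_id": i, "customer": "alice", "status": "CREATED"}).encode()
    for i in range(100)
)
compressed = gzip.compress(batch)

ratio = len(batch) / len(compressed)
print(len(batch), len(compressed), round(ratio, 1))
```

&lt;p&gt;Runs of similar records routinely compress several-fold, which is exactly the bandwidth and storage saving described above.&lt;/p&gt;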

&lt;h3&gt;
  
  
  Resource Limits and Buffering
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;buffer.memory&lt;/code&gt;&lt;/strong&gt;: max memory available for unsent records.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;max.block.ms&lt;/code&gt;&lt;/strong&gt;: how long &lt;code&gt;send()&lt;/code&gt; will block when buffer is full.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;max.request.size&lt;/code&gt;&lt;/strong&gt;: prevents oversized requests.
&lt;/li&gt;
&lt;li&gt;These settings protect the producer and broker from overload.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Monitor producer metrics (buffer exhaustion, errors) to detect bottlenecks.  &lt;/p&gt;
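&lt;p&gt;A minimal sketch of the back-pressure behavior. The &lt;code&gt;ProducerBuffer&lt;/code&gt; class is hypothetical; a real client blocks inside &lt;code&gt;send()&lt;/code&gt; while the background sender drains the buffer, then raises a timeout once &lt;code&gt;max.block.ms&lt;/code&gt; elapses.&lt;/p&gt;

```python
import time

# Toy model: send() blocks up to max.block.ms when the buffer of unsent
# records is full, then fails instead of hanging forever.
class ProducerBuffer:
    def __init__(self, buffer_slots, max_block_ms):
        self.slots = buffer_slots
        self.pending = []
        self.max_block_ms = max_block_ms

    def send(self, record):
        if len(self.pending) == self.slots:
            time.sleep(self.max_block_ms / 1000)   # wait for the sender thread
            if len(self.pending) == self.slots:    # still full: give up
                raise TimeoutError("buffer.memory exhausted")
        self.pending.append(record)

buf = ProducerBuffer(buffer_slots=2, max_block_ms=10)
buf.send("a")
buf.send("b")
try:
    buf.send("c")
except TimeoutError as e:
    print(e)  # buffer.memory exhausted
```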

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;Producer tuning is about balancing:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Durability vs. latency&lt;/strong&gt; (&lt;code&gt;acks&lt;/code&gt;).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ordering vs. throughput&lt;/strong&gt; (retries, in-flight requests).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CPU vs. I/O efficiency&lt;/strong&gt; (compression, batching).
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 With correct configuration, producers achieve high throughput without sacrificing reliability.  &lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  8. Consumer
&lt;/h2&gt;

&lt;p&gt;Consumers are responsible for reading messages from topics, tracking their progress, and coordinating with other consumers in a group. Their configuration impacts delivery guarantees, throughput, latency, fault tolerance, and ordering.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Offset Management and Delivery Guarantees
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automatic commits&lt;/strong&gt; (&lt;code&gt;enable.auto.commit=true&lt;/code&gt;) → simple, but only &lt;em&gt;at-least-once&lt;/em&gt; delivery since commits are decoupled from processing.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manual commits&lt;/strong&gt; (&lt;code&gt;commitSync&lt;/code&gt; / &lt;code&gt;commitAsync&lt;/code&gt;) → give precise control to commit only after successful processing.
&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;exactly-once semantics&lt;/strong&gt;, bind offset commits to producer transactions; manual synchronous commits alone still give &lt;em&gt;at-least-once&lt;/em&gt; delivery.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;auto.offset.reset&lt;/code&gt;&lt;/strong&gt; determines startup behavior if no committed offset exists:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;earliest&lt;/code&gt; → start from the beginning (useful for replays).
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;latest&lt;/code&gt; → only consume new records.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;✅ Use manual commits or transactional commits in critical pipelines.  &lt;/p&gt;
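&lt;p&gt;The manual-commit pattern can be sketched like this. The &lt;code&gt;OffsetStore&lt;/code&gt; harness is hypothetical; it mimics a &lt;code&gt;commitSync&lt;/code&gt;-style call made only after processing succeeds, which is what gives at-least-once delivery.&lt;/p&gt;

```python
# At-least-once loop: process first, commit after. If the worker crashes
# between processing and committing, the record is re-read, never skipped.
class OffsetStore:
    def __init__(self):
        self.committed = {}   # (topic, partition) to next offset to read

    def commit(self, tp, next_offset):
        self.committed[tp] = next_offset

def consume(records, store, tp, process):
    start = store.committed.get(tp, 0)
    for offset in range(start, len(records)):
        process(records[offset])      # side effects happen first
        store.commit(tp, offset + 1)  # commit only after success

seen = []
store = OffsetStore()
consume(["a", "b", "c"], store, ("orders", 0), seen.append)
print(seen, store.committed[("orders", 0)])  # ['a', 'b', 'c'] 3
```

&lt;p&gt;Re-running &lt;code&gt;consume&lt;/code&gt; after a clean commit processes nothing new, because the stored offset already points past the last record.&lt;/p&gt;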

&lt;h3&gt;
  
  
  Partition Assignment and Rebalancing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Within one consumer group, each partition is assigned to &lt;strong&gt;at most one member&lt;/strong&gt; at a time.
&lt;/li&gt;
&lt;li&gt;Multiple consumer groups can read the same partition independently.
&lt;/li&gt;
&lt;li&gt;Assignment strategies:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Range&lt;/strong&gt; → contiguous partition sets.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RoundRobin&lt;/strong&gt; → even distribution across members.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sticky&lt;/strong&gt; → minimizes partition movement during rebalances.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Frequent join/leave events → trigger rebalances and pause consumption.
&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;✅ Keep membership stable to reduce churn.&lt;br&gt;&lt;br&gt;
⚠️ Tune &lt;strong&gt;&lt;code&gt;session.timeout.ms&lt;/code&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;code&gt;heartbeat.interval.ms&lt;/code&gt;&lt;/strong&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Higher values tolerate long GC pauses or transient work.
&lt;/li&gt;
&lt;li&gt;Lower values detect failures faster but may cause false positives.
&lt;/li&gt;
&lt;/ul&gt;
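&lt;p&gt;The Range and RoundRobin strategies can be sketched in a few lines. These are simplified re-implementations for illustration, not the client's actual assignors.&lt;/p&gt;

```python
def range_assign(partitions, members):
    """Range: each member gets a contiguous block; earlier members absorb the remainder."""
    members = sorted(members)
    n, extra = divmod(len(partitions), len(members))
    result = {}
    for idx, m in enumerate(members):
        start = idx * n + min(idx, extra)
        size = n + min(1, max(0, extra - idx))
        result[m] = partitions[start:start + size]
    return result

def round_robin_assign(partitions, members):
    """RoundRobin: deal partitions out one at a time for an even spread."""
    members = sorted(members)
    return {m: partitions[idx::len(members)] for idx, m in enumerate(members)}

parts = list(range(7))
print(range_assign(parts, ["c1", "c2", "c3"]))
print(round_robin_assign(parts, ["c1", "c2", "c3"]))
```

&lt;p&gt;With 7 partitions and 3 members, Range gives &lt;code&gt;c1&lt;/code&gt; the contiguous block 0..2 while RoundRobin spreads them as 0, 3, 6.&lt;/p&gt;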

&lt;h3&gt;
  
  
  Poll and Fetch Tuning
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;max.poll.records&lt;/code&gt;&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Increase for higher throughput.
&lt;/li&gt;
&lt;li&gt;Reduce to limit per-iteration processing and avoid long loops.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;&lt;code&gt;max.partition.fetch.bytes&lt;/code&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;code&gt;fetch.max.wait.ms&lt;/code&gt;&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Larger values → better for bulk processing.
&lt;/li&gt;
&lt;li&gt;Smaller values → better for low-latency use cases.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;&lt;code&gt;fetch.min.bytes&lt;/code&gt;&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Set higher to batch more data (throughput).
&lt;/li&gt;
&lt;li&gt;Set to &lt;code&gt;1&lt;/code&gt; for immediate returns (latency).
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;The poll loop must call &lt;code&gt;poll()&lt;/code&gt; frequently:

&lt;ul&gt;
&lt;li&gt;Long processing requires increasing &lt;strong&gt;&lt;code&gt;max.poll.interval.ms&lt;/code&gt;&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Handle rebalance callbacks to stay responsive.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;✅ Balance throughput vs. latency depending on the workload.  &lt;/p&gt;
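&lt;p&gt;Two illustrative profiles make the trade-off concrete. The keys are standard consumer configs, but the values are assumptions for the sketch, not recommendations.&lt;/p&gt;

```python
# Throughput-oriented: large polls, wait for data to accumulate.
throughput_profile = {
    "max.poll.records": 2000,
    "fetch.min.bytes": 1048576,        # wait for about 1 MB before returning
    "fetch.max.wait.ms": 500,
    "max.partition.fetch.bytes": 4194304,
}

# Latency-oriented: small polls, return as soon as any data exists.
latency_profile = {
    "max.poll.records": 100,
    "fetch.min.bytes": 1,
    "fetch.max.wait.ms": 10,
    "max.partition.fetch.bytes": 1048576,
}

# Same knobs, opposite goals.
print(sorted(throughput_profile) == sorted(latency_profile))  # True
```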

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;Consumer tuning balances:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Delivery guarantees vs. simplicity&lt;/strong&gt; (auto vs manual commits).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Partition stability vs. flexibility&lt;/strong&gt; (assignment and rebalance strategies).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Throughput vs. latency&lt;/strong&gt; (poll/fetch tuning).
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Use manual or transactional commits for critical pipelines, keep consumer group membership stable, and tune poll/fetch settings to balance throughput with latency.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  9. Offset Tracking
&lt;/h2&gt;

&lt;p&gt;An &lt;strong&gt;offset&lt;/strong&gt; is a position marker that tells a consumer &lt;em&gt;which record it has read up to&lt;/em&gt; in a partition, and where to resume on restart or after a failure. Kafka tracks offsets &lt;strong&gt;per partition, per consumer group&lt;/strong&gt;, allowing multiple consumers to share work safely.  &lt;/p&gt;

&lt;h3&gt;
  
  
  How Offset Tracking Works
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Consumer Pull Model&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consumers request data from partitions starting from a specific offset.
&lt;/li&gt;
&lt;li&gt;They control whether to begin from &lt;code&gt;earliest&lt;/code&gt;, &lt;code&gt;latest&lt;/code&gt;, or a committed offset.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Offset Commitment&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consumers save progress by committing offsets, either automatically or manually.
&lt;/li&gt;
&lt;li&gt;Committed offsets are stored in Kafka’s internal topic &lt;strong&gt;&lt;code&gt;__consumer_offsets&lt;/code&gt;&lt;/strong&gt;, which is partitioned and replicated.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;✅ Automatic commits are simple for &lt;em&gt;at-least-once&lt;/em&gt; delivery.&lt;br&gt;
⚠️ Manual commits are safer for critical processing, but require more application logic.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Consumer Position vs. Committed Offset
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Consumer Position&lt;/strong&gt; → the &lt;strong&gt;next&lt;/strong&gt; record the consumer will read (held in memory).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Committed Offset&lt;/strong&gt; → the last offset safely stored as a checkpoint.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[00][01][02][03][04][05][06][07][08][09][10][11]
                                      ^-- committed = 09 (resume here)
                                              ^-- position = 11 (next to read)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 If the consumer crashes, it restarts from the &lt;strong&gt;committed offset&lt;/strong&gt;, not the in-memory position.&lt;br&gt;&lt;br&gt;
This means it may &lt;strong&gt;re-read some records&lt;/strong&gt; but won’t skip any.  &lt;/p&gt;
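&lt;p&gt;A tiny simulation of the diagram above shows why duplicates, not gaps, are the failure mode after a crash:&lt;/p&gt;

```python
# Crash-and-resume sketch matching the diagram: committed = 9, position = 11.
log = list(range(12))          # offsets 00..11
committed = 9                  # last checkpoint written to __consumer_offsets
position = 11                  # in-memory only: next record to read

# The consumer crashes; the in-memory position is lost. On restart it
# resumes from the committed offset, so offsets 9 and 10 are read again.
resumed = log[committed:]
print(resumed)  # [9, 10, 11]
```

&lt;p&gt;Offsets 9 and 10 are re-delivered, but nothing between the checkpoint and the crash point is ever skipped.&lt;/p&gt;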

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Offsets are &lt;strong&gt;per-partition position markers&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Kafka persists committed offsets in the &lt;code&gt;__consumer_offsets&lt;/code&gt; topic.
&lt;/li&gt;
&lt;li&gt;The gap between &lt;strong&gt;position vs. committed offset&lt;/strong&gt; provides fault tolerance, but may cause duplicates.
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Correct offset management is essential for delivery guarantees (&lt;em&gt;at-least-once, at-most-once, exactly-once&lt;/em&gt;).  &lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  10. Rebalance
&lt;/h2&gt;

&lt;p&gt;Rebalancing is the process where Kafka’s &lt;strong&gt;Group Coordinator&lt;/strong&gt; redistributes partitions among consumers in a &lt;strong&gt;consumer group&lt;/strong&gt; whenever the workload relationship changes.  &lt;/p&gt;

&lt;h3&gt;
  
  
  When Rebalancing Happens
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A new consumer joins the group (more parallelism).
&lt;/li&gt;
&lt;li&gt;An existing consumer leaves or fails (load must be reassigned).
&lt;/li&gt;
&lt;li&gt;A topic’s partitions increase (new partitions must be assigned).
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How Rebalancing Works
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Group Coordinator detects a change in group membership.
&lt;/li&gt;
&lt;li&gt;All consumers stop fetching temporarily.
&lt;/li&gt;
&lt;li&gt;Coordinator calculates a new partition assignment.
&lt;/li&gt;
&lt;li&gt;Each consumer receives its updated assignment.
&lt;/li&gt;
&lt;li&gt;Consumers resume reading from their assigned offsets.
&lt;/li&gt;
&lt;/ol&gt;
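&lt;p&gt;The effect of a membership change can be sketched with a toy round-robin assignor (a stand-in for the coordinator's real strategy):&lt;/p&gt;

```python
def assign(partitions, members):
    """Toy round-robin assignment, standing in for the coordinator's strategy."""
    members = sorted(members)
    return {m: partitions[i::len(members)] for i, m in enumerate(members)}

parts = list(range(6))
group = ["c1", "c2"]
before = assign(parts, group)

group.append("c3")            # a new consumer joins: the coordinator rebalances
after = assign(parts, group)  # all members pause, then receive new assignments

print(before)  # {'c1': [0, 2, 4], 'c2': [1, 3, 5]}
print(after)   # {'c1': [0, 3], 'c2': [1, 4], 'c3': [2, 5]}
```

&lt;p&gt;Every partition moved for &lt;code&gt;c1&lt;/code&gt; and &lt;code&gt;c2&lt;/code&gt; here, which is exactly the churn a sticky assignor tries to avoid.&lt;/p&gt;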

&lt;blockquote&gt;
&lt;p&gt;💡 Minimize unnecessary group membership changes and control partition counts carefully to reduce rebalance frequency and consumer downtime.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  11. Exactly Once and Transactions
&lt;/h2&gt;

&lt;p&gt;Kafka’s &lt;strong&gt;Exactly-Once Semantics (EOS)&lt;/strong&gt; ensures that messages are processed &lt;em&gt;once and only once&lt;/em&gt;, even in the face of retries or failures. This combines idempotent production, transactions, and offset commits into a unified model for reliable stream processing.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Idempotent Producer
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;When &lt;strong&gt;&lt;code&gt;enable.idempotence=true&lt;/code&gt;&lt;/strong&gt;, the producer is assigned a &lt;strong&gt;Producer ID (PID)&lt;/strong&gt; and per-partition sequence numbers.
&lt;/li&gt;
&lt;li&gt;Retries are deduplicated at the broker using these sequence numbers.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Guarantees &lt;em&gt;no duplicates&lt;/em&gt; in a single partition, even under retries.&lt;br&gt;
⚠️ Does not guarantee atomicity across multiple partitions or topics by itself.  &lt;/p&gt;
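&lt;p&gt;Broker-side deduplication can be sketched as follows. This is a toy model: the real broker tracks recent sequence numbers per producer and partition and rejects out-of-range ones, but the retry-reuses-the-same-sequence idea is the same.&lt;/p&gt;

```python
# With idempotence on, each batch carries a (producer_id, sequence) pair
# per partition; a retry reuses the sequence, so it is appended only once.
class PartitionLog:
    def __init__(self):
        self.records = []
        self.last_seq = {}     # producer_id to last sequence appended

    def append(self, producer_id, seq, record):
        if self.last_seq.get(producer_id, -1) == seq:
            return "duplicate"          # retry of an already-appended batch
        self.records.append(record)
        self.last_seq[producer_id] = seq
        return "appended"

log = PartitionLog()
print(log.append("pid-7", 0, "order-created"))  # appended
print(log.append("pid-7", 0, "order-created"))  # duplicate (network retry)
print(log.append("pid-7", 1, "order-paid"))     # appended
print(len(log.records))                          # 2
```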

&lt;h3&gt;
  
  
  Transactional Producer
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;transactional producer&lt;/strong&gt; groups multiple writes and offset commits into a single atomic unit.
&lt;/li&gt;
&lt;li&gt;Either all messages + offset commits succeed, or none do.
&lt;/li&gt;
&lt;li&gt;Controlled via a stable &lt;strong&gt;&lt;code&gt;transactional.id&lt;/code&gt;&lt;/strong&gt;, which enables fencing (old producers with the same ID are invalidated).
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Provides atomic &lt;em&gt;read → process → write&lt;/em&gt; semantics.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Transaction Coordinator
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A special broker component that manages transaction state.
&lt;/li&gt;
&lt;li&gt;Persists transaction metadata in the internal topic &lt;strong&gt;&lt;code&gt;__transaction_state&lt;/code&gt;&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Ensures commit/abort decisions are coordinated for each &lt;code&gt;transactional.id&lt;/code&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚠️ Coordinator bottlenecks can occur if too many producers use transactions with wide scope.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Consumer Isolation Levels
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Consumers control visibility into transactional writes via &lt;strong&gt;&lt;code&gt;isolation.level&lt;/code&gt;&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;read_uncommitted&lt;/code&gt; → sees all records (including aborted transactions).
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;read_committed&lt;/code&gt; → sees only records from successfully committed transactions.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;✅ Use &lt;code&gt;read_committed&lt;/code&gt; in pipelines that require strict correctness.  &lt;/p&gt;
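&lt;p&gt;What &lt;code&gt;read_committed&lt;/code&gt; filtering does can be sketched with a toy in-memory model; real consumers rely on transaction markers written into the log rather than a lookup table.&lt;/p&gt;

```python
# Records interleaved from two transactions; t2 was aborted.
records = [
    {"offset": 0, "txn": "t1", "value": "debit"},
    {"offset": 1, "txn": "t2", "value": "credit"},
    {"offset": 2, "txn": "t1", "value": "debit-2"},
]
txn_state = {"t1": "committed", "t2": "aborted"}

def fetch(records, isolation):
    """Toy fetch: read_committed hides records from aborted transactions."""
    if isolation == "read_uncommitted":
        return records
    return [r for r in records if txn_state[r["txn"]] == "committed"]

print([r["offset"] for r in fetch(records, "read_committed")])    # [0, 2]
print([r["offset"] for r in fetch(records, "read_uncommitted")])  # [0, 1, 2]
```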

&lt;h3&gt;
  
  
  Offsets in Transactions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;&lt;code&gt;sendOffsetsToTransaction&lt;/code&gt;&lt;/strong&gt; API binds offset commits to producer transactions.
&lt;/li&gt;
&lt;li&gt;Offsets are only committed if the producer transaction itself commits.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Ensures &lt;em&gt;exactly-once&lt;/em&gt; end-to-end semantics: messages are processed and offsets advanced atomically.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Idempotence&lt;/strong&gt; removes duplicates per partition.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transactions&lt;/strong&gt; extend atomicity across topics + offsets.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Coordinators&lt;/strong&gt; maintain transaction state.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Isolation levels&lt;/strong&gt; let consumers choose between speed (&lt;code&gt;read_uncommitted&lt;/code&gt;) and safety (&lt;code&gt;read_committed&lt;/code&gt;).
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Enable &lt;strong&gt;&lt;code&gt;enable.idempotence=true&lt;/code&gt;&lt;/strong&gt; by default, and use &lt;strong&gt;transactions&lt;/strong&gt; (&lt;code&gt;transactional.id&lt;/code&gt; + &lt;code&gt;sendOffsetsToTransaction&lt;/code&gt;) only when strict exactly-once guarantees across topics and offsets are required.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  12. Deployment
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Cluster Topology and Roles
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Separate &lt;strong&gt;controller&lt;/strong&gt; and &lt;strong&gt;broker&lt;/strong&gt; roles on dedicated nodes for production-scale clusters.
&lt;/li&gt;
&lt;li&gt;Run a &lt;strong&gt;controller-only quorum&lt;/strong&gt; of 3 or 5 nodes.

&lt;ul&gt;
&lt;li&gt;Three controllers are sufficient for moderate clusters.
&lt;/li&gt;
&lt;li&gt;Five controllers are preferred for larger clusters or higher availability needs.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Use &lt;strong&gt;broker-only nodes&lt;/strong&gt; for the data plane (producers and consumers).
&lt;/li&gt;

&lt;li&gt;Deploy at least three brokers and configure &lt;code&gt;replication.factor ≥ 3&lt;/code&gt; for critical topics.
&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Storage and Disks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;strong&gt;JBOD (Just a Bunch of Disks)&lt;/strong&gt; — no RAID. Present disks individually to brokers and let Kafka handle replication.
&lt;/li&gt;
&lt;li&gt;Prefer the &lt;strong&gt;XFS filesystem&lt;/strong&gt; tuned for large files; mount broker volumes with &lt;code&gt;noatime&lt;/code&gt; (or &lt;code&gt;relatime&lt;/code&gt; if atime tracking is required).
&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;HDDs&lt;/strong&gt; on brokers for high sequential throughput and cost efficiency.
Consider &lt;strong&gt;SSDs/NVMe&lt;/strong&gt; for controller nodes (metadata logs) or if your workloads involve heavy random reads or strict latency SLAs.&lt;/li&gt;
&lt;li&gt;Tune &lt;code&gt;log.segment.bytes&lt;/code&gt; and retention policies to manage the number of segments and control &lt;code&gt;mmap&lt;/code&gt; usage.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Memory, Heap, and OS Tuning
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Keep broker JVM heap &lt;strong&gt;small and fixed&lt;/strong&gt; (typically 4–8 GB). Leave the remaining RAM for the OS page cache.
&lt;/li&gt;
&lt;li&gt;Apply the &lt;strong&gt;RAM sizing rule&lt;/strong&gt;: provision enough RAM to buffer approximately 30 seconds of peak ingest throughput in the page cache.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;br&gt;
If ingest is 300 MB/s, you want ~9 GB RAM just for cache.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Formula&lt;/strong&gt;&lt;br&gt;
Required RAM for cache ≈ (ingest throughput in MB/s) × 30 seconds&lt;/p&gt;
&lt;/blockquote&gt;
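&lt;p&gt;The rule of thumb as code, reproducing the 300 MB/s example:&lt;/p&gt;

```python
def page_cache_ram_gb(ingest_mb_per_sec, seconds=30):
    """RAM needed to buffer roughly `seconds` of peak ingest in the page cache."""
    return ingest_mb_per_sec * seconds / 1024

print(page_cache_ram_gb(300))  # about 8.8, i.e. the ~9 GB from the example
```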

&lt;ul&gt;
&lt;li&gt;Raise &lt;code&gt;vm.max_map_count&lt;/code&gt; for large clusters with many partitions or segments (e.g., set to 262144 or higher when required).&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Formula&lt;/strong&gt;&lt;br&gt;
required_vm.max_map_count ≈ partitions_per_broker × segments_per_partition × 2&lt;/p&gt;
&lt;/blockquote&gt;
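&lt;p&gt;And the map-count estimate as code. The inputs here (4000 partitions, 50 segments each) are illustrative; 65530 is the common Linux default for &lt;code&gt;vm.max_map_count&lt;/code&gt;.&lt;/p&gt;

```python
def required_max_map_count(partitions_per_broker, segments_per_partition):
    """Each log segment maps a data file and an index, hence the factor 2."""
    return partitions_per_broker * segments_per_partition * 2

# 4000 partitions with 50 segments each needs far more mappings than the
# common 65530 default, so vm.max_map_count must be raised.
print(required_max_map_count(4000, 50))  # 400000
```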

&lt;ul&gt;
&lt;li&gt;Increase file descriptor limits (&lt;code&gt;ulimit -n&lt;/code&gt;) to at least 100k.
&lt;/li&gt;
&lt;li&gt;For networking, provision &lt;strong&gt;10Gbps NICs&lt;/strong&gt; for high-throughput clusters and tune socket buffers for cross–data center replication.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Availability, Replication, and Durability
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Configure &lt;code&gt;min.insync.replicas ≥ 2&lt;/code&gt; when &lt;code&gt;replication.factor = 3&lt;/code&gt; to ensure durability even if one replica fails.
&lt;/li&gt;
&lt;li&gt;Require producers to use &lt;code&gt;acks=all&lt;/code&gt; for critical topics to ensure writes are fully replicated before acknowledgment.
&lt;/li&gt;
&lt;li&gt;Enable &lt;strong&gt;rack awareness&lt;/strong&gt; (&lt;code&gt;broker.rack&lt;/code&gt;) so replicas are distributed across racks or availability zones for better fault tolerance.
&lt;/li&gt;
&lt;li&gt;Consider &lt;strong&gt;tiered storage&lt;/strong&gt; (e.g., S3 or HDFS) for offloading cold data while keeping hot data local to brokers.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Security and Networking
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Enable &lt;strong&gt;TLS encryption&lt;/strong&gt; for both client–broker and inter-broker communication.
&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;SASL authentication&lt;/strong&gt; (SCRAM, mTLS, or GSSAPI depending on your environment).
&lt;/li&gt;
&lt;li&gt;Apply &lt;strong&gt;Kafka ACLs&lt;/strong&gt; to enforce least-privilege access control.
&lt;/li&gt;
&lt;li&gt;Restrict broker ports to trusted networks and place brokers/controllers in &lt;strong&gt;private subnets&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Operations, Monitoring, and Alerting
&lt;/h3&gt;

&lt;p&gt;Kafka’s monitoring flow begins with JMX exposing internal metrics, which are collected by a Prometheus exporter and visualized through Grafana dashboards for real-time tracking and alerting.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Key Metrics to Track&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under-replicated or offline partitions&lt;/li&gt;
&lt;li&gt;Request latency across produce and fetch paths&lt;/li&gt;
&lt;li&gt;ISR size fluctuations and consumer lag&lt;/li&gt;
&lt;li&gt;Disk usage and I/O saturation&lt;/li&gt;
&lt;li&gt;GC pause duration and frequency&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Critical Alerts&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shrinking ISR or under-replicated partitions.
&lt;/li&gt;
&lt;li&gt;Offline or missing replicas.
&lt;/li&gt;
&lt;li&gt;Disk pressure or high utilization.
&lt;/li&gt;
&lt;li&gt;Long GC pauses.
&lt;/li&gt;
&lt;li&gt;Frequent rebalances.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  13. Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kafka is not just a queue&lt;/strong&gt;: it’s a &lt;strong&gt;distributed event streaming platform&lt;/strong&gt; for high-throughput, real-time data pipelines.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Core roles&lt;/strong&gt;: Producers publish, Consumers subscribe, Topics organize, and Partitions enable horizontal scalability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Immutable, ordered logs&lt;/strong&gt;: guarantee replayable data streams and predictable processing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replication and ISR&lt;/strong&gt;: leaders handle writes, followers stay synchronized to ensure fault tolerance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;KRaft replaces ZooKeeper&lt;/strong&gt;: simplifying cluster metadata management and deployment complexity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance is filesystem-driven&lt;/strong&gt;: sequential disk I/O, OS page cache, and batching give Kafka exceptional throughput.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exactly-once semantics (EOS)&lt;/strong&gt;: achieved through idempotent + transactional producers combined with committed offsets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Production readiness&lt;/strong&gt;: comes from careful tuning of partitions, replication factor, monitoring, and security controls.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  14. Conclusion
&lt;/h2&gt;

&lt;p&gt;Kafka has become the backbone of modern data systems. Its &lt;strong&gt;distributed log architecture&lt;/strong&gt; delivers scalability, fault tolerance, and speed—making it ideal for event-driven microservices, real-time analytics, and data pipelines.  &lt;/p&gt;

&lt;p&gt;By understanding &lt;strong&gt;core concepts&lt;/strong&gt; (topics, partitions, logs, replication, controllers) and applying &lt;strong&gt;best practices&lt;/strong&gt; in deployment and tuning, you can build &lt;strong&gt;robust, scalable, and future-proof systems&lt;/strong&gt; powered by Kafka.  &lt;/p&gt;




&lt;h2&gt;
  
  
  Appendix: Demo Project
&lt;/h2&gt;

&lt;p&gt;To complement the concepts explored in this article, I’ve built a hands-on demo project that puts Kafka’s architecture and transactional patterns into practice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Repository:&lt;/strong&gt; &lt;em&gt;&lt;a href="https://github.com/arata-x/springboot-kafka-cluster" rel="noopener noreferrer"&gt;Spring Boot Kafka Cluster&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This project showcases a production-grade Kafka setup running in &lt;strong&gt;KRaft&lt;/strong&gt; mode, integrated with &lt;strong&gt;Spring Boot&lt;/strong&gt; and &lt;strong&gt;PostgreSQL&lt;/strong&gt;. It includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A multi-node Kafka cluster with &lt;strong&gt;3 controllers&lt;/strong&gt; and &lt;strong&gt;3 brokers&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;A RESTful producer service that publishes events to Kafka&lt;/li&gt;
&lt;li&gt;Three consumer services demonstrating:

&lt;ul&gt;
&lt;li&gt;Manual acknowledgment&lt;/li&gt;
&lt;li&gt;Kafka transactions&lt;/li&gt;
&lt;li&gt;Database transactions&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;A PostgreSQL-backed persistence layer&lt;/li&gt;

&lt;li&gt;Docker Compose orchestration for easy startup&lt;/li&gt;

&lt;li&gt;Scripts for testing, error simulation, and direct Kafka publishing&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Whether you're exploring offset management, transactional guarantees, or deployment strategies, this demo gives you a practical playground to experiment with real-world Kafka patterns.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Use it as a reference, a starting point, or a sandbox to deepen your Kafka mastery.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>kafka</category>
    </item>
    <item>
      <title>Kafka Made Simple: A Hands-On Quickstart with Docker and Spring Boot</title>
      <dc:creator>Arata</dc:creator>
      <pubDate>Sat, 20 Sep 2025 07:56:42 +0000</pubDate>
      <link>https://forem.com/aratax/kafka-made-simple-a-hands-on-quickstart-with-docker-and-spring-boot-180i</link>
      <guid>https://forem.com/aratax/kafka-made-simple-a-hands-on-quickstart-with-docker-and-spring-boot-180i</guid>
      <description>&lt;p&gt;Apache Kafka is a distributed, durable, real-time event streaming platform. It goes beyond a message queue by providing scalability, persistence, and stream processing capabilities.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll quickly spin up Kafka with Docker, explore it with CLI tools, and integrate it into a Spring Boot application.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. What is Kafka?
&lt;/h2&gt;

&lt;p&gt;Apache Kafka is a &lt;strong&gt;distributed, durable, real-time event streaming platform&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
It was originally developed at LinkedIn and is now part of the Apache Software Foundation.&lt;br&gt;&lt;br&gt;
Kafka is designed for &lt;strong&gt;high-throughput, low-latency data pipelines, streaming analytics, and event-driven applications&lt;/strong&gt;.  &lt;/p&gt;
&lt;h3&gt;
  
  
  What is an Event?
&lt;/h3&gt;

&lt;p&gt;An &lt;strong&gt;event&lt;/strong&gt; is simply a &lt;strong&gt;record of something that happened&lt;/strong&gt; in the system.&lt;br&gt;&lt;br&gt;
Each event usually includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Key&lt;/strong&gt; → identifier (e.g., user ID, order ID).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Value&lt;/strong&gt; → the payload (e.g., “order created with total = $50”).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Timestamp&lt;/strong&gt; → when the event occurred.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example event:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"order-123"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"customer"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Alice"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"total"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-09-19T10:15:00Z"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What is an Event Streaming Platform?
&lt;/h3&gt;

&lt;p&gt;An &lt;strong&gt;event streaming platform&lt;/strong&gt; is a system designed to handle continuous flows of data — or &lt;em&gt;events&lt;/em&gt; — in real time.&lt;br&gt;&lt;br&gt;
Instead of working in batches (processing data after the fact), it allows applications to &lt;strong&gt;react as events happen&lt;/strong&gt;.&lt;/p&gt;


&lt;h2&gt;
  
  
  2. What Kafka Can Do
&lt;/h2&gt;

&lt;p&gt;Kafka is more than a message queue—it's a real-time event backbone for modern systems.&lt;/p&gt;
&lt;h3&gt;
  
  
  Messaging Like a Message Queue
&lt;/h3&gt;

&lt;p&gt;Kafka decouples producers and consumers, enabling asynchronous communication between services.&lt;br&gt;
&lt;strong&gt;Example&lt;/strong&gt;:&lt;br&gt;
A banking system publishes transaction events to Kafka. Fraud detection, ledger updates, and notification services consume these events independently.&lt;/p&gt;
&lt;h3&gt;
  
  
  Event Streaming
&lt;/h3&gt;

&lt;p&gt;Kafka streams data in real time, allowing systems to react instantly.&lt;br&gt;
&lt;strong&gt;Example&lt;/strong&gt;:&lt;br&gt;
An insurance platform streams claim events to trigger automated validation, underwriting checks, and customer updates in real time.&lt;/p&gt;
&lt;h3&gt;
  
  
  Data Integration
&lt;/h3&gt;

&lt;p&gt;Kafka Connect bridges Kafka with databases, cloud storage, and analytics platforms.&lt;br&gt;
&lt;strong&gt;Example&lt;/strong&gt;:&lt;br&gt;
A semiconductor company streams sensor data from manufacturing equipment into a data lake for predictive maintenance and yield optimization.&lt;/p&gt;
&lt;h3&gt;
  
  
  Log Aggregation
&lt;/h3&gt;

&lt;p&gt;Kafka centralizes logs from multiple services for monitoring and analysis.&lt;br&gt;
&lt;strong&gt;Example&lt;/strong&gt;:&lt;br&gt;
An industrial automation system sends logs from PLCs and controllers to Kafka, where they’re consumed by a monitoring dashboard for anomaly detection.&lt;/p&gt;
&lt;h3&gt;
  
  
  Replayable History
&lt;/h3&gt;

&lt;p&gt;Kafka retains events for reprocessing or backfilling.&lt;br&gt;
&lt;strong&gt;Example&lt;/strong&gt;:&lt;br&gt;
An insurance company replays past policy events to train a model that predicts claim risk or customer churn. This avoids relying on static snapshots and gives the model a dynamic, time-aware view of behavior.&lt;/p&gt;
&lt;h3&gt;
  
  
  Scalable Microservices Communication
&lt;/h3&gt;

&lt;p&gt;Kafka handles high-throughput messaging across distributed services.&lt;br&gt;
&lt;strong&gt;Example&lt;/strong&gt;:&lt;br&gt;
A financial institution uses Kafka to coordinate customer onboarding, KYC checks, and account provisioning across multiple microservices.&lt;/p&gt;


&lt;h2&gt;
  
  
  3. Architecture
&lt;/h2&gt;

&lt;p&gt;Apache Kafka’s architecture is built for high throughput, fault tolerance, and horizontal scalability. At its core, Kafka relies on a log-based storage model and a distributed broker cluster.&lt;/p&gt;
&lt;h3&gt;
  
  
  Core Components
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Producer&lt;/strong&gt; → Publishes records (events/messages) to topics. Can be idempotent or transactional.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Topic&lt;/strong&gt; → Logical category/feed for messages. Divided into partitions for parallelism.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Partition&lt;/strong&gt; → Ordered, immutable commit log. Records have sequential offsets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Broker&lt;/strong&gt; → A Kafka server that stores partitions. Clusters have multiple brokers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consumer&lt;/strong&gt; → Subscribes to topics and processes messages. Part of a consumer group for scaling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Controller&lt;/strong&gt; → Special broker role that manages metadata, leader election, and partition assignment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Replication&lt;/strong&gt; → Each partition has one leader and multiple followers in the ISR (in-sync replicas).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
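&lt;p&gt;To see how producers, topics, and partitions fit together, consider how a record's partition is chosen. The real client's default partitioner applies murmur2 to the serialized key and masks the sign bit; the sketch below uses &lt;code&gt;String.hashCode()&lt;/code&gt; purely as a stand-in to show the property that matters: the same key always maps to the same partition, which is what preserves per-key ordering.&lt;/p&gt;

```java
public class PartitionSketch {

    // Toy stand-in for Kafka's default partitioner. The real client
    // hashes the serialized key with murmur2 and masks the sign bit;
    // hashCode() plus Math.abs() is enough to illustrate the idea.
    static int partitionFor(String key, int numPartitions) {
        return Math.abs(key.hashCode()) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key always lands on the same partition.
        System.out.println(partitionFor("order-42", 3));
        System.out.println(partitionFor("order-42", 3)); // identical to the line above
    }
}
```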
&lt;h3&gt;
  
  
  Data Flow
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Producers send records to brokers.&lt;/li&gt;
&lt;li&gt;Records are appended to the leader partition log.&lt;/li&gt;
&lt;li&gt;Followers replicate the leader’s log for durability.&lt;/li&gt;
&lt;li&gt;Consumers fetch records from leaders, tracking their offsets.&lt;/li&gt;
&lt;/ul&gt;
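&lt;p&gt;The flow above can be sketched as an append-only log: the broker assigns sequential offsets on append, and each consumer tracks its own read position. This is an in-memory toy, not the real wire protocol, and replication is omitted.&lt;/p&gt;

```java
public class LogSketch {
    private final String[] records = new String[64]; // toy capacity
    private int endOffset = 0;                       // next offset to assign

    // Broker side: append to the leader partition log, return the offset.
    int append(String record) {
        records[endOffset] = record;
        return endOffset++;
    }

    // Consumer side: fetch one record at the consumer's own offset.
    String poll(int consumerOffset) {
        return records[consumerOffset];
    }

    public static void main(String[] args) {
        LogSketch log = new LogSketch();
        log.append("order-created");
        log.append("order-paid");
        int offset = 0;                        // the consumer tracks this itself
        System.out.println(log.poll(offset));  // order-created
        offset++;
        System.out.println(log.poll(offset));  // order-paid
    }
}
```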
&lt;h2&gt;
  
  
  Architecture Diagram
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;               +-----------------+
               |    Producers    |
               +-----------------+
                   |    |    |
                   v    v    v
            +------------------------+
            |     Kafka Cluster      |
            |  +---------+           |
            |  | Broker 1|  &amp;lt;--------------- Partition 0 Leader
            |  +---------+           |
            |  | Broker 2|  &amp;lt;--------------- Partition 0 Follower
            |  +---------+           |
            |  | Broker 3|  &amp;lt;--------------- Partition 1 Leader
            |  +---------+           |
            +------------------------+
                   |    |    |
                   v    v    v
              +-------------------+
              |  Consumer Group   |
              |-------------------|
              | Consumer A → P0   |
              | Consumer B → P1   |
              +-------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. QuickStart with Docker
&lt;/h2&gt;

&lt;p&gt;This configuration sets up a single-node Kafka broker in KRaft mode, so no ZooKeeper is required. It’s ideal for development and testing scenarios.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kafka&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kafka&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apache/kafka:4.1.0&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kafka&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_NODE_ID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_PROCESS_ROLES&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;broker,controller&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_LISTENERS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;BROKER://:9092,CONTROLLER://:9093&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_CONTROLLER_QUORUM_VOTERS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1@localhost:9093&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_CONTROLLER_LISTENER_NAMES&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CONTROLLER&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_INTER_BROKER_LISTENER_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;BROKER&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_LISTENER_SECURITY_PROTOCOL_MAP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;BROKER:PLAINTEXT,CONTROLLER:PLAINTEXT&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_ADVERTISED_LISTENERS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;BROKER://localhost:9092&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_CLUSTER_ID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;kafka-1"&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_TRANSACTION_STATE_LOG_MIN_ISR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_LOG_DIRS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/kafka/data&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;kafka_data:/var/lib/kafka/data&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9092:9092"&lt;/span&gt;
&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kafka_data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to Run
&lt;/h2&gt;

&lt;p&gt;Start the Kafka container using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kafka will be available at localhost:9092 for producers and consumers, and uses port 9093 internally for controller quorum communication.&lt;/p&gt;




&lt;h2&gt;
  
  
  5. Kafka CLI
&lt;/h2&gt;

&lt;p&gt;Before running Kafka commands, log into the Kafka container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker container &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; localhost bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create Topic
&lt;/h3&gt;

&lt;p&gt;Create a topic named quickstart with one partition and a replication factor of 1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/opt/kafka/bin/kafka-topics.sh &lt;span class="nt"&gt;--create&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--bootstrap-server&lt;/span&gt; localhost:9092 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--replication-factor&lt;/span&gt; 1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--partitions&lt;/span&gt; 1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--topic&lt;/span&gt; quickstart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  List Topic
&lt;/h3&gt;

&lt;p&gt;Check all existing topics:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/opt/kafka/bin/kafka-topics.sh &lt;span class="nt"&gt;--list&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--bootstrap-server&lt;/span&gt; localhost:9092
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Consume Message
&lt;/h3&gt;

&lt;p&gt;Read messages from the quickstart topic starting from the beginning:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/opt/kafka/bin/kafka-console-consumer.sh &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--bootstrap-server&lt;/span&gt; localhost:9092 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--topic&lt;/span&gt; quickstart &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from-beginning&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Send Message
&lt;/h3&gt;

&lt;p&gt;You can send messages to the quickstart topic using either direct input or a file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option A: Send a single message
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'This is Event 1'&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
/opt/kafka/bin/kafka-console-producer.sh &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--bootstrap-server&lt;/span&gt; localhost:9092 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--topic&lt;/span&gt; quickstart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Option B: Send multiple messages from a file
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo 'This is Event 2' &amp;gt; messages.txt
echo 'This is Event 3' &amp;gt;&amp;gt; messages.txt
cat messages.txt | \
/opt/kafka/bin/kafka-console-producer.sh \
  --bootstrap-server localhost:9092 \
  --topic quickstart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  6. Spring Boot Integration
&lt;/h2&gt;

&lt;p&gt;This configuration enables seamless integration between a Spring Boot application and an Apache Kafka broker. It defines both producer and consumer settings for message serialization, deserialization, and connection behavior.&lt;/p&gt;

&lt;h3&gt;
  
  
  pom.xml
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="c"&gt;&amp;lt;!-- spring-web --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.springframework.boot&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;spring-boot-starter-web&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;version&amp;gt;&lt;/span&gt;3.4.9&lt;span class="nt"&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;span class="c"&gt;&amp;lt;!-- kafka --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.springframework.kafka&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;spring-kafka&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;version&amp;gt;&lt;/span&gt;3.3.9&lt;span class="nt"&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;span class="c"&gt;&amp;lt;!-- Lombok(optional) --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.projectlombok&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;lombok&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.18.30&lt;span class="nt"&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;optional&amp;gt;&lt;/span&gt;true&lt;span class="nt"&gt;&amp;lt;/optional&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  application.yml
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spring&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kafka&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;bootstrap-servers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:9092&lt;/span&gt;
    &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;default-topic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;orders&lt;/span&gt;
    &lt;span class="na"&gt;consumer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;group-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quickstart-group&lt;/span&gt;
      &lt;span class="na"&gt;auto-offset-reset&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;latest&lt;/span&gt;
      &lt;span class="na"&gt;key-deserializer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;org.apache.kafka.common.serialization.StringDeserializer&lt;/span&gt;
      &lt;span class="na"&gt;value-deserializer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;org.springframework.kafka.support.serializer.JsonDeserializer&lt;/span&gt;
      &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;spring.json.trusted.packages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dev.aratax.messaging.kafka.model"&lt;/span&gt;
    &lt;span class="na"&gt;producer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;key-serializer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;org.apache.kafka.common.serialization.StringSerializer&lt;/span&gt;
      &lt;span class="na"&gt;value-serializer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;org.springframework.kafka.support.serializer.JsonSerializer&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Topic Setup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Bean&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;NewTopic&lt;/span&gt; &lt;span class="nf"&gt;defaultTopic&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;NewTopic&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"orders"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;short&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Event Model
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;OrderEvent&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;Status&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;BigDecimal&lt;/span&gt; &lt;span class="n"&gt;totalAmount&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;Instant&lt;/span&gt; &lt;span class="n"&gt;createdAt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Instant&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;now&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;createdBy&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;enum&lt;/span&gt; &lt;span class="nc"&gt;Status&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="no"&gt;IN_PROGRESS&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
        &lt;span class="no"&gt;COMPLETED&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
        &lt;span class="no"&gt;CANCELLED&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Producer Example
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@RestController&lt;/span&gt;
&lt;span class="nd"&gt;@RequestMapping&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/api"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nd"&gt;@RequiredArgsConstructor&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;OrderEventController&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;KafkaTemplate&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;OrderEvent&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;kafkaTemplate&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="nd"&gt;@PostMapping&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/orders"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nd"&gt;@RequestBody&lt;/span&gt; &lt;span class="nc"&gt;OrderEvent&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setId&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="no"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;randomUUID&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;toString&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
        &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setCreatedAt&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Instant&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;now&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
        &lt;span class="n"&gt;kafkaTemplate&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sendDefault&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getId&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s"&gt;"Order sent to Kafka"&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Consumer Example
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Component&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;OrderEventsListener&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@KafkaListener&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;topics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"orders"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;handle&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;OrderEvent&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Received order: "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  7. Demo Project
&lt;/h2&gt;

&lt;p&gt;I built a demo project using Spring Boot and Kafka to demonstrate basic producer/consumer functionality. &lt;br&gt;
Check it out on GitHub: &lt;a href="https://github.com/arata-x/springboot-kafka-quickstart" rel="noopener noreferrer"&gt;springboot-kafka-quickstart&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  8. Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Kafka is more than a message queue—it's a scalable, durable event streaming platform.&lt;/li&gt;
&lt;li&gt;Events are central to Kafka’s architecture, enabling real-time data flow across systems.&lt;/li&gt;
&lt;li&gt;Docker makes setup easy, allowing you to spin up Kafka locally for development and testing.&lt;/li&gt;
&lt;li&gt;Kafka CLI tools help you explore topics, produce messages, and consume events interactively.&lt;/li&gt;
&lt;li&gt;Spring Boot integration simplifies Kafka usage with built-in support for producers and consumers.&lt;/li&gt;
&lt;li&gt;Real-world use cases span industries like banking, insurance, semiconductor, and automation.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  9. Conclusion
&lt;/h2&gt;

&lt;p&gt;Apache Kafka empowers developers to build reactive, event-driven systems with ease. Whether you're streaming financial transactions, processing insurance claims, or monitoring factory equipment, Kafka provides the backbone for scalable, real-time communication.&lt;/p&gt;

&lt;p&gt;With Docker and Spring Boot, you can get started in minutes—no complex setup required. This quickstart gives you everything you need to explore Kafka hands-on and begin building production-grade event pipelines.&lt;/p&gt;

&lt;p&gt;Ready to go deeper? Try exploring Kafka’s internal design, stream processing with Kafka Streams, or Kafka Connect integrations next.&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>docker</category>
      <category>springboot</category>
    </item>
    <item>
      <title>Redis Sentinel Made Simple: Hands-On High Availability</title>
      <dc:creator>Arata</dc:creator>
      <pubDate>Sun, 24 Aug 2025 11:56:02 +0000</pubDate>
      <link>https://forem.com/aratax/redis-sentinel-made-simple-hands-on-high-availability-25e8</link>
      <guid>https://forem.com/aratax/redis-sentinel-made-simple-hands-on-high-availability-25e8</guid>
      <description>&lt;p&gt;High availability is no longer a luxury — it’s a survival kit for modern applications. Databases crash, servers die, containers get killed (sometimes by accident, sometimes by design). In the world of Redis, &lt;strong&gt;Sentinel&lt;/strong&gt; is the quiet guardian that keeps your cache cluster alive when chaos happens.  &lt;/p&gt;

&lt;p&gt;In this article, I’ll walk you through Redis Sentinel step by step, with a runnable Docker demo and a Spring Boot integration example. By the end, you’ll see failover happening live — and how your application can recover without manual intervention.  &lt;/p&gt;




&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why does Redis Sentinel matter?
&lt;/h3&gt;

&lt;p&gt;Picture this: you’ve got Redis set up with one master and a couple of replicas. Everything’s smooth… until the master suddenly crashes. Now what? Who decides which replica should take over? Who makes sure your clients know where to connect?&lt;br&gt;
👉 That’s exactly the job Sentinel handles for you.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; &lt;strong&gt;Monitors&lt;/strong&gt; your Redis instances.
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Notifies&lt;/strong&gt; you when something goes wrong.
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Automatically promotes&lt;/strong&gt; a replica to master.
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Redirects clients&lt;/strong&gt; to the new master.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sentinel is the difference between a cache outage and a smooth failover.  &lt;/p&gt;


&lt;h2&gt;
  
  
  2. What is Redis Sentinel?
&lt;/h2&gt;

&lt;p&gt;At its core, Redis Sentinel is a distributed system that provides:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; Monitoring – constantly checking whether your master and replicas are alive.
&lt;/li&gt;
&lt;li&gt; Notification – alerting operators (or systems) when something goes wrong.
&lt;/li&gt;
&lt;li&gt; Automatic Failover – promoting a replica when the master is unavailable.
&lt;/li&gt;
&lt;li&gt; Client Redirection – letting apps connect to the new master automatically.
&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  3. Sentinel Architecture
&lt;/h2&gt;

&lt;p&gt;A Sentinel deployment usually includes multiple Sentinel nodes plus your Redis master and replicas. Sentinels work together, reaching &lt;strong&gt;quorum&lt;/strong&gt; before deciding a master is truly dead.  &lt;/p&gt;

&lt;p&gt;Key concepts:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; &lt;strong&gt;SDOWN (Subjectively Down):&lt;/strong&gt; One Sentinel thinks the master is down.
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;ODOWN (Objectively Down):&lt;/strong&gt; Enough Sentinels agree the master is down.
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Replica Priority:&lt;/strong&gt; Determines which replica should be promoted first.
&lt;/li&gt;
&lt;/ul&gt;
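&lt;p&gt;The SDOWN-to-ODOWN transition is essentially a vote count compared against the quorum. Here is a toy illustration with three Sentinels and a quorum of 2, matching the configuration used later in this article:&lt;/p&gt;

```java
public class QuorumSketch {

    // ODOWN is declared once the number of Sentinels reporting SDOWN
    // reaches the configured quorum.
    static boolean isObjectivelyDown(boolean[] sdownVotes, int quorum) {
        int votes = 0;
        for (int i = 0; i != sdownVotes.length; i++) {
            if (sdownVotes[i]) {
                votes++;
            }
        }
        // true exactly when votes has reached the quorum
        return Math.max(votes, quorum) == votes;
    }

    public static void main(String[] args) {
        boolean[] votes = { true, true, false }; // Sentinel #1 and #2 see SDOWN
        System.out.println(isObjectivelyDown(votes, 2)); // true: ODOWN declared
    }
}
```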
&lt;h3&gt;
  
  
  Deployment Diagram
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+-------------------+       +-------------------+
|   Sentinel #1     |       |   Sentinel #2     |
+-------------------+       +-------------------+
           \                     /
            \                   /
             \   Quorum Vote   /
              \               /
            +-------------------+
            |   Sentinel #3     |
            +-------------------+
                   |
                   v
            +-------------------+
            | Redis Master      |
            +-------------------+
              /          \
             v            v
   +----------------+   +----------------+
   | Redis Replica1 |   | Redis Replica2 |
   +----------------+   +----------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Setting Up Redis Sentinel
&lt;/h2&gt;

&lt;p&gt;We use Docker Compose with one master, two replicas, and three Sentinels.  &lt;/p&gt;
&lt;h3&gt;
  
  
  Redis Sentinel Config
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sentinel announce-ip "127.0.0.1"
sentinel announce-port 26379
# Sentinel 6.2 and above can resolve hostnames, but this is not enabled by default.
sentinel resolve-hostnames yes
# Monitor master named "mymaster" at 127.0.0.1(or domain name):6379 with quorum of 2
sentinel monitor mymaster 127.0.0.1 6379 2
# Master is considered down after 5 seconds of no response
sentinel down-after-milliseconds mymaster 5000
# Failover timeout 18 seconds
sentinel failover-timeout mymaster 18000

# Lines below 'Generated by CONFIG REWRITE' are managed by Sentinel itself,
# so the config file must be writable.
# Generated by CONFIG REWRITE 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Ways to Run Sentinel
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;redis-sentinel /etc/redis/sentinel.conf
# or
redis-server /etc/redis/sentinel.conf --sentinel
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Useful Redis CLI Commands
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Start Sentinel's monitoring.
SENTINEL MONITOR &amp;lt;master name&amp;gt; &amp;lt;ip&amp;gt; &amp;lt;port&amp;gt; &amp;lt;quorum&amp;gt;
#Stop Sentinel's monitoring.
SENTINEL REMOVE &amp;lt;master name&amp;gt;
#Set Sentinel's monitoring configuration. 
SENTINEL SET &amp;lt;master name&amp;gt; &amp;lt;option&amp;gt; &amp;lt;value&amp;gt;
#(&amp;gt;= 5.0) Show a list of replicas for this master, and their state.
SENTINEL REPLICAS &amp;lt;master name&amp;gt; 
#Show a list of sentinel instances for this master, and their state.
SENTINEL SENTINELS &amp;lt;master name&amp;gt;
#Force a failover as if the master were unreachable, without asking the other Sentinels for agreement
#(a new version of the configuration is still published so the other Sentinels update their own;
#this is called 'configuration propagation').
SENTINEL FAILOVER &amp;lt;master name&amp;gt;
#Display information about this instance, including its role.
INFO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Docker Compose
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  redis-sentinel-1:
    image: bitnami/redis-sentinel:8.0.3
    container_name: redis-sentinel-1
    ports:
      # Sentinel behind Docker NAT can announce unreachable addresses; keep the port mapping 1:1
      - "26379:26379"
    environment:
      - ALLOW_EMPTY_PASSWORD=yes   
    volumes:
      # Use with caution regarding permissions.
      - redis-sentinel-1-data:/bitnami/redis-sentinel
      - ./redis-sentinel-1:/usr/local/etc/redis-sentinel
    # Host networking avoids the NAT address-announcement issues entirely.
    network_mode: host
    depends_on:
      - redis-master
      - redis-replica-1
      - redis-replica-2
    restart: unless-stopped
    command: ["redis-sentinel", "/usr/local/etc/redis-sentinel/sentinel.conf"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  5. Redis Docker Demo
&lt;/h2&gt;

&lt;p&gt;Clone the demo project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/arata-x/redis-ha.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docker Setup/Run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd redis-ha/docker/redis/sentinel
docker-compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Simulate master crash:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;kill &lt;/span&gt;redis-master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Sentinels will detect the failure, reach quorum, and promote a replica to complete the failover.&lt;/p&gt;
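&lt;p&gt;Which replica gets promoted? Sentinel prefers the lowest &lt;code&gt;replica-priority&lt;/code&gt; and, on a tie, the largest replication offset (the most up-to-date replica). Here is a simplified sketch of that selection; real Sentinel also considers run IDs and link health:&lt;/p&gt;

```java
public class ReplicaSelectionSketch {

    // Returns the index of the replica to promote: the lowest priority
    // value wins, and ties are broken by the largest replication offset.
    static int promote(int[] priority, long[] replOffset) {
        int best = 0;
        for (int i = 1; i != priority.length; i++) {
            if (priority[i] != priority[best]) {
                // a strictly lower priority value wins
                if (Math.min(priority[i], priority[best]) == priority[i]) {
                    best = i;
                }
            } else if (replOffset[i] != replOffset[best]) {
                // same priority: the replica with more replicated data wins
                if (Math.max(replOffset[i], replOffset[best]) == replOffset[i]) {
                    best = i;
                }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        int[] priority = { 100, 100, 50 };
        long[] offset  = { 900L, 1000L, 800L };
        System.out.println(promote(priority, offset)); // 2: lowest priority wins outright
    }
}
```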




&lt;h2&gt;
  
  
  6. Spring Boot Integration
&lt;/h2&gt;

&lt;p&gt;Spring Boot supports Sentinel natively via &lt;code&gt;spring-boot-starter-data-redis&lt;/code&gt;. Here’s how to configure it.&lt;/p&gt;

&lt;h3&gt;
  
  
  pom.xml
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
  &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
  &amp;lt;artifactId&amp;gt;spring-boot-starter-data-redis-reactive&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  application.yml
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  data:
    redis:
      sentinel:
        master: mymaster
        nodes:
          - redis-sentinel-1:26379
          - redis-sentinel-2:26379
          - redis-sentinel-3:26379
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Spring Boot Config for Pub/Sub Messages (Optional)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  @Bean(destroyMethod = "shutdown")
  public RedisClient sentinelClient() {
    return RedisClient.create("redis://127.0.0.1:26379");
  }

  @Bean(destroyMethod = "close")
  public StatefulRedisPubSubConnection&amp;lt;String, String&amp;gt; sentinelPubSub(RedisClient client) {
    var conn = client.connectPubSub();
    conn.addListener(new RedisPubSubAdapter&amp;lt;&amp;gt;() {
      @Override public void message(String channel, String message) {
        log.info("Sentinel event [{}] {}", channel, message);
      }
    });

    // subscribe to key Sentinel events (or use psubscribe("*") to get all)
    conn.sync().subscribe(
        "+switch-master",        // master changed
        "+sdown", "-sdown",      // subjective down / cleared
        "+odown", "-odown",      // objective down / cleared (masters only)
        "+try-failover",
        "+failover-state-*"
    );
    return conn;
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way, clients automatically reconnect after a failover, and every Sentinel event is logged as it arrives.&lt;/p&gt;




&lt;h2&gt;
  
  
  7. Testing Failover &amp;amp; Logs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Failover Timeline
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;t0: Master alive
t1: Master killed  -&amp;gt; SDOWN
t2: Quorum reached -&amp;gt; ODOWN
t3: Leader elected -&amp;gt; VOTE
t4: Master elected -&amp;gt; PROMOTE
t5: New master active -&amp;gt; CLIENTS REDIRECT
t6: Replica detected -&amp;gt; SLAVE
t7: Old master back -&amp;gt; SLAVE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
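How long t1 to t5 takes is dominated by `down-after-milliseconds` in `sentinel.conf`; the vote and promotion usually add only a second or two on a healthy network. A rough, back-of-envelope estimate (both values below are assumptions for illustration, not measurements):

```shell
down_after_ms=5000   # sentinel down-after-milliseconds (assumed setting)
election_ms=1000     # rough allowance for the vote + promotion (illustrative)
total_ms=$(( down_after_ms + election_ms ))
echo "expect roughly $(( total_ms / 1000 ))s from master death to new master"
```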



&lt;h3&gt;
  
  
  Docker logs:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;redis-sentinel-1  | 1:X 24 Aug 2025 01:29:56.652 * Sentinel ID is 45f2090cc345fd2a0a9afad89d45d3c212816390
redis-sentinel-3  | 1:X 24 Aug 2025 01:29:56.670 * Sentinel ID is 72098a7942ff006106511dbb0db3044b00fa5473
redis-sentinel-2  | 1:X 24 Aug 2025 01:29:56.690 * Sentinel ID is b87c2be6edf6192e03783f1ed1647af7fa2b51f6
# Simulate the master down via command 'docker container kill redis-master' and the Failover will start.
redis-sentinel-1  | 1:X 24 Aug 2025 01:30:32.047 # +sdown master mymaster redis-master 6379
redis-sentinel-2  | 1:X 24 Aug 2025 01:30:32.067 # +sdown master mymaster redis-master 6379
redis-sentinel-3  | 1:X 24 Aug 2025 01:30:32.107 # +sdown master mymaster redis-master 6379
redis-sentinel-2  | 1:X 24 Aug 2025 01:30:32.144 # +odown master mymaster redis-master 6379 #quorum 2/2
redis-sentinel-2  | 1:X 24 Aug 2025 01:30:32.144 # +try-failover master mymaster redis-master 6379
redis-sentinel-2  | 1:X 24 Aug 2025 01:30:32.151 # +vote-for-leader b87c2be6edf6192e03783f1ed1647af7fa2b51f6 1
redis-sentinel-3  | 1:X 24 Aug 2025 01:30:32.166 # +vote-for-leader b87c2be6edf6192e03783f1ed1647af7fa2b51f6 1
redis-sentinel-1  | 1:X 24 Aug 2025 01:30:32.167 # +vote-for-leader b87c2be6edf6192e03783f1ed1647af7fa2b51f6 1
redis-sentinel-2  | 1:X 24 Aug 2025 01:30:32.244 # +elected-leader master mymaster redis-master 6379
redis-sentinel-2  | 1:X 24 Aug 2025 01:30:32.244 # +failover-state-select-slave master mymaster redis-master 6379
redis-sentinel-2  | 1:X 24 Aug 2025 01:30:32.299 # +selected-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster redis-master 6379
redis-sentinel-2  | 1:X 24 Aug 2025 01:30:32.299 * +failover-state-send-slaveof-noone slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster redis-master 6379
redis-sentinel-2  | 1:X 24 Aug 2025 01:30:33.215 # +promoted-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster redis-master 6379
redis-sentinel-3  | 1:X 24 Aug 2025 01:30:33.263 # +switch-master mymaster redis-master 6379 127.0.0.1 6381
redis-sentinel-3  | 1:X 24 Aug 2025 01:30:33.264 * +slave slave redis-master:6379 redis-master 6379 @ mymaster 127.0.0.1 6381 
# Restore master via command 'docker container start redis-master' and master will be the replica.
redis-sentinel-2  | 1:X 24 Aug 2025 01:30:34.096 # -sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6380
redis-master      | 1:S 24 Aug 2025 01:30:34.236 * Before turning into a replica, using my own master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
redis-master      | 1:S 24 Aug 2025 01:30:34.236 * Connecting to MASTER 127.0.0.1:6380
redis-sentinel-1  | 1:X 24 Aug 2025 01:30:34.236 * +convert-to-slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6380
redis-replica-1   | 1:M 24 Aug 2025 01:30:34.447 * Synchronization with replica 127.0.0.1:6379 succeeded
redis-master      | 1:S 24 Aug 2025 01:30:34.447 * MASTER &amp;lt;-&amp;gt; REPLICA sync: Successfully streamed replication buffer into the db (0 bytes in total)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Redis Event List
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;+slave  -- A new replica was detected and attached.&lt;/li&gt;
&lt;li&gt;+sdown  -- The specified instance is now in Subjectively Down state.&lt;/li&gt;
&lt;li&gt;+odown  -- The specified instance is now in Objectively Down state.&lt;/li&gt;
&lt;li&gt;+try-failover  -- New failover in progress, waiting to be elected by the majority.&lt;/li&gt;
&lt;li&gt;+vote-for-leader  -- This Sentinel voted for another Sentinel to lead the failover in the given epoch.&lt;/li&gt;
&lt;li&gt;+elected-leader  -- Won the election for the specified epoch; this Sentinel will run the failover.&lt;/li&gt;
&lt;li&gt;+failover-state-select-slave  -- New failover state is select-slave: we are trying to find a suitable replica for promotion.&lt;/li&gt;
&lt;li&gt;+switch-master  -- The master address changed; clients should reconnect to the new master.&lt;/li&gt;
&lt;li&gt;+convert-to-slave  -- The returning old master was reconfigured as a replica of the new master.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Spring Boot logs via Redis Pub/Sub:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2025-08-24T01:34:46.946+08:00  INFO 44256 --- [redis-reactive-demo] [ioEventLoop-7-1] d.a.redis.config.RedisConfigSentinel     : Sentinel event [+sdown] master mymaster 127.0.0.1 6379
2025-08-24T01:34:48.055+08:00  INFO 44256 --- [redis-reactive-demo] [ioEventLoop-7-1] d.a.redis.config.RedisConfigSentinel     : Sentinel event [+odown] master mymaster 127.0.0.1 6379 #quorum 3/2
2025-08-24T01:34:48.176+08:00  INFO 44256 --- [redis-reactive-demo] [ioEventLoop-7-1] d.a.redis.config.RedisConfigSentinel     : Sentinel event [+switch-master] mymaster 127.0.0.1 6379 127.0.0.1 6381
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  8. Best Practices
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;strong&gt;at least 3 Sentinels&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Distribute Sentinels across nodes for resilience.
&lt;/li&gt;
&lt;li&gt;Tune &lt;code&gt;failover-timeout&lt;/code&gt; and &lt;code&gt;down-after-milliseconds&lt;/code&gt;.
&lt;/li&gt;
&lt;/ul&gt;
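One subtlety worth internalizing when tuning these settings: the configured quorum only gates the ODOWN declaration; authorizing the actual failover always requires a majority of all known Sentinels. A quick sketch of the two thresholds:

```shell
sentinels=3                        # total Sentinels deployed
quorum=2                           # agreement needed to declare ODOWN (from sentinel.conf)
majority=$(( sentinels / 2 + 1 ))  # votes needed to authorize the failover itself
echo "ODOWN after ${quorum} agree; failover needs ${majority} of ${sentinels}"
```

This is why a two-Sentinel deployment cannot survive the loss of one node even with `quorum 1`.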




&lt;h2&gt;
  
  
  9. Final Thoughts
&lt;/h2&gt;

&lt;p&gt;🚦 Think of Redis Sentinel as your system’s insurance policy.&lt;br&gt;
Most of the time, you’ll never notice it quietly standing guard in the background. But the moment your master node takes a dive, Sentinel steps in to keep traffic flowing — and you’ll be very glad it was there all along.&lt;/p&gt;

&lt;p&gt;👉 Use Sentinel when you want simple, lightweight high availability.&lt;br&gt;
It doesn’t complicate your setup and gets the job done for most HA needs.&lt;/p&gt;

&lt;p&gt;⚡ But if your workload demands both horizontal scaling (sharding) and HA, that’s where Redis Cluster shines. Sentinel won’t replace Cluster — they solve different problems.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔗 Demo project: &lt;a href="https://github.com/arata-x/redis-ha" rel="noopener noreferrer"&gt;Redis Sentinel&lt;/a&gt;
&lt;/h2&gt;

</description>
      <category>redis</category>
      <category>springboot</category>
    </item>
    <item>
      <title>Redis Replication Made Simple: With Spring Boot Integration</title>
      <dc:creator>Arata</dc:creator>
      <pubDate>Sat, 09 Aug 2025 15:54:45 +0000</pubDate>
      <link>https://forem.com/aratax/redis-replication-made-simple-with-spring-boot-integration-18mi</link>
      <guid>https://forem.com/aratax/redis-replication-made-simple-with-spring-boot-integration-18mi</guid>
      <description>&lt;p&gt;Imagine it’s 3 AM. Your Redis server—yes, the one holding all your app’s session data—just crashed. Your team’s phones are buzzing. Users are locked out, and panic is rising.  &lt;/p&gt;

&lt;p&gt;What if I told you this nightmare could be avoided with a simple feature built right into Redis? Enter &lt;strong&gt;Redis replication&lt;/strong&gt;—your built-in safeguard for data availability, read scaling, and peace of mind.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔑 What Is Redis Replication?
&lt;/h2&gt;

&lt;p&gt;At its core, &lt;strong&gt;Redis replication&lt;/strong&gt; enables a single Redis instance (the &lt;strong&gt;primary&lt;/strong&gt;) to automatically copy its data to one or more &lt;strong&gt;replicas&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;primary&lt;/strong&gt; handles all write operations.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replicas&lt;/strong&gt; stay in sync and serve read requests, reducing the load on the primary.
&lt;/li&gt;
&lt;li&gt;If the primary fails, replicas can quickly take over.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This fundamental setup lays the groundwork for high availability and scaling in Redis environments.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚙️ How Does It Work?
&lt;/h2&gt;

&lt;p&gt;Redis replication works in three key stages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Initial Sync:&lt;/strong&gt; A replica requests a full snapshot (RDB) from the primary, loads it, and applies any updates.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Command Streaming:&lt;/strong&gt; Once synced, the replica continuously receives write commands from the primary to stay current.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Partial Resync (PSYNC2):&lt;/strong&gt; If a replica temporarily disconnects, it resumes from where it left off using Redis’s backlog buffer—avoiding a full resync.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This process is &lt;strong&gt;asynchronous&lt;/strong&gt;, which means replicas may lag slightly but offer high throughput.&lt;/p&gt;
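Because replication is asynchronous, a replica's position can trail the primary's. The gap is visible in `INFO replication` as the difference between `master_repl_offset` and the replica's `offset`. A sketch parsing a captured (abridged) reply; the field names are real, the numbers are made up:

```shell
# Abridged output of: redis-cli -p 6379 INFO replication
info="master_repl_offset:100042
slave0:ip=127.0.0.1,port=6380,state=online,offset=100000,lag=0"
master_offset=$(printf '%s\n' "$info" | sed -n 's/^master_repl_offset:\([0-9]*\).*/\1/p')
replica_offset=$(printf '%s\n' "$info" | sed -n 's/^slave0:.*offset=\([0-9]*\).*/\1/p')
echo "replica trails the primary by $(( master_offset - replica_offset )) bytes"
```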




&lt;h2&gt;
  
  
  🖥 Setting Up Replication (Primary + Two Replicas)
&lt;/h2&gt;

&lt;p&gt;Here’s how to launch a simple primary-replica setup locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start Primary&lt;/span&gt;
redis-server &lt;span class="nt"&gt;--port&lt;/span&gt; 6379

&lt;span class="c"&gt;# Start Replica 1&lt;/span&gt;
redis-server &lt;span class="nt"&gt;--port&lt;/span&gt; 6380 &lt;span class="nt"&gt;--replicaof&lt;/span&gt; 127.0.0.1 6379

&lt;span class="c"&gt;# Start Replica 2&lt;/span&gt;
redis-server &lt;span class="nt"&gt;--port&lt;/span&gt; 6381 &lt;span class="nt"&gt;--replicaof&lt;/span&gt; 127.0.0.1 6379
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, reads can be routed to replicas while writes continue to flow to the primary.&lt;/p&gt;
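A quick sanity check that the roles behave as expected (assuming the three instances above are running): replicas default to `replica-read-only yes`, so a write against one is rejected. A sample session:

```text
$ redis-cli -p 6379 SET greeting hello
OK
$ redis-cli -p 6380 GET greeting
"hello"
$ redis-cli -p 6380 SET greeting bye
(error) READONLY You can't write against a read only replica.
```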




&lt;h2&gt;
  
  
  🆕 Chained Replication (Replica of a Replica)
&lt;/h2&gt;

&lt;p&gt;Beyond basic replication, Redis supports &lt;strong&gt;chained replication&lt;/strong&gt;, where a replica can act as a source for another replica.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Use It?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reduce primary load:&lt;/strong&gt; Only one replica pulls directly from the primary.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regional optimization:&lt;/strong&gt; Place replicas closer to users while syncing through a nearer node.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better bandwidth usage:&lt;/strong&gt; Ideal for distributed or high-latency networks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Primary&lt;/span&gt;
redis-server &lt;span class="nt"&gt;--port&lt;/span&gt; 6379

&lt;span class="c"&gt;# Replica 1 (syncs from primary)&lt;/span&gt;
redis-server &lt;span class="nt"&gt;--port&lt;/span&gt; 6380 &lt;span class="nt"&gt;--replicaof&lt;/span&gt; 127.0.0.1 6379

&lt;span class="c"&gt;# Replica 2 (syncs from Replica 1)&lt;/span&gt;
redis-server &lt;span class="nt"&gt;--port&lt;/span&gt; 6381 &lt;span class="nt"&gt;--replicaof&lt;/span&gt; 127.0.0.1 6380
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  ✒Deployment Diagram
&lt;/h2&gt;

&lt;p&gt;This diagram shows a primary node with two direct replicas and one chained replica.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;          +--------+
          | Server |
          +---+----+
              |
             WRITE
              v
          +--------+
          | Master |
          +---+----+
          /        \
     SYNC/          \SYNC
        v            v
+-------+--+    +----+------+
| Replica  |    |  Replica  |
+----+-----+    +-----+-----+
     |
 CHAINED SYNC
     v
+----+-----+
| Replica  |
+----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  ⚡ Diskless Replication
&lt;/h2&gt;

&lt;p&gt;To further speed up initial synchronization, enable &lt;strong&gt;diskless replication&lt;/strong&gt;, which streams snapshots directly to replicas:&lt;/p&gt;

&lt;p&gt;redis.conf (master)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="n"&gt;repl&lt;/span&gt;-&lt;span class="n"&gt;diskless&lt;/span&gt;-&lt;span class="n"&gt;sync&lt;/span&gt; &lt;span class="n"&gt;yes&lt;/span&gt;
&lt;span class="n"&gt;repl&lt;/span&gt;-&lt;span class="n"&gt;diskless&lt;/span&gt;-&lt;span class="n"&gt;sync&lt;/span&gt;-&lt;span class="n"&gt;delay&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;redis.conf (replica)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="n"&gt;replicaof&lt;/span&gt; &lt;span class="n"&gt;redis&lt;/span&gt;-&lt;span class="n"&gt;master&lt;/span&gt; &lt;span class="m"&gt;6379&lt;/span&gt;
&lt;span class="n"&gt;replica&lt;/span&gt;-&lt;span class="n"&gt;read&lt;/span&gt;-&lt;span class="n"&gt;only&lt;/span&gt; &lt;span class="n"&gt;yes&lt;/span&gt;
&lt;span class="n"&gt;repl&lt;/span&gt;-&lt;span class="n"&gt;diskless&lt;/span&gt;-&lt;span class="n"&gt;load&lt;/span&gt; &lt;span class="n"&gt;on&lt;/span&gt;-&lt;span class="n"&gt;empty&lt;/span&gt;-&lt;span class="n"&gt;db&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This avoids writing intermediate files to disk and is ideal for large datasets or high-performance environments.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔧 Spring Boot with Redis Replicas
&lt;/h2&gt;

&lt;p&gt;Let’s integrate this into a Spring Boot project for practical use.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dependency:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.springframework.boot&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;spring-boot-starter-data-redis&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configuration:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Bean&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;LettuceConnectionFactory&lt;/span&gt; &lt;span class="nf"&gt;redisConnectionFactory&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;RedisStaticMasterReplicaConfiguration&lt;/span&gt; &lt;span class="n"&gt;masterReplicaConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
            &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;RedisStaticMasterReplicaConfiguration&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"127.0.0.1"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;6379&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;masterReplicaConfig&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addNode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"127.0.0.1"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;6380&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;masterReplicaConfig&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addNode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"127.0.0.1"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;6381&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;masterReplicaConfig&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setPassword&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;RedisPassword&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;of&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"myRedisPass"&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;

    &lt;span class="nc"&gt;LettuceClientConfiguration&lt;/span&gt; &lt;span class="n"&gt;clientConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LettuceClientConfiguration&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;builder&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;readFrom&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;ReadFrom&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ANY_REPLICA&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;LettuceConnectionFactory&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;masterReplicaConfig&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;clientConfig&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration connects Spring Boot to a primary and its replicas, preferring replica reads automatically.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠 Verifying Reads Are Hitting Replicas
&lt;/h2&gt;

&lt;p&gt;To confirm that reads hit replicas rather than the primary:&lt;/p&gt;

&lt;h3&gt;
  
  
  1️⃣ Monitor Replica Activity
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;redis-cli &lt;span class="nt"&gt;-p&lt;/span&gt; 6380 MONITOR
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Execute a read query and see it logged on the replica.&lt;/p&gt;
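If the read/write split is working, the `GET` issued by the Spring Boot app appears in the replica's `MONITOR` stream in this shape (the timestamp and client address below are illustrative):

```text
1692841200.123456 [0 127.0.0.1:53210] "GET" "user:1"
```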

&lt;h3&gt;
  
  
  2️⃣ In Spring Boot
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Autowired&lt;/span&gt;
&lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;RedisTemplate&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Object&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;redisTemplate&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="nd"&gt;@Bean&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;CommandLineRunner&lt;/span&gt; &lt;span class="nf"&gt;testRedis&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;ValueOperations&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Object&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;ops&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;redisTemplate&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;opsForValue&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="n"&gt;ops&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;set&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"user:1"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Alice"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Write -&amp;gt; Master&lt;/span&gt;
        &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ops&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"user:1"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Read -&amp;gt; Replica&lt;/span&gt;
        &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Read value (replica preferred): "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;};&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  ⚠️ Limitations of Replicas Alone
&lt;/h2&gt;

&lt;p&gt;While replication improves resilience, it doesn’t guarantee full high availability on its own.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No automatic failover:&lt;/strong&gt; Promotion must be done manually without Sentinel or Cluster.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Asynchronous replication:&lt;/strong&gt; Recent writes might be lost if the primary fails before syncing.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Single control point:&lt;/strong&gt; The primary remains the bottleneck for writes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These gaps highlight why replication is essential but insufficient for full HA in production environments.&lt;/p&gt;
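The write-loss window of asynchronous replication can be narrowed (though not closed) on the primary by refusing writes when replicas fall behind. A sketch for `redis.conf`; the values are illustrative:

```conf
# Reject writes unless at least 1 replica is connected
# and has acknowledged within the last 10 seconds.
min-replicas-to-write 1
min-replicas-max-lag 10
```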




&lt;h2&gt;
  
  
  ✅ Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Redis replication is a &lt;strong&gt;simple yet powerful&lt;/strong&gt; way to protect against single points of failure, scale reads, and prepare for failover. Its nature—one primary continuously mirrored by one or more replicas—ensures that your data is &lt;strong&gt;redundant, accessible, and performance-optimized&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why use replicas?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep a live backup ready for emergencies.&lt;/li&gt;
&lt;li&gt;Reduce read load on the primary.&lt;/li&gt;
&lt;li&gt;Improve latency with geographically placed replicas.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key takeaway&lt;/strong&gt;: Replication is your &lt;strong&gt;first step&lt;/strong&gt; toward high availability. Pair it with &lt;strong&gt;Sentinel&lt;/strong&gt; for automatic failover or &lt;strong&gt;Cluster&lt;/strong&gt; for sharding to achieve a &lt;strong&gt;production-grade, fault-tolerant Redis deployment&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠 Demo Project for Readers
&lt;/h2&gt;

&lt;p&gt;I have created a &lt;strong&gt;demo project&lt;/strong&gt; that showcases practical usage of Redis replication.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project Includes:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt;: Pre-configured Redis master &amp;amp; replicas using &lt;code&gt;docker-compose&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spring Boot&lt;/strong&gt;: Example backend service demonstrating Redis read/write splitting.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔗 Access the Project
&lt;/h3&gt;

&lt;p&gt;You can clone or explore the project from my repository :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/arata-x/redis-ha.git
cd redis-ha
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://redis.io/docs/latest/operate/oss_and_stack/management/replication/" rel="noopener noreferrer"&gt;Official Replication Documentation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.spring.io/spring-data/redis/docs/current/reference/html/" rel="noopener noreferrer"&gt;Spring Data Redis Reference Guide&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>redis</category>
      <category>springboot</category>
      <category>database</category>
      <category>replication</category>
    </item>
  </channel>
</rss>
