<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Shivam Kumar</title>
    <description>The latest articles on Forem by Shivam Kumar (@bharadwajshivam).</description>
    <link>https://forem.com/bharadwajshivam</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1974856%2Ff2396b89-8d64-4d4d-8a27-f0ff5acf65c4.jpeg</url>
      <title>Forem: Shivam Kumar</title>
      <link>https://forem.com/bharadwajshivam</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/bharadwajshivam"/>
    <language>en</language>
    <item>
      <title>Kube-Proxy and CNI: The Backbone of Kubernetes Networking</title>
      <dc:creator>Shivam Kumar</dc:creator>
      <pubDate>Sat, 03 Jan 2026 18:55:33 +0000</pubDate>
      <link>https://forem.com/bharadwajshivam/kube-proxy-and-cni-the-backbone-of-kubernetes-networking-1pbk</link>
      <guid>https://forem.com/bharadwajshivam/kube-proxy-and-cni-the-backbone-of-kubernetes-networking-1pbk</guid>
      <description>&lt;p&gt;Kubernetes networking looks simple — every Pod gets an IP, and Services route traffic automatically.&lt;/p&gt;

&lt;p&gt;Behind this simplicity are two core components that make everything work: CNI and kube-proxy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kube-Proxy
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1j1bkxmjckfss5t725rl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1j1bkxmjckfss5t725rl.png" alt=" " width="800" height="512"&gt;&lt;/a&gt;&lt;br&gt;
Kube-Proxy is a network proxy that runs on each node in the cluster. When a Service is created, it sets up the necessary network rules to route incoming requests to one of the Pods backing the Service. Depending on the configuration, this can involve iptables, IPVS (IP Virtual Server), or other networking mechanisms.&lt;/p&gt;

&lt;p&gt;Internally, kube-proxy's major work involves tasks like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It makes Services actually route traffic by programming the rules that forward requests to the Pods.&lt;/li&gt;
&lt;li&gt;It keeps the Service IP behaving like a stable endpoint even though Pods are ephemeral, translating the Service IP to a Pod IP using iptables or IPVS rules.&lt;/li&gt;
&lt;li&gt;When a new Service or endpoint is created, kube-proxy sets up routing rules on the node. If a Pod backing the Service is deleted or a new Pod is added, kube-proxy updates the rules to maintain the Service’s availability.&lt;/li&gt;
&lt;li&gt;It enables service-level traffic distribution through kernel networking rules (iptables or IPVS), allowing traffic sent to a Service IP to be forwarded to one of the backing Pods.&lt;/li&gt;
&lt;li&gt;Without kube-proxy, Service IPs would exist but would not route traffic to Pods. kube-proxy wires Service IPs and ports to actual Pod endpoints.&lt;/li&gt;
&lt;/ul&gt;
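&lt;p&gt;The Service-IP-to-Pod-IP translation described above can be sketched conceptually in Go. This is only a toy model: real kube-proxy programs kernel rules (iptables/IPVS) rather than looking anything up in user space, and the Service IP and Pod IPs below are made up for illustration.&lt;/p&gt;

```go
package main

import "fmt"

// Conceptual model only: kube-proxy really programs kernel rules
// (iptables or IPVS); this sketch just shows the mapping it maintains.
// The Service VIP and Pod IPs are invented for illustration.

// serviceTable maps a Service's virtual IP to its backing Pod IPs.
var serviceTable = map[string][]string{
	"10.96.0.10": {"10.244.1.5", "10.244.2.7", "10.244.3.2"},
}

// resolve picks a backend for a Service IP, round-robin style.
func resolve(serviceIP string, counter int) (string, bool) {
	backends, ok := serviceTable[serviceIP]
	if !ok || len(backends) == 0 {
		return "", false
	}
	return backends[counter%len(backends)], true
}

func main() {
	for i := 0; i != 4; i++ {
		podIP, ok := resolve("10.96.0.10", i)
		if ok {
			fmt.Println("request", i, "forwarded to", podIP)
		}
	}
}
```

&lt;p&gt;When the set of backing Pods changes, kube-proxy rewrites this mapping in the kernel, which is why clients can keep using the stable Service IP.&lt;/p&gt;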

&lt;h2&gt;
  
  
  Container Network Interface (CNI)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvj77mtdo6t2yi42cu46z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvj77mtdo6t2yi42cu46z.png" alt=" " width="800" height="654"&gt;&lt;/a&gt;&lt;br&gt;
In Kubernetes, CNI is the standard way to provide networking to Pods. Its main purpose is to allow different networking plugins to be used with container runtimes, which keeps Kubernetes flexible and able to work with different networking solutions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Assigns a unique IP to each Pod so that it can communicate within the cluster.&lt;/li&gt;
&lt;li&gt;It creates and configures the Pod's network interface (typically a veth pair), placing one end inside the Pod's network namespace and leaving the other on the node.&lt;/li&gt;
&lt;li&gt;It sets up routing rules so that Pods can communicate with other Pods across nodes.&lt;/li&gt;
&lt;/ul&gt;
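&lt;p&gt;The first responsibility, handing each Pod a unique address from the node's Pod CIDR, can be sketched as a toy IPAM in Go. A real CNI plugin (Flannel, Calico, Cilium, etc.) does this with proper CIDR handling, persistence, and a veth pair per Pod; the subnet and Pod names here are invented for illustration.&lt;/p&gt;

```go
package main

import "fmt"

// Toy IPAM sketch: only illustrates the "one unique IP per Pod"
// responsibility of a CNI plugin. The subnet and Pod names are
// made-up examples, not real cluster values.

var (
	subnet = "10.244.1." // this node's Pod subnet (assumed /24)
	nextIP = 2           // .0 is the network address, .1 the gateway
	leases = map[string]string{}
)

// allocate hands out the next free address in the node subnet,
// and returns the same address if the Pod already has a lease.
func allocate(pod string) string {
	if ip, ok := leases[pod]; ok {
		return ip // idempotent: the same Pod keeps its IP
	}
	ip := fmt.Sprintf("%s%d", subnet, nextIP)
	nextIP++
	leases[pod] = ip
	return ip
}

func main() {
	fmt.Println("nginx-1 gets", allocate("nginx-1"))
	fmt.Println("nginx-2 gets", allocate("nginx-2"))
	fmt.Println("nginx-1 again:", allocate("nginx-1"))
}
```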

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>containers</category>
      <category>sre</category>
    </item>
    <item>
      <title>PostgreSQL WAL: The Backbone of Reliable and Scalable Databases</title>
      <dc:creator>Shivam Kumar</dc:creator>
      <pubDate>Sat, 11 Oct 2025 19:43:49 +0000</pubDate>
      <link>https://forem.com/bharadwajshivam/postgresql-wal-the-backbone-of-reliable-and-scalable-databases-nll</link>
      <guid>https://forem.com/bharadwajshivam/postgresql-wal-the-backbone-of-reliable-and-scalable-databases-nll</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F40hyfot7vgfqzocmhd9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F40hyfot7vgfqzocmhd9w.png" alt=" " width="610" height="280"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Ever wondered how PostgreSQL survives crashes without losing a byte of data?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Databases can crash, servers can fail, but losing data is not an option. PostgreSQL achieves this reliability with its WAL (write-ahead log) mechanism.&lt;/p&gt;

&lt;p&gt;WAL records every change before it is applied to the database, ensuring consistency and enabling quick recovery. It is the log of changes made to the database cluster, replayed either as part of the recovery process when the database wasn’t shut down correctly (such as after a crash), or by standbys to replicate the database. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How WAL works, in 4 steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Step 1: When a change (INSERT, UPDATE, DELETE) occurs, PostgreSQL writes it to the WAL first.&lt;/li&gt;
&lt;li&gt;Step 2: Only after the WAL is safely written, the change is applied to the main database files.&lt;/li&gt;
&lt;li&gt;Step 3: If the database crashes, PostgreSQL can replay the WAL to recover all committed changes.&lt;/li&gt;
&lt;li&gt;Step 4: WAL ensures data consistency, durability, and quick crash recovery.&lt;/li&gt;
&lt;/ul&gt;
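&lt;p&gt;The ordering in these steps can be sketched in a few lines of Go. This is a deliberately minimal model of the write-ahead idea, not PostgreSQL internals: the slice stands in for pg_wal segments and the map for the table's data files.&lt;/p&gt;

```go
package main

import "fmt"

// Minimal write-ahead-log sketch illustrating the ordering PostgreSQL
// enforces: a change is appended to the log BEFORE it touches the data
// files. Toy model only; real WAL records are binary and fsync'd.

type record struct {
	op    string
	key   string
	value string
}

type db struct {
	wal  []record          // stand-in for pg_wal segments
	data map[string]string // stand-in for the table's data files
}

func (d *db) insert(key, value string) {
	// Step 1: write the intent to the WAL first.
	d.wal = append(d.wal, record{"INSERT", key, value})
	// Step 2: only then apply it to the data files.
	d.data[key] = value
}

// replay rebuilds the data files from the WAL after a "crash".
func replay(wal []record) map[string]string {
	data := map[string]string{}
	for _, r := range wal {
		if r.op == "INSERT" {
			data[r.key] = r.value
		}
	}
	return data
}

func main() {
	d := db{data: map[string]string{}}
	d.insert("1", "Shivam")
	d.insert("2", "PostgreSQL")
	// Simulate a crash that loses the data files but keeps the WAL:
	recovered := replay(d.wal)
	fmt.Println("recovered rows:", len(recovered))
}
```

&lt;p&gt;Because every change reaches the log before the data files, the log alone is enough to reconstruct all committed work.&lt;/p&gt;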

&lt;p&gt;&lt;strong&gt;Experimenting with WAL&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Check your PostgreSQL data directory&lt;br&gt;
&lt;code&gt;SHOW data_directory;&lt;/code&gt; &lt;br&gt;
I installed PostgreSQL with Homebrew, so it's &lt;strong&gt;/opt/homebrew/var/postgresql@14&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to the WAL directory&lt;br&gt;
&lt;code&gt;cd /opt/homebrew/var/postgresql@14/pg_wal&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9l48wj4ys6x0npknyq62.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9l48wj4ys6x0npknyq62.png" alt=" " width="712" height="68"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In pg_wal, files like 000000010000000000000001 are binary WAL segments that record all database changes before they are applied. The archive_status folder tracks which WAL segments are ready to be archived and which have already been archived.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Connect to your PostgreSQL instance and run simple queries to create a table and insert some data into it.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE wal_demo (
    id SERIAL PRIMARY KEY,
    name TEXT
);

INSERT INTO wal_demo(name) VALUES ('Shivam'), ('PostgreSQL'), ('WAL TEST');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Watch live WAL logs in a separate terminal: &lt;br&gt;
&lt;code&gt;watch -n 1 "pg_waldump 000000010000000000000001 | tail -n 20"&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7bhy9u16gg9eajjpycm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7bhy9u16gg9eajjpycm.png" alt=" " width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The content is binary and not human-readable, but every query you run is logged in WAL, ensuring no data is lost if the database crashes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Make WAL human-readable (Logical WAL)&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;To see WAL changes in a readable format, we need to adjust the configuration. From the data directory &lt;strong&gt;/opt/homebrew/var/postgresql@14&lt;/strong&gt;, open the config file with &lt;code&gt;nano postgresql.conf&lt;/code&gt;, uncomment &lt;strong&gt;wal_level&lt;/strong&gt;, set it to &lt;strong&gt;logical&lt;/strong&gt;, and restart PostgreSQL. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now we will create a logical replication slot, which is a way to stream database modifications in a readable format instead of the raw binary WAL. Run the query below:&lt;br&gt;
&lt;code&gt;SELECT * FROM pg_create_logical_replication_slot('live_slot', 'test_decoding');&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;pg_create_logical_replication_slot is a PostgreSQL function that creates a logical replication slot.&lt;/li&gt;
&lt;li&gt;live_slot is the name of the replication slot.&lt;/li&gt;
&lt;li&gt;test_decoding is built-in and outputs changes in a simple human-readable format.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Let's insert some data into the table we created earlier.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INSERT INTO wal_demo(name) VALUES ('Shivam Live Test5');
INSERT INTO wal_demo(name) VALUES ('Shivam Live Test6');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;We will now verify that our changes appear in readable form using the command &lt;code&gt;psql -U shivamkumar -d postgres -c "SELECT * FROM pg_logical_slot_get_changes('live_slot', NULL, NULL);"&lt;/code&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flan1vwmwettnoublbv83.png" alt=" " width="800" height="105"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You should now see the inserted rows in a readable format, confirming that WAL changes are being tracked and streamed successfully. Note that &lt;code&gt;pg_logical_slot_get_changes&lt;/code&gt; consumes the changes from the slot; use &lt;code&gt;pg_logical_slot_peek_changes&lt;/code&gt; to view them without consuming them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With WAL in action, PostgreSQL never misses a beat—your data is always safe and sound.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>database</category>
      <category>postgres</category>
      <category>software</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Creating custom kubernetes controller in Go</title>
      <dc:creator>Shivam Kumar</dc:creator>
      <pubDate>Sat, 24 Aug 2024 23:25:39 +0000</pubDate>
      <link>https://forem.com/bharadwajshivam/creating-custom-kubernetes-controller-in-go-4fa7</link>
      <guid>https://forem.com/bharadwajshivam/creating-custom-kubernetes-controller-in-go-4fa7</guid>
      <description>&lt;p&gt;Before implementing a custom controller in Go, let's first understand what a Kubernetes Controller and a Custom Resource Definition (CRD) are.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes Controller
&lt;/h2&gt;

&lt;p&gt;A Kubernetes Controller is a component of the control plane that continuously monitors the state of the Kubernetes cluster and takes action to ensure that the actual state of the cluster matches the desired state. It makes changes attempting to move the current state closer to the desired state.&lt;/p&gt;
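&lt;p&gt;The control-loop idea can be sketched in a few lines of Go: observe actual state, compare with desired state, act to close the gap. This is only an illustration; real controllers do this against the API server using informers and work queues, and the "replicas" here are just integers.&lt;/p&gt;

```go
package main

import "fmt"

// Bare-bones illustration of a reconcile loop. Real controllers
// watch API objects and issue create/update/delete calls; here the
// state is just an integer replica count, invented for illustration.

// reconcile moves the actual state toward the desired state and
// returns the new actual state.
func reconcile(desired, actual int) int {
	if actual == desired {
		return actual // nothing to do
	}
	if actual > desired {
		fmt.Println("scaling down by", actual-desired)
		return desired
	}
	fmt.Println("scaling up by", desired-actual)
	return desired
}

func main() {
	actual := 1
	desired := 3
	actual = reconcile(desired, actual) // controller closes the gap
	fmt.Println("actual replicas now:", actual)
}
```

&lt;p&gt;A real controller runs this comparison every time a watched object changes, which is exactly what the informer-based controller later in this post does.&lt;/p&gt;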

&lt;h2&gt;
  
  
  Custom Resource Definition (CRD)
&lt;/h2&gt;

&lt;p&gt;Custom Resource Definition (CRD) is a way to extend the Kubernetes API to create our own custom resources. These custom resources can represent any kind of object which we want to manage within our Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating our own Custom Resource Definition (CRD)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: my-crds.com.shivam.kumar
spec:
  group: com.shivam.kumar
  names:
    kind: my-crd
    plural: my-crds
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            apiVersion:
              type: string
            kind:
              type: string
            metadata:
              type: object
            spec:
              type: object
              properties:
                description:
                  type: string
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply this file with kubectl (e.g. &lt;code&gt;kubectl apply -f crd.yaml&lt;/code&gt;, if you saved it as crd.yaml), and when we list the available CRDs in the cluster we can see the one we created:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fy2dnnes8h35k66b8ak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fy2dnnes8h35k66b8ak.png" alt=" " width="800" height="55"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Custom Resource (CR)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: com.shivam.kumar/v1
kind: my-crd
metadata:
  name: my-custom-resource
spec:
  description: "My CRD instance"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply this file with kubectl as well (e.g. &lt;code&gt;kubectl apply -f cr.yaml&lt;/code&gt;, if you saved it as cr.yaml).&lt;/p&gt;

&lt;p&gt;Now let's move on to creating our own custom controller.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating custom kubernetes controller
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "context"
    "fmt"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    var kubeconfig string
    if home := homedir.HomeDir(); home != "" {
        kubeconfig = filepath.Join(home, ".kube", "config")
    }

    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        fmt.Println("Falling back to in-cluster config")
        config, err = rest.InClusterConfig()
        if err != nil {
            panic(err.Error())
        }
    }

    dynClient, err := dynamic.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }

    thefoothebar := schema.GroupVersionResource{Group: "com.shivam.kumar", Version: "v1", Resource: "my-crds"}

    informer := cache.NewSharedIndexInformer(
        &amp;amp;cache.ListWatch{
            ListFunc: func(options metav1.ListOptions) (runtime.Object, error) {
                return dynClient.Resource(thefoothebar).Namespace("").List(context.TODO(), options)
            },
            WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
                return dynClient.Resource(thefoothebar).Namespace("").Watch(context.TODO(), options)
            },
        },
        &amp;amp;unstructured.Unstructured{},
        0,
        cache.Indexers{},
    )

    informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc: func(obj interface{}) {
            fmt.Println("Add event detected:", obj)
        },
        UpdateFunc: func(oldObj, newObj interface{}) {
            fmt.Println("Update event detected:", newObj)
        },
        DeleteFunc: func(obj interface{}) {
            fmt.Println("Delete event detected:", obj)
        },
    })

    stop := make(chan struct{})
    defer close(stop)

    go informer.Run(stop)

    if !cache.WaitForCacheSync(stop, informer.HasSynced) {
        panic("Timeout waiting for cache sync")
    }

    fmt.Println("Custom Resource Controller started successfully")
    &amp;lt;-stop
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now when we build this Go Program and run it-&lt;br&gt;
&lt;code&gt;go build -o k8s-controller .&lt;/code&gt;&lt;br&gt;
&lt;code&gt;./k8s-controller&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, whenever we add, update, or delete the custom resource created above, we get live logs in our terminal. This means our controller is watching our custom resource.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>docker</category>
      <category>go</category>
    </item>
  </channel>
</rss>
