<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Manish Pillai</title>
    <description>The latest articles on Forem by Manish Pillai (@pillaimanish).</description>
    <link>https://forem.com/pillaimanish</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3039973%2Ff13d3eac-4c2c-488d-ba38-5425bf1aec78.png</url>
      <title>Forem: Manish Pillai</title>
      <link>https://forem.com/pillaimanish</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/pillaimanish"/>
    <language>en</language>
    <item>
      <title>Why Do We Need Ingress in Kubernetes?</title>
      <dc:creator>Manish Pillai</dc:creator>
      <pubDate>Sun, 05 Apr 2026 16:20:25 +0000</pubDate>
      <link>https://forem.com/pillaimanish/why-we-need-ingress-in-kubernetes-5a5i</link>
      <guid>https://forem.com/pillaimanish/why-we-need-ingress-in-kubernetes-5a5i</guid>
      <description>&lt;p&gt;When you want to expose your service/pod outside your cluster, you can use &lt;strong&gt;&lt;code&gt;NodePort&lt;/code&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;code&gt;LoadBalancer&lt;/code&gt;&lt;/strong&gt; services. They work for a "Hello World," but has some drawbacks.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problem with the "Standard" way
&lt;/h3&gt;

&lt;p&gt;Using a &lt;strong&gt;NodePort&lt;/strong&gt; means your users have to type &lt;code&gt;NodeIP:31054&lt;/code&gt;. It’s ugly, insecure, and if that specific Node goes down, your app is "dead" to the world.&lt;/p&gt;

&lt;p&gt;Using a &lt;strong&gt;LoadBalancer&lt;/strong&gt; service is better, but it has three big headaches:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Cost:&lt;/strong&gt; One Cloud LB per service = a massive bill.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;L4 vs L7:&lt;/strong&gt; Standard L4 LoadBalancers don’t understand URL paths like &lt;code&gt;/api&lt;/code&gt; and don’t provide advanced routing or centralized SSL handling.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Scaling:&lt;/strong&gt; Managing 50 different entry points for 50 microservices is a management nightmare.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  The Solution: Ingress
&lt;/h2&gt;

&lt;p&gt;Ingress is a single entry point that handles all the "traffic flow" work. &lt;/p&gt;

&lt;p&gt;While the &lt;strong&gt;Ingress Resource&lt;/strong&gt; is just a set of rules (YAML), the &lt;strong&gt;Ingress Controller&lt;/strong&gt; (like Nginx or Traefik) is the actual engine that moves the data.&lt;/p&gt;

&lt;h3&gt;
  
  
  How do we access Ingress from the Outside?
&lt;/h3&gt;

&lt;p&gt;The Ingress Controller itself is just another Pod in your cluster. To get traffic &lt;strong&gt;to&lt;/strong&gt; the Controller from the internet, we usually do one of two things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Option A:&lt;/strong&gt; Create &lt;strong&gt;one&lt;/strong&gt; single &lt;code&gt;Type: LoadBalancer&lt;/code&gt; service that points &lt;em&gt;only&lt;/em&gt; to the Ingress Controller. Now, one IP handles all your domains.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Option B:&lt;/strong&gt; Use a &lt;strong&gt;&lt;code&gt;NodePort&lt;/code&gt;&lt;/strong&gt; on the Ingress Controller and point your external DNS (like GoDaddy/Cloudflare) to your Node IPs.&lt;/li&gt;
&lt;/ul&gt;
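
&lt;p&gt;As a sketch of Option A (the namespace and Service names below assume the community &lt;code&gt;ingress-nginx&lt;/code&gt; install; yours may differ): the controller ships with exactly one &lt;code&gt;Type: LoadBalancer&lt;/code&gt; Service, and its external IP is the single address all your domains point to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# One LoadBalancer Service, pointing only at the Ingress Controller Pods
kubectl get svc -n ingress-nginx ingress-nginx-controller
# The EXTERNAL-IP column shows the one IP that fronts every Ingress rule
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;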




&lt;h2&gt;
  
  
  Practical Example: Routing &amp;amp; SSL
&lt;/h2&gt;

&lt;p&gt;Here is how you configure an Ingress to handle different services and &lt;strong&gt;SSL termination&lt;/strong&gt; in one place:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main-gateway&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/rewrite-target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
    &lt;span class="na"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;letsencrypt-prod"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;myapp.com&lt;/span&gt;
    &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp-tls-secret&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp.com&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/billing&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;billing-service&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main-web-service&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
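
&lt;p&gt;To try the manifest above (assuming it is saved as &lt;code&gt;ingress.yaml&lt;/code&gt;, both backend Services exist, and DNS for &lt;code&gt;myapp.com&lt;/code&gt; points at your controller):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f ingress.yaml
kubectl get ingress main-gateway     # shows the hosts and the assigned address
curl https://myapp.com/billing       # routed to billing-service:80
curl https://myapp.com/              # routed to main-web-service:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;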






&lt;h2&gt;
  
  
  Pros
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Path-Based Routing:&lt;/strong&gt; You can send &lt;code&gt;/billing&lt;/code&gt; to the Billing Pod and &lt;code&gt;/&lt;/code&gt; to the Web Pod using the same IP.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Centralized SSL:&lt;/strong&gt; You handle HTTPS certificates at the Ingress level instead of inside every individual app.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost Efficiency:&lt;/strong&gt; You only pay for &lt;strong&gt;one&lt;/strong&gt; LoadBalancer, no matter how many services you have.&lt;/li&gt;
&lt;/ul&gt;




&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Entry:&lt;/strong&gt; You point one external LoadBalancer to your &lt;strong&gt;Ingress Controller&lt;/strong&gt; to get traffic into the cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Logic:&lt;/strong&gt; The &lt;strong&gt;Ingress Rules&lt;/strong&gt; then decide which internal Service should actually handle the request.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;In short: &lt;code&gt;LoadBalancers&lt;/code&gt; get traffic TO the cluster; &lt;code&gt;Ingress&lt;/code&gt; tells traffic WHERE to go inside.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>devops</category>
      <category>networking</category>
    </item>
    <item>
      <title>How Kubernetes Resolves Service DNS</title>
      <dc:creator>Manish Pillai</dc:creator>
      <pubDate>Sun, 05 Apr 2026 10:44:34 +0000</pubDate>
      <link>https://forem.com/pillaimanish/how-kubernetes-resolves-service-dns-mn6</link>
      <guid>https://forem.com/pillaimanish/how-kubernetes-resolves-service-dns-mn6</guid>
      <description>&lt;p&gt;When you create a Service in Kubernetes, you get a stable ClusterIP. But let’s be honest—nobody wants to hardcode &lt;code&gt;10.96.0.10&lt;/code&gt; into their application code. We want to use names, like &lt;code&gt;auth-service&lt;/code&gt; or &lt;code&gt;payment-api&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;How does Kubernetes know that &lt;code&gt;payment-api&lt;/code&gt; actually means &lt;code&gt;10.96.0.10&lt;/code&gt;? That’s where &lt;strong&gt;CoreDNS&lt;/strong&gt; comes in.&lt;/p&gt;




&lt;h3&gt;
  
  
  What is CoreDNS?
&lt;/h3&gt;

&lt;p&gt;CoreDNS is a flexible, extensible DNS server that sits inside your cluster as a standard &lt;strong&gt;Deployment&lt;/strong&gt;. It usually has two or more replicas for high availability.&lt;/p&gt;

&lt;p&gt;Its job is simple: &lt;strong&gt;Watch and Cache.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It watches the &lt;strong&gt;Kubernetes API&lt;/strong&gt; for any new Services or EndpointSlices.&lt;/li&gt;
&lt;li&gt;Whenever you create/update a Service, CoreDNS updates its &lt;strong&gt;internal memory cache&lt;/strong&gt; almost instantly.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  How your Pods "Know" to ask CoreDNS
&lt;/h3&gt;

&lt;p&gt;When a Pod is created, the &lt;strong&gt;Kubelet&lt;/strong&gt; injects a specific configuration into the Pod's &lt;code&gt;/etc/resolv.conf&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;If you exec into any Pod and run &lt;code&gt;cat /etc/resolv.conf&lt;/code&gt;, you’ll see something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nameserver 10.96.0.10  # This is the IP of the CoreDNS Service
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because of this file, every time your app resolves a hostname, the query goes first to &lt;strong&gt;CoreDNS&lt;/strong&gt; inside the cluster, not out to the public internet.&lt;/p&gt;
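
&lt;p&gt;You can watch this resolution from inside any Pod (the Pod and Service names here are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -it my-pod -- nslookup payment-api
# The search domains expand the short name to
# payment-api.default.svc.cluster.local, and CoreDNS answers with the ClusterIP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;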




&lt;h3&gt;
  
  
  Scenarios
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Scenario A: Internal Query&lt;/strong&gt;&lt;br&gt;
You call &lt;code&gt;http://my-service&lt;/code&gt;. CoreDNS looks at its cache, finds the ClusterIP, and hands it back. Done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario B: External Query (e.g., &lt;a href="https://www.google.com" rel="noopener noreferrer"&gt;www.google.com&lt;/a&gt;)&lt;/strong&gt;&lt;br&gt;
CoreDNS checks its cache and says, "I don't know who that is." Instead of giving up, it forwards the request to the &lt;strong&gt;upstream DNS&lt;/strong&gt; (the default DNS configured on the worker node).&lt;/p&gt;


&lt;h3&gt;
  
  
  The Corefile (Configuration)
&lt;/h3&gt;

&lt;p&gt;The behavior of CoreDNS is defined in a &lt;strong&gt;ConfigMap&lt;/strong&gt; (typically named &lt;code&gt;coredns&lt;/code&gt; in &lt;code&gt;kube-system&lt;/code&gt;) whose data key is the &lt;code&gt;Corefile&lt;/code&gt;. It’s mounted directly into the CoreDNS Pods and looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.:53 {
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key parts of this config:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubernetes:&lt;/strong&gt; This plugin answers queries for Kubernetes Services and Pods in the &lt;code&gt;cluster.local&lt;/code&gt; zone, using the data CoreDNS watches from the API.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;forward . /etc/resolv.conf:&lt;/strong&gt; This is the "What if?" logic. If it's not a K8s service, send it to the node's DNS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;cache:&lt;/strong&gt; Keeps things fast so we don't hit the API server for every single request.&lt;/li&gt;
&lt;/ul&gt;
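
&lt;p&gt;You can inspect this configuration in your own cluster; in most setups the ConfigMap lives in &lt;code&gt;kube-system&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n kube-system get configmap coredns -o yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;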




&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Kubelet&lt;/strong&gt; : It writes the CoreDNS IP into every Pod's /etc/resolv.conf so they know where to ask.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CoreDNS&lt;/strong&gt; : It watches the K8s API to map Service names to IPs and forwards everything else to the Node's DNS.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>cloud</category>
      <category>networking</category>
    </item>
    <item>
      <title>How Kubernetes Hands Out IPs</title>
      <dc:creator>Manish Pillai</dc:creator>
      <pubDate>Sat, 04 Apr 2026 05:59:48 +0000</pubDate>
      <link>https://forem.com/pillaimanish/how-kubernetes-actually-hands-out-ips-2bbe</link>
      <guid>https://forem.com/pillaimanish/how-kubernetes-actually-hands-out-ips-2bbe</guid>
      <description>&lt;p&gt;If you've ever looked at a Kubernetes cluster and wondered why Pods and Services get the IPs they do, you’ve probably bumped into the term &lt;strong&gt;CIDR&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;It sounds intimidating, but it’s actually just a clever way to keep the "address book" of your cluster organized. Let’s break it down like humans.&lt;/p&gt;




&lt;h2&gt;
  
  
  First off: What is CIDR?
&lt;/h2&gt;

&lt;p&gt;Think of CIDR as a way to define a &lt;strong&gt;territory&lt;/strong&gt; of IP addresses without having to list every single one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: &lt;code&gt;10.244.0.0/16&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
Instead of saying, "I have 65,536 addresses from &lt;code&gt;10.244.0.0&lt;/code&gt; to &lt;code&gt;10.244.255.255&lt;/code&gt;," we just use that one little string.&lt;/p&gt;

&lt;p&gt;The number after the slash tells you how much of the address is "locked" (the network) and how much is "free real estate" (the hosts).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;/16&lt;/strong&gt; = A massive range (Lots of room for nodes).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/24&lt;/strong&gt; = A smaller chunk (Perfect for a single node).&lt;/li&gt;
&lt;/ul&gt;
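
&lt;p&gt;The arithmetic behind those slash numbers is easy to check with Python’s standard &lt;code&gt;ipaddress&lt;/code&gt; module (a quick illustration, nothing Kubernetes-specific):&lt;/p&gt;

```python
import ipaddress

# The /16 "territory": 2 ** (32 - 16) = 65,536 addresses
cluster = ipaddress.ip_network("10.244.0.0/16")
print(cluster.num_addresses)   # 65536

# Carving the /16 into per-node /24 chunks yields 256 slices of 256 addresses
node_slices = list(cluster.subnets(new_prefix=24))
print(len(node_slices))        # 256
print(node_slices[1])          # 10.244.1.0/24
```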




&lt;h2&gt;
  
  
  Why this matters in Kubernetes
&lt;/h2&gt;

&lt;p&gt;When you spin up a cluster, it gets a big "bucket" of IPs (a &lt;strong&gt;/16&lt;/strong&gt;). But a cluster has multiple nodes. To keep things from getting messy, Kubernetes splits that big bucket into smaller bowls (&lt;strong&gt;/24&lt;/strong&gt;) for each node.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Hierarchy:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cluster Level:&lt;/strong&gt; &lt;code&gt;10.244.0.0/16&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Node 1:&lt;/strong&gt; &lt;code&gt;10.244.1.0/24&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node 2:&lt;/strong&gt; &lt;code&gt;10.244.2.0/24&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Two Types of CIDR
&lt;/h2&gt;

&lt;p&gt;A healthy cluster usually manages two separate "buckets":&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Pod CIDR:&lt;/strong&gt; For your actual containers.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Service CIDR:&lt;/strong&gt; For the stable entry points (Services).&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  How Pod IPs are assigned
&lt;/h2&gt;

&lt;p&gt;When a node joins your cluster, the &lt;strong&gt;kube-controller-manager&lt;/strong&gt; says, "Welcome! Here is your personal slice of the IP pie."&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;The Split:&lt;/strong&gt; The controller assigns a small range (like a &lt;code&gt;/24&lt;/code&gt;) to that specific node.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;The Creation:&lt;/strong&gt; You deploy a Pod.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;The CNI (Calico/Flannel):&lt;/strong&gt; The CNI plugin looks at the node's assigned range and grabs a free IP.

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Result:&lt;/em&gt; &lt;code&gt;10.244.1.5&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Because every node has its own unique range, Pod IPs &lt;strong&gt;never&lt;/strong&gt; overlap. No collisions.&lt;/p&gt;
&lt;/blockquote&gt;
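
&lt;p&gt;You can see which slice a node received (the node name here is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get node node-1 -o jsonpath='{.spec.podCIDR}'
# e.g. 10.244.1.0/24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;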




&lt;h2&gt;
  
  
  How Service IPs are assigned
&lt;/h2&gt;

&lt;p&gt;Service IPs are a bit different. They don't care about nodes because &lt;strong&gt;Services are virtual.&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;The Request:&lt;/strong&gt; You create a Service.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;The API Server:&lt;/strong&gt; The &lt;code&gt;kube-apiserver&lt;/code&gt; looks at the global &lt;strong&gt;Service CIDR&lt;/strong&gt; bucket.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;The Assignment:&lt;/strong&gt; It picks an IP (e.g., &lt;code&gt;10.96.0.10&lt;/code&gt;) and stamps it on the Service.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;You won't find this IP on any actual hardware interface. It lives purely in the cluster's iptables/IPVS.&lt;/p&gt;
&lt;/blockquote&gt;
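
&lt;p&gt;Where does the Service bucket come from? It’s a startup flag on the API server. On a kubeadm-built cluster (an assumption about your setup) you can read it from the static Pod manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;grep service-cluster-ip-range /etc/kubernetes/manifests/kube-apiserver.yaml
# e.g. --service-cluster-ip-range=10.96.0.0/12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;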




&lt;h2&gt;
  
  
  The "Why": Why different components?
&lt;/h2&gt;

&lt;p&gt;This used to confuse me. Why does the &lt;strong&gt;Controller Manager&lt;/strong&gt; handle Pods, but the &lt;strong&gt;API Server&lt;/strong&gt; handles Services?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pod CIDRs are dynamic:&lt;/strong&gt; Nodes come and go. We need a "&lt;code&gt;Controller&lt;/code&gt;" to constantly watch the cluster and hand out ranges to new nodes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service CIDRs are static:&lt;/strong&gt; It’s just a flat pool of IPs. The API Server can just grab the next available one from the list while it's processing your YAML.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pod IP:&lt;/strong&gt; Assigned by &lt;strong&gt;CNI&lt;/strong&gt; from the &lt;strong&gt;Node’s&lt;/strong&gt; specific slice.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service IP:&lt;/strong&gt; Assigned by the &lt;strong&gt;API Server&lt;/strong&gt; from the &lt;strong&gt;Cluster’s&lt;/strong&gt; global pool.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>programming</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How Kubernetes Maps Service IP to Pods</title>
      <dc:creator>Manish Pillai</dc:creator>
      <pubDate>Fri, 03 Apr 2026 07:05:44 +0000</pubDate>
      <link>https://forem.com/pillaimanish/how-kubernetes-services-actually-route-traffic-2bja</link>
      <guid>https://forem.com/pillaimanish/how-kubernetes-services-actually-route-traffic-2bja</guid>
      <description>&lt;p&gt;When you first learn Kubernetes, you hear: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“Pods talk to each other using Services.”&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;It sounds simple, but the Service IP doesn't actually exist on any physical interface. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: Pods are Ephemeral
&lt;/h2&gt;

&lt;p&gt;Pods are temporary. If you delete a Pod and it is recreated, it gets a &lt;strong&gt;new IP address&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend calls Backend via Pod IP:&lt;/strong&gt; &lt;code&gt;10.244.1.5&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend restarts:&lt;/strong&gt; The old IP no longer exists.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Result:&lt;/strong&gt; Connection failure. ❌&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Solution: Services
&lt;/h2&gt;

&lt;p&gt;Kubernetes provides &lt;strong&gt;Services&lt;/strong&gt; to act as a stable entry point. They offer:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Stable IP&lt;/strong&gt; (ClusterIP)&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;DNS name&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Load balancing&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Instead of calling a specific Pod, the frontend calls the &lt;code&gt;backend-service&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Catch: The "Ghost" IP
&lt;/h2&gt;

&lt;p&gt;A Service IP (e.g., &lt;code&gt;10.96.0.10&lt;/code&gt;) is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Not assigned to any Pod.&lt;/li&gt;
&lt;li&gt;Not owned by any Node.&lt;/li&gt;
&lt;li&gt;Not visible in &lt;code&gt;ifconfig&lt;/code&gt; or &lt;code&gt;ip addr&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How it Works: Data vs. Routing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Finding the Pods
&lt;/h3&gt;

&lt;p&gt;When you define a Service with a &lt;code&gt;selector&lt;/code&gt;, Kubernetes creates an &lt;strong&gt;EndpointSlice&lt;/strong&gt;. This is essentially a list of the actual Pod IPs behind that Service.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This is just data stored in the API Server. It doesn't route anything yet.&lt;/p&gt;
&lt;/blockquote&gt;
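
&lt;p&gt;You can list these objects directly (the Service name here is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get endpointslices -l kubernetes.io/service-name=backend-service
# Shows the current Pod IPs backing backend-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;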

&lt;h3&gt;
  
  
  2. &lt;code&gt;kube-proxy&lt;/code&gt; Steps In
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;kube-proxy&lt;/code&gt; runs on every node as a &lt;strong&gt;DaemonSet&lt;/strong&gt;. Its job is to watch the API Server for new Services and Endpoints and translate them into local networking rules.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Creating the Rules (iptables)
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;kube-proxy&lt;/code&gt; uses &lt;strong&gt;iptables&lt;/strong&gt; (a Linux kernel feature) to intercept packets. It sets up a chain of rules:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Match:&lt;/strong&gt; "Is this packet going to Service IP &lt;code&gt;10.96.0.10&lt;/code&gt;?"&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Select:&lt;/strong&gt; "Pick one of the available Pod IPs from the list."&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Rewrite (DNAT):&lt;/strong&gt; Change the destination IP from the Service IP to the chosen Pod IP.&lt;/li&gt;
&lt;/ol&gt;
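
&lt;p&gt;On a node you can inspect that rule chain yourself (iptables mode, root access assumed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Entry chain that kube-proxy installs for all Services
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.10
# Each Service has a KUBE-SVC-* chain that picks a KUBE-SEP-* (one per Pod IP)
# and performs the DNAT rewrite there
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;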

&lt;h2&gt;
  
  
  The Request Flow
&lt;/h2&gt;

&lt;p&gt;When a request is sent:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Client/Pod&lt;/strong&gt; sends a packet to the &lt;strong&gt;Service IP&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Linux Kernel&lt;/strong&gt; checks the &lt;strong&gt;iptables&lt;/strong&gt; rules (installed by &lt;code&gt;kube-proxy&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;DNAT&lt;/strong&gt; happens: The destination is rewritten to a &lt;strong&gt;Pod IP&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; The packet is routed to the actual &lt;strong&gt;Pod&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;code&gt;kube-proxy&lt;/code&gt; Modes
&lt;/h2&gt;

&lt;p&gt;While the concept remains the same, the efficiency of how rules are matched varies:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;th&gt;Performance&lt;/th&gt;
&lt;th&gt;Scaling&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;iptables&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Linux rule list&lt;/td&gt;
&lt;td&gt;Standard&lt;/td&gt;
&lt;td&gt;Slower as you add thousands of services&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;IPVS&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Linux Load Balancer&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;Very fast; uses hash tables for lookups&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Userspace&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;kube-proxy&lt;/code&gt; process&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Slow (Old, no longer used)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Key Takeaway
&lt;/h2&gt;

&lt;p&gt;The name &lt;strong&gt;&lt;code&gt;kube-proxy&lt;/code&gt;&lt;/strong&gt; is confusing because, in modern Kubernetes, it is &lt;strong&gt;not&lt;/strong&gt; a proxy. It doesn't sit in the middle of your traffic. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;kube-proxy&lt;/code&gt;&lt;/strong&gt; is the control-plane side: it runs on every node and writes the local networking rules that enable Service discovery and load balancing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Linux Kernel&lt;/strong&gt; is the &lt;strong&gt;Data Plane&lt;/strong&gt; that does the actual heavy lifting of routing packets (using &lt;code&gt;iptables&lt;/code&gt; or &lt;code&gt;IPVS&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Remember:&lt;/strong&gt; &lt;code&gt;kube-proxy&lt;/code&gt; sets things up; the kernel does the real work.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>programming</category>
      <category>devops</category>
      <category>networking</category>
    </item>
    <item>
      <title>Understanding HOTP and TOTP in Two-Factor Authentication</title>
      <dc:creator>Manish Pillai</dc:creator>
      <pubDate>Sun, 28 Sep 2025 17:57:37 +0000</pubDate>
      <link>https://forem.com/pillaimanish/understanding-hotp-and-totp-in-two-factor-authentication-22k3</link>
      <guid>https://forem.com/pillaimanish/understanding-hotp-and-totp-in-two-factor-authentication-22k3</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi70ut3m531qct8ndlt6q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi70ut3m531qct8ndlt6q.png" alt="Intro Image" width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In most websites today, security doesn’t stop at just a username and password. To add an extra layer of protection against phishing and unauthorized access, &lt;strong&gt;Two-Factor Authentication (2FA)&lt;/strong&gt; is commonly used.&lt;/p&gt;

&lt;p&gt;There are different ways to implement 2FA:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SMS&lt;/strong&gt; OTPs sent to your mobile&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Push notifications&lt;/strong&gt; (e.g., GitHub’s mobile approval)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authenticator apps&lt;/strong&gt; like Google Authenticator, Microsoft Authenticator, or Authy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Among these, authenticator apps are the most widely used. They generate one-time passwords (OTPs) that refresh either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On demand (HOTP – HMAC-based OTP), or&lt;/li&gt;
&lt;li&gt;After a fixed period (TOTP – Time-based OTP).&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/T0fy5omBbKc"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;




&lt;h2&gt;
  
  
  Example: QR Code + Authenticator App
&lt;/h2&gt;

&lt;p&gt;When you enable 2FA on a website, you’re usually presented with a QR code to scan in your authenticator app:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcmcqzip8dv3u75hkpfrv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcmcqzip8dv3u75hkpfrv.png" alt="Image QR Code &amp;amp; Authentication App example" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The QR code encodes a special URI in the following format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;otpauth&lt;/span&gt;&lt;span class="ss"&gt;:/&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;totp&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="no"&gt;Issuer&lt;/span&gt;&lt;span class="ss"&gt;:Account?&lt;/span&gt;&lt;span class="n"&gt;secret&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="no"&gt;BASE32SECRET&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;issuer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="no"&gt;IssuerName&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;algorithm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="no"&gt;SHA1&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;digits&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;period&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;otpauth://totp&lt;/strong&gt; → the &lt;code&gt;otpauth&lt;/code&gt; scheme plus the OTP type (&lt;code&gt;totp&lt;/code&gt; here; could also be &lt;code&gt;hotp&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Issuer:Account&lt;/strong&gt; → helps you identify which service/account the OTP belongs to&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;secret&lt;/strong&gt; → Base32 encoded secret key shared between server and your app&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;algorithm&lt;/strong&gt; → hashing algorithm (usually &lt;code&gt;SHA-1&lt;/code&gt;, sometimes &lt;code&gt;SHA-256/512&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;digits&lt;/strong&gt; → OTP length (6 or 8)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;period&lt;/strong&gt; → validity window in seconds (default: &lt;code&gt;30s&lt;/code&gt; for TOTP)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your authenticator app extracts these details and starts generating OTPs automatically.&lt;/p&gt;




&lt;h2&gt;
  
  
  HOTP – HMAC-based One-Time Password
&lt;/h2&gt;

&lt;p&gt;HOTP is counter-based. Both the server and your authenticator app maintain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A secret key (shared during setup)&lt;/li&gt;
&lt;li&gt;A counter (incremented each time an OTP is generated)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Flow diagram:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0va5akhj4kvkndmpt8t9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0va5akhj4kvkndmpt8t9.png" alt="HOTP - Flow Diagram" width="800" height="1322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Combine the secret key and counter.&lt;/li&gt;
&lt;li&gt;Hash them using the chosen algorithm (usually SHA-1), producing a &lt;strong&gt;20-byte HMAC code&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Apply dynamic truncation to extract a &lt;strong&gt;6–8 digit OTP&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The generated OTP is sent to the server, which independently performs the same calculation to verify it.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;Note: Each byte is 8 bits, so a 20-byte HMAC gives a 160-bit code.&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
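&lt;p&gt;The four steps above can be sketched in Go using only the standard library (6 digits assumed; the result matches the RFC 4226 test vectors):&lt;/p&gt;

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/binary"
	"fmt"
)

// hotp implements RFC 4226 for a raw secret and counter, returning a 6-digit OTP.
func hotp(secret []byte, counter uint64) int {
	// Steps 1-2: HMAC-SHA-1 over the 8-byte big-endian counter -> 20-byte MAC.
	msg := make([]byte, 8)
	binary.BigEndian.PutUint64(msg, counter)
	mac := hmac.New(sha1.New, secret)
	mac.Write(msg)
	sum := mac.Sum(nil)

	// Step 3: dynamic truncation (explained in its own section below).
	offset := sum[len(sum)-1] & 0x0f
	code := binary.BigEndian.Uint32(sum[offset:offset+4]) & 0x7fffffff

	// Step 4: keep the last 6 digits.
	return int(code % 1000000)
}

func main() {
	// RFC 4226 Appendix D test secret: the ASCII string "12345678901234567890".
	secret := []byte("12345678901234567890")
	fmt.Println(hotp(secret, 0)) // 755224 per the RFC test vectors
}
```

&lt;p&gt;On the server, verification re-runs the same computation for the expected counter, often with a small look-ahead window, since the client may have generated codes that were never submitted.&lt;/p&gt;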




&lt;h2&gt;
  
  
  TOTP – Time-based One-Time Password
&lt;/h2&gt;

&lt;p&gt;TOTP is time-based. Instead of a counter, it uses the current timestamp:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The server and client compute a time step, e.g. &lt;code&gt;floor(currentUnixTime / period)&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The time step is treated as the counter in the HOTP algorithm.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Flow diagram:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9urs7v1zhljufhziq8d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9urs7v1zhljufhziq8d.png" alt="TOTP - Flow Diagram" width="800" height="1094"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Compute the time step &lt;strong&gt;C = floor(currentUnixTime / period)&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Combine the &lt;strong&gt;secret key&lt;/strong&gt; and &lt;strong&gt;time step&lt;/strong&gt;, then &lt;strong&gt;hash&lt;/strong&gt; with HMAC.&lt;/li&gt;
&lt;li&gt;Apply &lt;strong&gt;dynamic truncation&lt;/strong&gt; to get the OTP.&lt;/li&gt;
&lt;li&gt;OTP is &lt;strong&gt;valid&lt;/strong&gt; for the selected &lt;strong&gt;time period&lt;/strong&gt; (e.g., 30 seconds).
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;This allows both server and client to generate the same OTP independently, as long as their clocks are synchronized.&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;




&lt;h2&gt;
  
  
  Dynamic Truncation
&lt;/h2&gt;

&lt;p&gt;Dynamic truncation is a &lt;strong&gt;common step in both HOTP and TOTP&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1eymj628ntkwsiw4qp0q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1eymj628ntkwsiw4qp0q.png" alt="Dynamic Truncation" width="800" height="2000"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Take the last byte of the HMAC result and perform a bitwise AND with 0x0F (decimal 15) to calculate an offset.&lt;/li&gt;
&lt;li&gt;Select 4 bytes starting from the offset.&lt;/li&gt;
&lt;li&gt;Convert these 4 bytes to a 31-bit integer (ignore the sign bit).&lt;/li&gt;
&lt;li&gt;Apply modulo 10^d (where d is the number of digits, usually 6 or 8) to get the final OTP.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;This ensures the OTP is a fixed-length numeric code, even though the underlying HMAC is 20 bytes (for SHA-1).&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;




&lt;h2&gt;
  
  
  Differences: HOTP vs TOTP
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;HOTP&lt;/th&gt;
&lt;th&gt;TOTP&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Counter/Time&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Incremented counter&lt;/td&gt;
&lt;td&gt;Time step (current time / period)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Validity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Valid until used&lt;/td&gt;
&lt;td&gt;Valid only for the time period&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Sync Requirement&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Counter sync required&lt;/td&gt;
&lt;td&gt;Clock sync required&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Use Case&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Hardware tokens, banking apps&lt;/td&gt;
&lt;td&gt;Mobile authenticator apps, web apps&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Additional Notes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use at least HMAC-SHA-1 (the RFC default) on the server side; prefer SHA-256 or SHA-512 where client apps support them.&lt;/li&gt;
&lt;li&gt;For TOTP, if the client and server clocks drift, the OTP may fail. Many servers therefore accept a window of 1–2 time steps to tolerate minor clock differences.&lt;/li&gt;
&lt;li&gt;HOTP is mostly used for hardware tokens, while TOTP is widely used in mobile apps.&lt;/li&gt;
&lt;li&gt;Avoid SMS-based OTPs where possible due to SIM swap attacks.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Check out &lt;a href="https://docs.google.com/presentation/d/1ZNxZSkTjuvqBV5M_7Xvza8Xp5Lw5Cz7izltNe8jUXOI/edit?usp=sharing" rel="noopener noreferrer"&gt;slide&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check out &lt;a href="https://www.youtube.com/watch?v=T0fy5omBbKc" rel="noopener noreferrer"&gt;video&lt;/a&gt;&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>programming</category>
      <category>security</category>
      <category>learning</category>
    </item>
    <item>
      <title>SSE - Server Sent Event</title>
      <dc:creator>Manish Pillai</dc:creator>
      <pubDate>Wed, 24 Sep 2025 03:36:03 +0000</pubDate>
      <link>https://forem.com/pillaimanish/sse-server-sent-event-4ca7</link>
      <guid>https://forem.com/pillaimanish/sse-server-sent-event-4ca7</guid>
      <description>&lt;p&gt;If you’ve ever wanted your server to &lt;strong&gt;push live updates&lt;/strong&gt; to a browser without the client constantly polling, &lt;strong&gt;Server-Sent Events (SSE)&lt;/strong&gt; are a simple and efficient solution. In this post, we’ll explore what SSE is, how to implement it in &lt;strong&gt;Golang&lt;/strong&gt;, and why the client side requires &lt;strong&gt;EventSource&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I’ve also made a &lt;strong&gt;short video demo&lt;/strong&gt; to show it in action, which you can check out below.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What is SSE?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SSE&lt;/strong&gt; is a standard that allows a server to &lt;strong&gt;send continuous updates over a single HTTP connection&lt;/strong&gt; to the client. Unlike WebSockets, SSE is unidirectional—the server sends data, but the client cannot push messages back on the same connection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Common use cases:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Live dashboards and monitoring systems.&lt;/li&gt;
&lt;li&gt;Chat notifications or social feed updates.&lt;/li&gt;
&lt;li&gt;Real-time logs or stock price tickers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Creating an SSE Server in Go&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here’s how to implement a basic SSE server using Go:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "fmt"
    "net/http"
    "time"

    "github.com/gorilla/mux"
)

func main() {
    router := mux.NewRouter()
    server := http.Server{
        Addr:    ":8080",
        Handler: router,
    }

    router.HandleFunc("/sse", handleEvents).Methods("GET")

    fmt.Println("SSE server running on :8080")
    err := server.ListenAndServe()
    if err != nil {
        panic(err)
    }
}

func handleEvents(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Access-Control-Allow-Origin", "*")
    w.Header().Set("Content-Type", "text/event-stream")
    w.Header().Set("Cache-Control", "no-cache")
    w.Header().Set("Connection", "keep-alive")

    ticker := time.NewTicker(2 * time.Second)
    defer ticker.Stop()

    for {
        select {
        case &amp;lt;-ticker.C:
            fmt.Fprintf(w, "data: %s\n\n", time.Now().String())
            if f, ok := w.(http.Flusher); ok { f.Flush() }
        case &amp;lt;-r.Context().Done():
            return
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key points:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;Content-Type: text/event-stream&lt;/code&gt; header is mandatory.&lt;/li&gt;
&lt;li&gt;Each message must start with &lt;code&gt;data: &lt;/code&gt; and end with a &lt;strong&gt;double newline &lt;code&gt;\n\n&lt;/code&gt;&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;http.Flusher&lt;/code&gt; to push data immediately without buffering.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;The Client Side – EventSource&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SSE is designed for &lt;strong&gt;browsers&lt;/strong&gt;, which support a built-in &lt;strong&gt;EventSource&lt;/strong&gt; API. This makes it very simple to receive server events:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;script&amp;gt;
const es = new EventSource("http://localhost:8080/sse");

es.onopen = () =&amp;gt; console.log("SSE connected");
es.onmessage = e =&amp;gt; console.log("message:", e.data);
es.onerror = e =&amp;gt; console.error("SSE error", e);
&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;EventSource automatically &lt;strong&gt;reconnects&lt;/strong&gt; if the connection drops.&lt;/li&gt;
&lt;li&gt;It parses &lt;strong&gt;data:&lt;/strong&gt; &lt;strong&gt;messages&lt;/strong&gt; sent by the server.&lt;/li&gt;
&lt;li&gt;This is why the client usually needs a browser or JS runtime that supports EventSource.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;SSE vs WebSockets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While SSE is perfect for server → client streaming, for bidirectional communication where the client also sends messages to the server in real time, WebSockets are more suitable.&lt;/p&gt;

&lt;p&gt;SSE:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unidirectional (server → client)&lt;/li&gt;
&lt;li&gt;Works over HTTP/HTTPS&lt;/li&gt;
&lt;li&gt;Simpler to implement&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;WebSockets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bidirectional&lt;/li&gt;
&lt;li&gt;More complex but flexible&lt;/li&gt;
&lt;li&gt;Great for chat apps, multiplayer games, and live collaboration tools&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Quick Note on Streaming HTTP&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SSE is a type of streaming HTTP, where the server sends data in chunks without closing the connection. Streaming HTTP itself is not limited to browsers and can be used in server-to-server communication as well.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Check Out the Video Demo&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I’ve created a short video demonstrating how to implement SSE in Go, including the server code and how EventSource handles the data in a browser.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/tzNXBnMRGDc"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
SSE is a lightweight and efficient way to push live updates from servers to browsers. It’s simpler than WebSockets when you only need server-to-client communication, making it ideal for dashboards, logs, and notifications.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>go</category>
      <category>learning</category>
    </item>
    <item>
      <title>No Code Admin Panel Platform</title>
      <dc:creator>Manish Pillai</dc:creator>
      <pubDate>Mon, 22 Sep 2025 18:34:52 +0000</pubDate>
      <link>https://forem.com/pillaimanish/no-code-admin-panel-platform-5ac1</link>
      <guid>https://forem.com/pillaimanish/no-code-admin-panel-platform-5ac1</guid>
      <description>&lt;p&gt;𝐍𝐨-𝐂𝐨𝐝𝐞 𝐀𝐝𝐦𝐢𝐧 𝐏𝐚𝐧𝐞𝐥 𝐏𝐥𝐚𝐭𝐟𝐨𝐫𝐦&lt;/p&gt;

&lt;p&gt;𝐖𝐡𝐚𝐭 𝐢𝐭 𝐝𝐨𝐞𝐬: Automates the creation of admin panels using a simple drag-and-drop interface, eliminating the need to build them from scratch for each client.&lt;/p&gt;

&lt;p&gt;𝐁𝐚𝐜𝐤𝐞𝐧𝐝: Built entirely by me in GoLang.&lt;/p&gt;

&lt;p&gt;𝐅𝐫𝐨𝐧𝐭𝐞𝐧𝐝: The UI was generated with the help of AI.&lt;/p&gt;

&lt;p&gt;𝐊𝐞𝐲 𝐅𝐞𝐚𝐭𝐮𝐫𝐞𝐬:&lt;br&gt;
⇨ Creates tables with primary and foreign key relations.&lt;br&gt;
⇨ Manages users and their access requests.&lt;br&gt;
⇨ Supports CRUD (Create, Read, Update, Delete) permissions on data.&lt;/p&gt;

&lt;p&gt;𝐅𝐮𝐭𝐮𝐫𝐞 𝐏𝐥𝐚𝐧𝐬: I'm working on adding customizable role-based permissions and improving the UI. I'd be happy to collaborate with anyone on the frontend.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/TqyDa30FaWg"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>automation</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Key Management Service in Kubernetes — Part 2</title>
      <dc:creator>Manish Pillai</dc:creator>
      <pubDate>Sun, 01 Jun 2025 17:55:26 +0000</pubDate>
      <link>https://forem.com/pillaimanish/key-management-service-in-kubernetes-part-2-2o9n</link>
      <guid>https://forem.com/pillaimanish/key-management-service-in-kubernetes-part-2-2o9n</guid>
      <description>&lt;p&gt;Welcome back to our series on Key Management Service (KMS) in Kubernetes! In &lt;a href="https://dev.to/pillaimanish/key-management-service-in-kubernetes-part-1-2apa"&gt;Part 1&lt;/a&gt;, we laid the groundwork; now, in Part 2, we're diving into the critical concept of encryption at rest.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Exactly is Encryption at Rest?
&lt;/h2&gt;

&lt;p&gt;Simply put, encryption at rest in Kubernetes refers to how the &lt;strong&gt;API server encrypts data before storing it in etcd&lt;/strong&gt;. Think of etcd as the brain of your Kubernetes cluster - it's where all your cluster's configuration data, state, and secrets live.&lt;/p&gt;

&lt;p&gt;By default, the Kubernetes API server stores resources in etcd as &lt;strong&gt;plain text&lt;/strong&gt;. This means if someone gains unauthorized access to your etcd, they can read all your sensitive data, including secrets, without any effort. This is a significant security risk.&lt;/p&gt;

&lt;p&gt;While encryption at rest applies to any Kubernetes resource, in this series, we'll continue to focus on &lt;strong&gt;Secrets&lt;/strong&gt; due to their inherently sensitive nature.&lt;/p&gt;

&lt;p&gt;The good news is Kubernetes provides a way to encrypt this data before it hits etcd. This is primarily done through the &lt;code&gt;--encryption-provider-config&lt;/code&gt; argument passed to the &lt;code&gt;kube-apiserver&lt;/code&gt; process, which points to a configuration file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing EncryptionConfiguration
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, the encryption at rest behavior is configured using an &lt;code&gt;EncryptionConfiguration&lt;/code&gt; resource. This powerful configuration allows you to specify which resources should be encrypted and using which encryption providers.&lt;/p&gt;

&lt;p&gt;Let's look at an example configuration to understand its structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# CAUTION: This is an example configuration and should NOT be used for production clusters without careful consideration.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
      - configmaps
      - pandas.awesome.bears.example # An example custom resource API
    providers:
      # The 'identity' provider stores resources as plain text (no encryption).
      # If listed first, it means data is NOT encrypted.
      - identity: {}
      - aesgcm:
          keys:
            - name: key1
              secret: c2VjcmV0IGlzIHNlY3VyZQ== # Base64 encoded key
            - name: key2
              secret: dGhpcyBpcyBwYXNzd29yZA== # Base64 encoded key
      - aescbc:
          keys:
            - name: key1
              secret: c2VjcmV0IGlzIHNlY3VyZQ== # Base64 encoded key
            - name: key2
              secret: dGhpcyBpcyBwYXNzd29yZA== # Base64 encoded key
      - secretbox:
          keys:
            - name: key1
              secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY= # Base64 encoded key
  - resources:
      - events
    providers:
      - identity: {} # Do not encrypt Events
  - resources:
      - '*.apps' # Wildcard match (Kubernetes 1.27+)
    providers:
      - aescbc:
          keys:
          - name: key2
            secret: c2VjcmV0IGlzIHNlY3VyZSwgb3IgaXMgYXQ/Cg==
  - resources:
      - '*.*' # Wildcard match (Kubernetes 1.27+)
    providers:
      - aescbc:
          keys:
          - name: key3
            secret: c2VjcmV0IGlzIHNlY3VyZSwgSSB0aGluaw==
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key Takeaways from the &lt;code&gt;EncryptionConfiguration&lt;/code&gt;:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;resources&lt;/code&gt;&lt;/strong&gt;: This array specifies which Kubernetes resources you want to apply encryption to. You can target specific resources (e.g., &lt;code&gt;secrets&lt;/code&gt;), or use wildcard matches (e.g., &lt;code&gt;'*.apps'&lt;/code&gt;, &lt;code&gt;'*.*'&lt;/code&gt;) available in Kubernetes 1.27 and later.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;providers&lt;/code&gt;&lt;/strong&gt;: This is an ordered list of encryption providers. Kubernetes attempts to use the first provider in the list to encrypt new data. When decrypting, it tries providers in order until one succeeds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;identity: {}&lt;/code&gt;&lt;/strong&gt;: This provider means &lt;strong&gt;no encryption&lt;/strong&gt;; data is stored in plain text. If this is the first provider for a resource, new data for that resource will not be encrypted.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;aesgcm, aescbc, secretbox&lt;/code&gt;&lt;/strong&gt;: These are different encryption algorithms. You define named keys (&lt;code&gt;base64&lt;/code&gt; encoded) for each.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Demo: Encrypting Secrets in Minikube
&lt;/h2&gt;

&lt;p&gt;Let's walk through a practical example using a Minikube cluster to see encryption at rest in action.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Start Your Minikube Cluster:&lt;/strong&gt;&lt;br&gt;
Make sure your Minikube cluster is up and running.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Create the &lt;code&gt;EncryptionConfiguration&lt;/code&gt; File:&lt;/strong&gt;&lt;br&gt;
Create a file named &lt;code&gt;encryption-conf.yaml&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets # We are only encrypting secrets for this demo
    providers:
      - aescbc: # Using AES-CBC encryption
          keys:
            - name: key1
              # IMPORTANT: This is a randomly generated key.
              # Use a strong, unique, and base64-encoded key in production.
              secret: lvAp17Ae2o/yTdxz2qyC6zjVzuS+sBdhkwCccgsSsUg=
      - identity: {} # Allows reading data that was stored in plaintext before encryption was enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Understanding the Demo Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We're specifically targeting &lt;code&gt;secrets&lt;/code&gt; for encryption.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We're using the &lt;code&gt;aescbc&lt;/code&gt; provider with a single key named key1. &lt;strong&gt;Remember to use a truly random and strong key for production environments!&lt;/strong&gt; The &lt;code&gt;secret&lt;/code&gt; value needs to be base64 encoded.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;identity: {}&lt;/code&gt; provider is included as a fallback. This is crucial for smooth rotation and decryption of older data, or if the primary encryption provider encounters an issue.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Configure the API Server to Use the Encryption Configuration:&lt;/strong&gt;&lt;br&gt;
Now, we need to tell the &lt;code&gt;kube-apiserver&lt;/code&gt; to use this configuration file. In Minikube, you can achieve this by modifying the API server's Pod definition.&lt;/p&gt;

&lt;p&gt;First, locate the &lt;code&gt;kube-apiserver&lt;/code&gt; pod definition (it's usually in &lt;code&gt;/etc/kubernetes/manifests/&lt;/code&gt; on the control plane node).&lt;/p&gt;

&lt;p&gt;Next, you need to modify the &lt;code&gt;kube-apiserver&lt;/code&gt; static pod manifest to include the &lt;code&gt;--encryption-provider-config&lt;/code&gt; argument and mount the directory containing your &lt;code&gt;encryption-conf.yaml&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;Here's an example of how the &lt;code&gt;kube-apiserver&lt;/code&gt; pod manifest snippet would look after modification:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    # ... other kube-apiserver arguments
    - --encryption-provider-config=/etc/kubernetes/enc/encryption-conf.yaml # &amp;lt;--- Add this line
    volumeMounts:
    - mountPath: /etc/kubernetes/enc # &amp;lt;--- Mount path for your config file
      name: enc-vol
      readOnly: true
    # ... other volume mounts
  volumes:
  - hostPath:
      path: /etc/kubernetes/encryption # &amp;lt;--- Host path where your config file lives
      type: DirectoryOrCreate
    name: enc-vol
  # ... other volumes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: You will need to place your &lt;code&gt;encryption-conf.yaml&lt;/code&gt; file in the &lt;code&gt;/etc/kubernetes/encryption&lt;/code&gt; directory on your Minikube VM (or the control plane node for a full cluster) for the &lt;code&gt;hostPath&lt;/code&gt; volume mount to work correctly.&lt;/p&gt;

&lt;p&gt;After saving the changes, the &lt;code&gt;kube-apiserver&lt;/code&gt; pod will automatically restart, applying the new encryption configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Create a Secret:&lt;/strong&gt;&lt;br&gt;
Let's create a new secret in a test namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace test
kubectl -n=test create secret generic new-secret --from-literal=key1=supersecret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. Retrieve the Secret (as perceived by &lt;code&gt;kubectl&lt;/code&gt;):&lt;/strong&gt;&lt;br&gt;
Now, let's retrieve the secret using &lt;code&gt;kubectl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get secret new-secret -o yaml -n test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll get output similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
data:
  key1: c3VwZXJzZWNyZXQ= # Still base64 encoded!
kind: Secret
metadata:
  creationTimestamp: "2025-05-31T10:50:26Z"
  name: new-secret
  namespace: test
  resourceVersion: "6614"
  uid: a8655bde-be5f-4624-b905-61524de56ebe
type: Opaque
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that the &lt;code&gt;data&lt;/code&gt; field still shows a &lt;strong&gt;base64-encoded value&lt;/strong&gt;. This is crucial: &lt;code&gt;kubectl&lt;/code&gt; always displays secrets in their base64-encoded form. This output doesn't tell us whether the secret is encrypted at rest in etcd.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Verify Encryption in Etcd:&lt;/strong&gt;&lt;br&gt;
This is where the magic happens! We'll directly inspect how the secret is stored in etcd.&lt;br&gt;
First, exec into the &lt;code&gt;etcd-minikube&lt;/code&gt; pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n=kube-system exec -it etcd-minikube -- sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, use &lt;code&gt;etcdctl&lt;/code&gt; to retrieve the raw value of the secret from etcd:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ETCDCTL_API=3 etcdctl --cacert /var/lib/minikube/certs/etcd/ca.crt --cert /var/lib/minikube/certs/etcd/server.crt --key /var/lib/minikube/certs/etcd/server.key get /registry/secrets/test/new-secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll see output that looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/registry/secrets/test/new-secret
k8s:enc:aescbc:v1:key1:??|??&amp;lt;z@0-ki_&amp;lt;Q?
                                       jғ?[3Ox??t%??q??d??e??%?gy??ER8{s????
                                                                            ?r?UO%{?+C??h
                                                                                         ???w?II?; ??v??r????q?????t?YA?"??j?" ??f9?$FD?9T?.F?\?&amp;lt;???kc/Q?W0?
                                                                                                                                                                                   ?;\U??l?n???nW?^HlA?C?Ռ]=?U{j??|??pe?
?Z?Y?XY9??
          ?uuO?2??xK[޹??~7v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;This is the key difference!&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Without &lt;code&gt;EncryptionConfiguration&lt;/code&gt;, the data in etcd would be easily readable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;With &lt;code&gt;EncryptionConfiguration&lt;/code&gt; applied, you see an unreadable, garbled string prefixed with &lt;code&gt;k8s:enc:aescbc:v1:key1:&lt;/code&gt;. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This prefix tells you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;k8s:enc&lt;/code&gt;: This data is encrypted by Kubernetes.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aescbc&lt;/code&gt;: The encryption algorithm used is AES-CBC.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;v1&lt;/code&gt;: The version of the encryption scheme.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;key1&lt;/code&gt;: The specific key (key1 from our &lt;code&gt;EncryptionConfiguration&lt;/code&gt;) that was used for encryption.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This confirms that your secret is now &lt;strong&gt;encrypted at rest&lt;/strong&gt; in etcd!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>programming</category>
      <category>cloud</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Key Management Service in Kubernetes — Part 1</title>
      <dc:creator>Manish Pillai</dc:creator>
      <pubDate>Sat, 17 May 2025 16:28:05 +0000</pubDate>
      <link>https://forem.com/pillaimanish/key-management-service-in-kubernetes-part-1-2apa</link>
      <guid>https://forem.com/pillaimanish/key-management-service-in-kubernetes-part-1-2apa</guid>
      <description>&lt;p&gt;Key Management Service (KMS) is a way to manage your secrets in a more secure manner. But before diving into KMS, let’s do a quick primer on Kubernetes Secrets.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Are Kubernetes Secrets?
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;Secret&lt;/strong&gt; is any sensitive information—such as a database password, an API token, or cloud credentials. In most applications, you separate such configuration data from the actual application logic.&lt;/p&gt;

&lt;p&gt;That’s where &lt;strong&gt;Kubernetes Secrets&lt;/strong&gt; come in. You can store confidential configuration as a separate Kubernetes resource called a &lt;strong&gt;Secret&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where Are Secrets Stored?
&lt;/h2&gt;

&lt;p&gt;Kubernetes stores secrets in &lt;strong&gt;etcd&lt;/strong&gt;, which is the key-value store used by the Kubernetes control plane. Unless encryption at rest is enabled, the secrets are stored in &lt;strong&gt;plaintext&lt;/strong&gt; in etcd—more on this later.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Create Kubernetes Secrets?
&lt;/h2&gt;

&lt;p&gt;Let’s walk through &lt;strong&gt;three ways&lt;/strong&gt; to create secrets in Kubernetes. For reference, I’m using an OpenShift cluster with the &lt;code&gt;oc&lt;/code&gt; CLI, but everything here applies to &lt;code&gt;kubectl&lt;/code&gt; as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a Namespace
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% oc new-project kms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  a. Creating a Secret from a File
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s1"&gt;'somepassword'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; password.txt
% oc create secret generic kms-file-secret &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;password.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the first line, we create a file &lt;code&gt;password.txt&lt;/code&gt; with the contents &lt;code&gt;somepassword&lt;/code&gt;. The second command creates a secret named &lt;code&gt;kms-file-secret&lt;/code&gt; by reading the contents of that file.&lt;/p&gt;

&lt;p&gt;To inspect the created secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% oc get secret kms-file-secret &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% oc get secret kms-file-secret  &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
apiVersion: v1
data:
  password: c29tZXBhc3N3b3Jk
kind: Secret
metadata:
  creationTimestamp: &lt;span class="s2"&gt;"2025-05-16T13:35:30Z"&lt;/span&gt;
  name: kms-file-secret
  namespace: kms
  resourceVersion: &lt;span class="s2"&gt;"807159"&lt;/span&gt;
  uid: 66691c51-4a6c-4a15-a6b1-1d2de6d8fff7
&lt;span class="nb"&gt;type&lt;/span&gt;: Opaque
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That string &lt;code&gt;c29tZXBhc3N3b3Jk&lt;/code&gt; is just a &lt;code&gt;base64&lt;/code&gt; encoded version of &lt;code&gt;somepassword&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  b. Creating a Secret from a Literal Value
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% oc create secret generic kms-literal-secret &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;somepassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a secret by passing the value directly as a literal string. The result is the same &lt;code&gt;base64&lt;/code&gt; encoded password in the data field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% oc get secret kms-literal-secret &lt;span class="nt"&gt;-o&lt;/span&gt; yaml 
apiVersion: v1
data:
  password: c29tZXBhc3N3b3Jk
kind: Secret
metadata:
  creationTimestamp: &lt;span class="s2"&gt;"2025-05-16T14:05:34Z"&lt;/span&gt;
  name: kms-literal-secret
  namespace: kms
  resourceVersion: &lt;span class="s2"&gt;"819400"&lt;/span&gt;
  uid: d4ecd109-cd95-4afe-b2e8-77b2da3bcfca
&lt;span class="nb"&gt;type&lt;/span&gt;: Opaque
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  c. Creating a Secret Using a Manifest File
&lt;/h3&gt;

&lt;p&gt;You can also define secrets in YAML files using either the &lt;code&gt;data&lt;/code&gt; or &lt;code&gt;stringData&lt;/code&gt; fields.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;stringData&lt;/code&gt;: human-readable strings (Kubernetes will encode them to &lt;code&gt;base64&lt;/code&gt;)&lt;br&gt;
&lt;code&gt;data&lt;/code&gt;: requires values to already be &lt;code&gt;base64&lt;/code&gt; encoded&lt;/p&gt;
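&lt;p&gt;For comparison, here is what the same secret would look like using &lt;code&gt;stringData&lt;/code&gt;. This is just a sketch (the name &lt;code&gt;kms-stringdata-secret&lt;/code&gt; is illustrative); Kubernetes encodes the value to &lt;code&gt;base64&lt;/code&gt; for you on apply:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% oc apply -f - &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: Secret
metadata:
  name: kms-stringdata-secret
  namespace: kms
type: Opaque
stringData:
  password: somepassword   # plain text; stored as c29tZXBhc3N3b3Jk in .data
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;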

&lt;p&gt;Let’s go with the &lt;code&gt;data&lt;/code&gt; field here.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s1"&gt;'somepassword'&lt;/span&gt; | &lt;span class="nb"&gt;base64
&lt;/span&gt;c29tZXBhc3N3b3Jk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now use that in a manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
apiVersion: v1
kind: Secret
metadata:
  name: kms-manifest-secret
  namespace: kms
type: Opaque
data:
  password: c29tZXBhc3N3b3Jk
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verifying:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% oc get secret kms-manifest-secret &lt;span class="nt"&gt;-o&lt;/span&gt; yaml    
apiVersion: v1
data:
  password: c29tZXBhc3N3b3Jk
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"apiVersion"&lt;/span&gt;:&lt;span class="s2"&gt;"v1"&lt;/span&gt;,&lt;span class="s2"&gt;"data"&lt;/span&gt;:&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"password"&lt;/span&gt;:&lt;span class="s2"&gt;"c29tZXBhc3N3b3Jk"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;,&lt;span class="s2"&gt;"kind"&lt;/span&gt;:&lt;span class="s2"&gt;"Secret"&lt;/span&gt;,&lt;span class="s2"&gt;"metadata"&lt;/span&gt;:&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"annotations"&lt;/span&gt;:&lt;span class="o"&gt;{}&lt;/span&gt;,&lt;span class="s2"&gt;"name"&lt;/span&gt;:&lt;span class="s2"&gt;"kms-manifest-secret"&lt;/span&gt;,&lt;span class="s2"&gt;"namespace"&lt;/span&gt;:&lt;span class="s2"&gt;"kms"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;,&lt;span class="s2"&gt;"type"&lt;/span&gt;:&lt;span class="s2"&gt;"Opaque"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
  creationTimestamp: &lt;span class="s2"&gt;"2025-05-16T14:24:58Z"&lt;/span&gt;
  name: kms-manifest-secret
  namespace: kms
  resourceVersion: &lt;span class="s2"&gt;"827156"&lt;/span&gt;
  uid: ebb157dd-c493-47af-94c4-ba5d96ff9a29
&lt;span class="nb"&gt;type&lt;/span&gt;: Opaque
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Why Use Kubernetes Secrets?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;: Better than storing sensitive info in &lt;code&gt;ConfigMaps&lt;/code&gt; (which are plaintext).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Isolation&lt;/strong&gt;: Keeps secrets separate from your application code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Portability&lt;/strong&gt;: Easily consumed by multiple Pods using environment variables or mounted volumes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
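&lt;p&gt;To illustrate the portability point, here is a sketch of a Pod that consumes the secret created earlier both as an environment variable and as a mounted file (pod, container, and volume names are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: kms-demo-pod
  namespace: kms
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "3600"]
    env:
    - name: DB_PASSWORD            # injected as an env var
      valueFrom:
        secretKeyRef:
          name: kms-file-secret
          key: password
    volumeMounts:
    - name: secret-vol             # surfaced as a file at /etc/secrets/password
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: kms-file-secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Either way, the Pod sees the decoded value, not the &lt;code&gt;base64&lt;/code&gt; string.&lt;/p&gt;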




&lt;h2&gt;
  
  
  But… What’s the Problem?
&lt;/h2&gt;

&lt;p&gt;At first glance, the secret looks like it’s encrypted due to its unreadable format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;data:
  password: c29tZXBhc3N3b3Jk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But that’s just base64 encoding—it’s easily reversible:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% oc get secret kms-file-secret &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.data.password}'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;--decode&lt;/span&gt;
somepassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, if RBAC policies are misconfigured or someone has unauthorized read access to secrets, they can easily extract sensitive data. Not good.&lt;/p&gt;
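&lt;p&gt;Tight RBAC is the first line of defense. As a sketch (the role name is illustrative), a Role can grant &lt;code&gt;get&lt;/code&gt; on exactly one named secret instead of a blanket &lt;code&gt;list&lt;/code&gt; on everything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-one-secret
  namespace: kms
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kms-file-secret"]   # scoped to a single secret
  verbs: ["get"]                       # no list/watch, no wildcards
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;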




&lt;h2&gt;
  
  
  So, What’s the Solution?
&lt;/h2&gt;

&lt;p&gt;To make secrets more secure, Kubernetes offers multiple enhancements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Encryption at rest&lt;/strong&gt;: Secrets are encrypted before being stored in etcd.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;KMS providers&lt;/strong&gt;: Use a cloud-based or external Key Management Service (KMS) to encrypt secrets (e.g., AWS KMS, Azure Key Vault, HashiCorp Vault).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secrets Store CSI Driver&lt;/strong&gt;: Mount secrets from external providers directly into Pods.&lt;/li&gt;
&lt;/ul&gt;
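&lt;p&gt;As a taste of the first option, encryption at rest is configured on the API server through an &lt;code&gt;EncryptionConfiguration&lt;/code&gt; file. This is only a sketch; the key value below is a placeholder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:                       # encrypt new secrets with AES-CBC
      keys:
      - name: key1
        secret: &amp;lt;base64-encoded-32-byte-key&amp;gt;
  - identity: {}                  # still read old, unencrypted data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;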

&lt;p&gt;We’ll dive into these solutions in Part 2 of the series.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>programming</category>
      <category>cloud</category>
      <category>opensource</category>
    </item>
    <item>
      <title>From VMs to Unikernels: The Evolution of Application Deployment</title>
      <dc:creator>Manish Pillai</dc:creator>
      <pubDate>Sun, 13 Apr 2025 18:52:38 +0000</pubDate>
      <link>https://forem.com/pillaimanish/from-vms-to-unikernels-the-evolution-of-application-deployment-3b38</link>
      <guid>https://forem.com/pillaimanish/from-vms-to-unikernels-the-evolution-of-application-deployment-3b38</guid>
      <description>&lt;p&gt;While many millennials and Gen-Z engineers haven't witnessed the full journey of &lt;strong&gt;computing&lt;/strong&gt; evolution, it's fascinating to explore how hardware and software abstraction has transformed over the years — especially with the rise of the cloud. &lt;/p&gt;

&lt;p&gt;One of the most impactful areas of this evolution is in how we deploy and run services: through &lt;strong&gt;virtualization&lt;/strong&gt;, &lt;strong&gt;containerization&lt;/strong&gt;, and more recently, &lt;strong&gt;unikernels&lt;/strong&gt;.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fub16lwe2f0vrd5j6erjs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fub16lwe2f0vrd5j6erjs.png" alt="VM-Container-Unikernels" width="774" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Virtualization
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Virtualization&lt;/strong&gt; is the practice of abstracting physical hardware to run multiple isolated environments (virtual machines) on a single physical host. This is typically achieved through a &lt;strong&gt;hypervisor&lt;/strong&gt; — such as &lt;strong&gt;VMware ESXi, KVM, Xen, or Hyper-V&lt;/strong&gt; — which manages and allocates resources to each VM.&lt;/p&gt;

&lt;p&gt;Let’s say you have two services: Service A and Service B.&lt;/p&gt;

&lt;p&gt;To deploy them using virtualization:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You create two separate VMs.&lt;/li&gt;
&lt;li&gt;Each VM runs its own guest OS and kernel.&lt;/li&gt;
&lt;li&gt;You deploy one service per VM.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Great! Your services are isolated and running.&lt;/p&gt;

&lt;p&gt;But here's the catch...&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Each VM carries a full OS stack, leading to &lt;strong&gt;duplicate resource usage, longer boot times, and heavier system overhead&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Containerization
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Containerization&lt;/strong&gt; — popularized by &lt;strong&gt;Docker&lt;/strong&gt; — addressed these inefficiencies. Instead of virtualizing the entire hardware stack, containers &lt;strong&gt;share the host OS kernel&lt;/strong&gt;, isolating applications only at the process level using namespaces and cgroups.&lt;/p&gt;

&lt;p&gt;With containers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service A and B run as isolated containers.&lt;/li&gt;
&lt;li&gt;They share the same kernel but maintain isolated user spaces.&lt;/li&gt;
&lt;li&gt;Containers are lighter, faster to start, and use less memory and disk.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Boot times are significantly reduced, and resource utilization improves dramatically.&lt;/p&gt;
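&lt;p&gt;You can see the shared kernel for yourself, assuming Docker is installed: containers built from different images still report the host's kernel version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% uname -r                          # host kernel version
% docker run --rm alpine uname -r   # same kernel, different user space
% docker run --rm ubuntu uname -r   # same again
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;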

&lt;blockquote&gt;
&lt;p&gt;However, containers still rely on the underlying host OS and a container runtime (e.g., Docker Engine or containerd), which introduces some &lt;strong&gt;attack surface&lt;/strong&gt; and &lt;strong&gt;runtime dependencies&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Unikernels
&lt;/h2&gt;

&lt;p&gt;Unikernels take minimalism to the next level.&lt;/p&gt;

&lt;p&gt;A unikernel is a &lt;strong&gt;single-purpose, single-address-space&lt;/strong&gt; image that includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Only the required parts of the OS.&lt;/li&gt;
&lt;li&gt;The application logic.&lt;/li&gt;
&lt;li&gt;No shells, no package managers, no unused ports.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;This makes them &lt;strong&gt;ultra-secure&lt;/strong&gt; (smaller attack surface), blazing fast to &lt;strong&gt;boot&lt;/strong&gt;, and extremely &lt;strong&gt;lightweight&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Inside Unikernels
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Traditional Operating System Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zybsncgy0rfm8rm9kh4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zybsncgy0rfm8rm9kh4.png" alt="Normal application stack" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In traditional operating systems (like Linux or Windows), the system is divided into &lt;strong&gt;two separate address spaces&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;User Space&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is where &lt;strong&gt;user applications&lt;/strong&gt; (like your web browser, database, or backend services) run.&lt;/li&gt;
&lt;li&gt;It includes &lt;strong&gt;user-level libraries&lt;/strong&gt; that the app uses to interact with the system.&lt;/li&gt;
&lt;li&gt;However, it doesn't have direct access to hardware or low-level resources for safety and security.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Kernel Space&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is the &lt;strong&gt;core of the OS&lt;/strong&gt; and has full control over the system's hardware.&lt;/li&gt;
&lt;li&gt;It includes essential components like the process scheduler, memory manager, networking stack, and device drivers.&lt;/li&gt;
&lt;li&gt;All user applications must make &lt;strong&gt;system calls&lt;/strong&gt; to request services from the kernel (like file I/O, network access, etc.).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;So essentially, applications (user space) &lt;strong&gt;rely on the kernel&lt;/strong&gt; (kernel space) to function. This separation enforces &lt;strong&gt;security&lt;/strong&gt;, &lt;strong&gt;stability&lt;/strong&gt;, and &lt;strong&gt;resource control&lt;/strong&gt; — but it also adds layers of &lt;strong&gt;abstraction&lt;/strong&gt; and &lt;strong&gt;overhead&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Unikernel Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F504qwumyt3wdoz065ko0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F504qwumyt3wdoz065ko0.png" alt="Unikernel application stack" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the unikernel model, things are drastically simplified.&lt;/p&gt;

&lt;p&gt;There is no separation between user space and kernel space — instead, both the application and only the essential parts of the OS are compiled into a &lt;strong&gt;single binary&lt;/strong&gt; that runs directly on hardware or a hypervisor.&lt;/p&gt;

&lt;p&gt;Here's what makes it unique:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Single-purpose&lt;/strong&gt;: A unikernel is built to run just one application — nothing more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;App + OS as one&lt;/strong&gt;: The application is bundled with exactly the OS functionalities it needs (e.g., TCP/IP stack, filesystem, scheduler).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No shell, no package manager, no general-purpose OS features&lt;/strong&gt; — which also means:

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Reduced attack surface&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fast boot time&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Smaller memory and disk footprint&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Because everything exists in a &lt;strong&gt;single address space&lt;/strong&gt;, function calls (even system-level ones) are just regular function calls — no costly system calls or context switches.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Unikernel Providers
&lt;/h2&gt;

&lt;p&gt;Here are a few notable unikernel implementations:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MirageOS&lt;/strong&gt; – Functional, OCaml-based unikernel.&lt;br&gt;
&lt;strong&gt;IncludeOS&lt;/strong&gt; – C++ based unikernel.&lt;br&gt;
&lt;strong&gt;OSv&lt;/strong&gt; – Designed to run single-application workloads (Java, etc.).&lt;br&gt;
&lt;strong&gt;NanoVMs&lt;/strong&gt; – Commercial unikernel platform for production deployment.&lt;/p&gt;




&lt;p&gt;References:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="http://unikernel.org/" rel="noopener noreferrer"&gt;http://unikernel.org/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.oreilly.com/library/view/unikernels/9781492042815/" rel="noopener noreferrer"&gt;https://www.oreilly.com/library/view/unikernels/9781492042815/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.electronicdesign.com/technologies/embedded/article/21250583/lynx-software-whats-the-difference-between-unikernels-and-operating-systems" rel="noopener noreferrer"&gt;https://www.electronicdesign.com/technologies/embedded/article/21250583/lynx-software-whats-the-difference-between-unikernels-and-operating-systems&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/cetic/unikernels" rel="noopener noreferrer"&gt;https://github.com/cetic/unikernels&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>programming</category>
      <category>webdev</category>
      <category>kubernetes</category>
      <category>virtualmachine</category>
    </item>
    <item>
      <title>My Learnings About Etcd</title>
      <dc:creator>Manish Pillai</dc:creator>
      <pubDate>Fri, 11 Apr 2025 03:35:41 +0000</pubDate>
      <link>https://forem.com/pillaimanish/my-learnings-about-etcd-2o6b</link>
      <guid>https://forem.com/pillaimanish/my-learnings-about-etcd-2o6b</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This is my first ever technical blog, so do correct me if I am wrong; I am not very strong technically yet, and I am just sharing my learnings.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Etcd&lt;/strong&gt; is a distributed key-value store, somewhat like Redis, but it operates quite differently under the hood (more on this later). &lt;br&gt;
It's implemented in &lt;strong&gt;Golang&lt;/strong&gt; and is fully &lt;a href="https://github.com/etcd-io/etcd" rel="noopener noreferrer"&gt;open-source&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;While etcd can be paired with many systems, its most prominent use case is in &lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, where it's a critical component of the control plane.&lt;/p&gt;




&lt;h2&gt;
  
  
  How is Etcd used in Kubernetes?
&lt;/h2&gt;

&lt;p&gt;If you're familiar with Kubernetes architecture, you know it consists of &lt;strong&gt;control plane&lt;/strong&gt; components and &lt;strong&gt;worker nodes&lt;/strong&gt;. The control plane is responsible for managing the overall cluster state - including scheduling, maintaining desired state, responding to cluster events, and more.&lt;/p&gt;

&lt;p&gt;But where does Kubernetes store all of its metadata? Things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pod definitions&lt;/li&gt;
&lt;li&gt;Deployment states&lt;/li&gt;
&lt;li&gt;Configuration data&lt;/li&gt;
&lt;li&gt;Secrets and ConfigMaps&lt;/li&gt;
&lt;li&gt;Cluster state&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's where &lt;strong&gt;etcd&lt;/strong&gt; comes into the picture.&lt;/p&gt;

&lt;p&gt;All cluster data is stored in etcd in a &lt;strong&gt;key-value&lt;/strong&gt; format. Whenever the &lt;strong&gt;kube-apiserver&lt;/strong&gt; needs to fetch or persist cluster state, it communicates directly with etcd.&lt;/p&gt;
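&lt;p&gt;If you have direct access to an etcd member (and its client certificates), you can see how Kubernetes lays out its keys under the &lt;code&gt;/registry&lt;/code&gt; prefix. This is a sketch, assuming &lt;code&gt;etcdctl&lt;/code&gt; v3; the exact keys depend on your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% etcdctl get /registry --prefix --keys-only | head
/registry/configmaps/kube-system/...
/registry/pods/kube-system/...
/registry/secrets/kms/...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;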

&lt;p&gt;Etcd can either be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Embedded&lt;/strong&gt; as part of the control plane (commonly deployed alongside kube-apiserver)&lt;/li&gt;
&lt;li&gt;Or &lt;strong&gt;hosted as a separate, external cluster&lt;/strong&gt;, often in high-availability production environments.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Is Etcd Distributed?
&lt;/h2&gt;

&lt;p&gt;Yes, &lt;strong&gt;etcd is a distributed system&lt;/strong&gt; designed for fault-tolerance and high availability. You can run multiple instances (etcd nodes or members) in a cluster.&lt;/p&gt;

&lt;p&gt;To maintain consistency across nodes, etcd uses the &lt;strong&gt;RAFT consensus algorithm&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Quick Look at the RAFT Algorithm
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://raft.github.io/" rel="noopener noreferrer"&gt;RAFT consensus algorithm&lt;/a&gt; ensures that the etcd cluster agrees on the current state, even in the presence of failures. &lt;/p&gt;

&lt;p&gt;Here's how it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Among all nodes, one is elected as the &lt;strong&gt;leader&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;leader handles all client requests&lt;/strong&gt; (writes) and replicates changes to follower nodes.&lt;/li&gt;
&lt;li&gt;If the leader goes down, &lt;strong&gt;a new leader is automatically elected&lt;/strong&gt; from the followers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensures strong consistency, meaning all clients see the same data regardless of which node they connect to.&lt;/p&gt;
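&lt;p&gt;On a running cluster you can see who the current leader is with &lt;code&gt;etcdctl&lt;/code&gt; (a sketch; endpoints and TLS flags depend on your setup):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% etcdctl endpoint status --cluster -w table
# the IS LEADER column shows which member currently holds leadership
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;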

&lt;p&gt;ScyllaDB, a distributed NoSQL database, also uses the RAFT algorithm for leader election. Refer to this &lt;a href="https://opensource.docs.scylladb.com/stable/architecture/raft.html" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Storage Engine and Data Model - How Etcd Stores Data
&lt;/h2&gt;

&lt;p&gt;Just like many traditional databases use a storage engine to handle how data is written to disk, etcd does the same.&lt;br&gt;
For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MySQL uses &lt;strong&gt;&lt;a href="https://dev.mysql.com/doc/refman/8.4/en/innodb-storage-engine.html" rel="noopener noreferrer"&gt;InnoDB&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://sqlite.org/" rel="noopener noreferrer"&gt;SQLite&lt;/a&gt;&lt;/strong&gt; has its own built-in storage engine&lt;/li&gt;
&lt;li&gt;MongoDB uses &lt;strong&gt;&lt;a href="https://www.mongodb.com/docs/manual/core/wiredtiger/" rel="noopener noreferrer"&gt;WiredTiger&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each storage engine follows a different data structure design - like B+ Trees (great for read-heavy operations) or LSM Trees (optimized for write-heavy workloads).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So what does etcd use?&lt;/strong&gt;&lt;br&gt;
Etcd uses a &lt;strong&gt;storage engine called BoltDB (specifically, a fork called bbolt).&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;BoltDB is a &lt;strong&gt;B+ Tree-based key-value store&lt;/strong&gt; that persists data to disk and provides excellent support for consistent and predictable reads, which aligns perfectly with etcd's goal of being a strongly-consistent store for configuration data.&lt;/p&gt;

&lt;p&gt;You can read more details &lt;a href="https://etcd.io/docs/v3.5/learning/data_model/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Etcd Stores Data (and How It's Different from Redis)
&lt;/h2&gt;

&lt;p&gt;Etcd stores data using a B+ Tree–based storage engine called &lt;strong&gt;bbolt&lt;/strong&gt; (a fork of BoltDB), which writes data &lt;strong&gt;persistently to disk&lt;/strong&gt;. Unlike Redis, which primarily keeps data in &lt;strong&gt;memory&lt;/strong&gt; for lightning-fast access (and optionally persists it), etcd is designed for &lt;strong&gt;strong consistency and durability&lt;/strong&gt;, even across restarts or crashes.&lt;/p&gt;

&lt;p&gt;It uses a &lt;strong&gt;multi-version concurrency control (MVCC) model&lt;/strong&gt;, where every update creates a new revision instead of modifying data in-place. &lt;br&gt;
This allows etcd to support features like &lt;strong&gt;watching changes, accessing historical versions&lt;/strong&gt;, and &lt;strong&gt;time-travel queries&lt;/strong&gt; - all while keeping disk usage optimized using compaction.&lt;/p&gt;
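&lt;p&gt;You can watch MVCC in action with &lt;code&gt;etcdctl&lt;/code&gt; against a local etcd (a sketch; the key name is illustrative). Every &lt;code&gt;put&lt;/code&gt; bumps the cluster revision, and older revisions stay readable until compaction:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% etcdctl put color red      # written at some revision N
% etcdctl put color blue     # new revision N+1; red is kept, not overwritten
% etcdctl get color          # latest value: blue
% etcdctl get color --rev=N  # time-travel read: red
% etcdctl watch color        # streams future changes to the key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;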

&lt;p&gt;This makes etcd an ideal choice for systems like Kubernetes where &lt;strong&gt;data integrity&lt;/strong&gt; and &lt;strong&gt;change tracking&lt;/strong&gt; are more critical than raw speed.&lt;/p&gt;

&lt;p&gt;You can read more about the differences &lt;a href="https://www.dragonflydb.io/databases/compare/redis-vs-etcd" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  So, if it stores all the history, won't the disk/memory get full?
&lt;/h2&gt;

&lt;p&gt;Etcd periodically performs &lt;strong&gt;&lt;a href="https://etcd.io/docs/v3.2/op-guide/maintenance/" rel="noopener noreferrer"&gt;compaction&lt;/a&gt;&lt;/strong&gt;, which cleans up old revisions and reduces disk usage while keeping recent history intact.&lt;/p&gt;
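&lt;p&gt;Compaction can also be triggered by hand (a sketch; in Kubernetes the apiserver normally requests it for you):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% etcdctl compaction &amp;lt;revision&amp;gt;   # drop history older than this revision
% etcdctl defrag                   # release the freed space back to the filesystem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;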




&lt;blockquote&gt;
&lt;p&gt;That's all I've learned about Etcd so far. It might not be perfect or super in-depth, but I feel like I now have a decent understanding of how things work under the hood. I'm still exploring and learning about it - and this is just the beginning.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>kubernetes</category>
      <category>programming</category>
      <category>opensource</category>
      <category>etcd</category>
    </item>
  </channel>
</rss>
