<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Satyaki</title>
    <description>The latest articles on Forem by Satyaki (@blackzu).</description>
    <link>https://forem.com/blackzu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F92977%2F8e14d52c-e9f3-4c96-9ec5-3e4fdc11018a.png</url>
      <title>Forem: Satyaki</title>
      <link>https://forem.com/blackzu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/blackzu"/>
    <language>en</language>
    <item>
      <title>Understanding Kube-proxy &amp; CoreDNS in Kubernetes: No Bluff</title>
      <dc:creator>Satyaki</dc:creator>
      <pubDate>Thu, 22 Jan 2026 15:23:21 +0000</pubDate>
      <link>https://forem.com/blackzu/understanding-kube-proxy-coredns-in-kubernetes-no-bluff-23bc</link>
      <guid>https://forem.com/blackzu/understanding-kube-proxy-coredns-in-kubernetes-no-bluff-23bc</guid>
      <description>&lt;p&gt;🛠 Setting the Stage: A Kind Cluster&lt;/p&gt;

&lt;p&gt;Kubernetes is full of magic, but one of its most fascinating components is kube-proxy. It’s the silent operator that ensures traffic hitting a Service gets distributed across the right Pods. Under the hood, kube-proxy in its default iptables mode programs Linux iptables rules to make this happen. Let’s peel back the layers and see it in action.&lt;/p&gt;

&lt;p&gt;For this demo, I spun up a 3-node Kind cluster. On top of it, I deployed a simple nginx Deployment exposed via a ClusterIP Service.&lt;/p&gt;
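&lt;p&gt;The setup above can be reproduced with a few commands. The config filename and resource names here are illustrative sketches, not necessarily the exact ones from the demo:&lt;/p&gt;

```shell
# Create a 3-node kind cluster (1 control plane + 2 workers).
# kind-config.yaml is an assumed filename for the config below.
cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF
kind create cluster --config kind-config.yaml

# Deploy nginx with 2 replicas and expose it via a ClusterIP Service.
kubectl create deployment nginx-deployment --image=nginx --replicas=2
kubectl expose deployment nginx-deployment --port=80 --target-port=80
kubectl get svc nginx-deployment
```

&lt;p&gt;ClusterIP is the default Service type, so no &lt;code&gt;--type&lt;/code&gt; flag is needed on &lt;code&gt;kubectl expose&lt;/code&gt;.&lt;/p&gt;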

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uk84g5ojzvn24hvx408.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uk84g5ojzvn24hvx408.png" alt=" " width="800" height="48"&gt;&lt;/a&gt;&lt;br&gt;
Here’s the deployment and service in action:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5jq98x81jx6sq0ezoli.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5jq98x81jx6sq0ezoli.png" alt=" " width="800" height="65"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📜 Peeking into iptables&lt;/p&gt;

&lt;p&gt;Now comes the fun part. I logged into one of the nodes where a Pod is running and listed the NAT rules in the KUBE-SERVICES chain:&lt;/p&gt;
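&lt;p&gt;Since kind nodes are just Docker containers, you can get a shell on one and list the rules yourself. The node name below assumes the default cluster name; yours may differ:&lt;/p&gt;

```shell
# Find the node names (kind nodes are plain Docker containers).
docker ps --format '{{.Names}}'

# Open a shell on a worker node and list the KUBE-SERVICES NAT chain,
# filtering for the rule that matches our Service.
docker exec -it kind-worker \
  sh -c 'iptables -t nat -L KUBE-SERVICES -n | grep nginx-deployment'
```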

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbgtit5b6cok39fcyuj7d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbgtit5b6cok39fcyuj7d.png" alt=" " width="800" height="86"&gt;&lt;/a&gt;&lt;br&gt;
Notice the entry for our nginx-deployment Service. The destination IP here is the ClusterIP of the Service. This is kube-proxy’s starting point for redirecting traffic.&lt;/p&gt;

&lt;p&gt;🔀 Diving into the Service Chain&lt;/p&gt;

&lt;p&gt;Every Service gets its own chain. For nginx, that’s KUBE-SVC-WRNOD73BKRQH4VVX. Let’s inspect it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ge4gfetafngmbrj0xmw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ge4gfetafngmbrj0xmw.png" alt=" " width="800" height="66"&gt;&lt;/a&gt;&lt;br&gt;
And here’s the magic:&lt;br&gt;
When traffic hits the ClusterIP, the iptables rules kube-proxy installed rewrite the destination to one of the Pod IPs backing the Deployment.&lt;br&gt;
The rules use the statistic module’s random mode: with two endpoints, the first rule matches 50% of the traffic and the second catches the rest, so each Pod receives half.&lt;br&gt;
This is how kube-proxy achieves load balancing with nothing more than iptables.&lt;br&gt;
So, what did we just see?&lt;/p&gt;
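&lt;p&gt;The even split falls out of simple arithmetic: for N endpoints, rule i (0-indexed) matches with probability 1/(N−i) of the traffic that reaches it, and the last rule matches unconditionally, which works out to a 1/N share per Pod. A tiny sketch of that arithmetic:&lt;/p&gt;

```shell
# For N backend Pods, kube-proxy emits one iptables rule per Pod.
# Rule i (0-indexed) uses --probability 1/(N-i); traffic that doesn't
# match falls through, so each Pod ends up with a 1/N share overall.
N=2
for i in $(seq 0 $((N - 1))); do
  awk -v n="$N" -v i="$i" \
    'BEGIN { printf "rule %d: --probability %.5f\n", i, 1/(n-i) }'
done
# rule 0: --probability 0.50000
# rule 1: --probability 1.00000
```

&lt;p&gt;With three Pods you would see 0.33333, then 0.50000, then an unconditional final rule: each jump is 1/N of the original traffic even though the printed probabilities differ.&lt;/p&gt;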

&lt;p&gt;ClusterIP → Pod IP translation (DNAT) via iptables.&lt;br&gt;
Masquerading (SNAT) rewrites the source IP where needed, so return traffic finds its way back.&lt;br&gt;
Probability rules distribute traffic evenly across endpoints.&lt;/p&gt;

&lt;p&gt;🌐 How DNS Works in the Cluster&lt;/p&gt;

&lt;p&gt;So far, we’ve seen how kube-proxy handles traffic routing and load balancing. But how does your application even know where to send requests? That’s where CoreDNS comes in.&lt;br&gt;
CoreDNS acts as the nameserver inside Kubernetes, resolving Service names into their corresponding ClusterIPs. Let’s walk through it step by step.&lt;/p&gt;

&lt;p&gt;🔍 Inspecting the kube-dns Service&lt;/p&gt;

&lt;p&gt;In the kube-system namespace, you’ll find the kube-dns Service. This is essentially the front door to CoreDNS:&lt;/p&gt;
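&lt;p&gt;You can confirm this yourself; the ClusterIP shown will vary per cluster:&lt;/p&gt;

```shell
# The kube-dns Service fronts the CoreDNS Pods in kube-system.
kubectl get svc kube-dns -n kube-system

# The CoreDNS Pods behind it carry the k8s-app=kube-dns label
# (the Service keeps the legacy kube-dns name for compatibility).
kubectl get pods -n kube-system -l k8s-app=kube-dns
```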

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkbhtfrcmu171avcrp6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkbhtfrcmu171avcrp6u.png" alt=" " width="800" height="44"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📄 The resolv.conf File&lt;/p&gt;

&lt;p&gt;Inside Pods, the resolv.conf file contains the nameserver details and DNS search domains. This is how Kubernetes ensures that when you query something like nginx-deployment.default.svc.cluster.local, it knows how to resolve it.&lt;/p&gt;
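&lt;p&gt;A quick way to see this from inside a Pod of our Deployment; the exact values in the comments are a typical sketch and vary by cluster:&lt;/p&gt;

```shell
# Print the DNS config kubelet injects into a Pod of the Deployment.
kubectl exec deploy/nginx-deployment -- cat /etc/resolv.conf
# Typical contents (values vary by cluster):
#   search default.svc.cluster.local svc.cluster.local cluster.local
#   nameserver 10.96.0.10
#   options ndots:5
```

&lt;p&gt;The search domains are why a bare &lt;code&gt;nginx-deployment&lt;/code&gt; resolves from inside the same namespace: the resolver tries each suffix until one answers.&lt;/p&gt;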

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz598u8m5l5i21bint9vt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz598u8m5l5i21bint9vt.png" alt=" " width="738" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🧪 Testing with nslookup&lt;/p&gt;

&lt;p&gt;Let’s put it to the test. Logging into a node and running an nslookup shows the DNS resolution in action:&lt;/p&gt;
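&lt;p&gt;An equivalent check from a throwaway Pod (the Pod name and image tag here are illustrative):&lt;/p&gt;

```shell
# Spin up a disposable Pod with DNS tools and resolve the Service name,
# then clean it up automatically once the command exits.
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup nginx-deployment.default.svc.cluster.local
```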

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnog9vvihm632zdd02kyk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnog9vvihm632zdd02kyk.png" alt=" " width="614" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And it works exactly as expected — the Service name resolves to the ClusterIP, which kube-proxy then maps to the Pod IPs.&lt;/p&gt;

&lt;p&gt;🎯 Wrapping It All Up&lt;/p&gt;

&lt;p&gt;Between kube-proxy and CoreDNS, Kubernetes ensures that:&lt;/p&gt;

&lt;p&gt;Traffic hitting a Service is load balanced across Pods.&lt;br&gt;
Service names are resolved seamlessly into ClusterIPs.&lt;br&gt;
Applications don’t need to worry about IP addresses; they just use DNS names.&lt;/p&gt;

&lt;p&gt;These two components are the backbone of Kubernetes networking. Without them, Services wouldn’t be discoverable or scalable.&lt;/p&gt;

&lt;p&gt;🔥 And that’s the no-bluff walkthrough of kube-proxy and CoreDNS, two vital pieces of the Kubernetes puzzle. Next time you deploy an app, you’ll know exactly how the traffic finds its way to the right Pod.&lt;/p&gt;

&lt;p&gt;That’s what kube-proxy does. Isn’t it really cool?&lt;/p&gt;

</description>
      <category>devops</category>
      <category>networking</category>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
