<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Yuva</title>
    <description>The latest articles on Forem by Yuva (@ypeavler).</description>
    <link>https://forem.com/ypeavler</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F142596%2F92d0d8c7-d34c-425c-b0be-49d021674cf0.jpeg</url>
      <title>Forem: Yuva</title>
      <link>https://forem.com/ypeavler</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ypeavler"/>
    <language>en</language>
    <item>
      <title>Kubernetes in a Hurry: From kube-proxy to Service Mesh (Q&amp;A Format)</title>
      <dc:creator>Yuva</dc:creator>
      <pubDate>Sun, 04 Jan 2026 03:01:57 +0000</pubDate>
      <link>https://forem.com/ypeavler/kubernetes-in-a-hurry-from-kube-proxy-to-servicemeshqa-format-4ji6</link>
      <guid>https://forem.com/ypeavler/kubernetes-in-a-hurry-from-kube-proxy-to-servicemeshqa-format-4ji6</guid>
      <description>&lt;p&gt;&lt;strong&gt;Test Your Knowledge&lt;/strong&gt;&lt;br&gt;
Ready to test your understanding? Take the &lt;strong&gt;&lt;a href="https://ypeavler.github.io/blog/2026/01/01/networking-basics-quiz.html" rel="noopener noreferrer"&gt;Networking Quiz&lt;/a&gt;&lt;/strong&gt; — 34 questions covering everything from ARP to service mesh.&lt;/p&gt;


&lt;h2&gt;
  
  
  Part 1: Kubernetes Networking
&lt;/h2&gt;

&lt;p&gt;&lt;a id="what-are-the-three-fundamental-requirements-of-kubernetes-networking"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: What are the three fundamental requirements of Kubernetes networking?
&lt;/h4&gt;

&lt;p&gt;Kubernetes has a simple networking model with three fundamental requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Every pod gets its own IP address&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pod-to-Pod communication&lt;/strong&gt;: All Pods must be able to communicate with all other Pods without NAT&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node-to-Pod communication&lt;/strong&gt;: All cluster Nodes must be able to communicate with all Pods without NAT&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This model is implemented by &lt;strong&gt;CNI (Container Network Interface)&lt;/strong&gt; plugins.&lt;/p&gt;



&lt;p&gt;&lt;a id="whats-the-basic-connectivity-neeeded-to-run-k8s"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: What are the basic connectivity requirements to run k8s?
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Static IP Addresses&lt;/strong&gt;: All nodes must be assigned static IP addresses or DHCP reservations. Dynamic IPs that change can break cluster communication and etcd quorum.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full L2/L3 Connectivity&lt;/strong&gt;: Every node must have full network connectivity to every other node in the cluster. This can be over a private or public network, provided there is no NAT between nodes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unique Identifiers&lt;/strong&gt;: Each node must have a unique hostname, MAC address, and product_uuid (found in &lt;code&gt;/sys/class/dmi/id/product_uuid&lt;/code&gt;)&lt;/li&gt;
&lt;/ol&gt;



&lt;p&gt;&lt;a id="whats-the-OS-requirement-needed-to-run-k8s"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: What are the OS requirements to run k8s?
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Disable Swap&lt;/strong&gt;: Swap must be disabled on all nodes for the kubelet to function correctly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kernel Modules&lt;/strong&gt;: Ensure br_netfilter and overlay modules are loaded to allow bridged traffic to be processed by iptables.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time Synchronization&lt;/strong&gt;: Highly accurate time sync (e.g., via Chrony or NTP) is required across all nodes to prevent certificate validation failures and etcd instability.&lt;/li&gt;
&lt;/ol&gt;



&lt;p&gt;&lt;a id="what-is-cni"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: What is CNI?
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;CNI (Container Network Interface)&lt;/em&gt; is the standard for Kubernetes networking plugins. When a pod starts, it's like a new apartment being built. The CNI plugin is like the city planning department that assigns the new apartment an address (IP address), connects it to the street (creates veth pair), and gives the resident a mailbox (network namespace). The pod can now send and receive letters (packets) just like any other apartment in the city!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjg4032utrkw2b2fpmq4v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjg4032utrkw2b2fpmq4v.png" alt=" " width="672" height="478"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a id="what-are-the-cni-plugin-responsibilities"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: What are the CNI plugin responsibilities?
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Create network namespace for the pod&lt;/li&gt;
&lt;li&gt;Create veth pair (one end in pod, one in host)&lt;/li&gt;
&lt;li&gt;Assign IP address from IPAM (IP Address Management) — IPAM allocates IPs from a configured CIDR range&lt;/li&gt;
&lt;li&gt;Configure routes so pod can reach other pods and services&lt;/li&gt;
&lt;li&gt;Set up overlay network (if needed) for cross-node communication&lt;/li&gt;
&lt;/ol&gt;



&lt;p&gt;&lt;a id="what-are-popular-cni-plugins"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: What are popular CNI plugins?
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;CNI Plugin&lt;/th&gt;
&lt;th&gt;Approach&lt;/th&gt;
&lt;th&gt;Overlay Protocol&lt;/th&gt;
&lt;th&gt;Key Features&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cilium&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;eBPF-based routing&lt;/td&gt;
&lt;td&gt;Geneve, VXLAN, or native&lt;/td&gt;
&lt;td&gt;Network policies, observability, service mesh integration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Calico&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;BGP routing&lt;/td&gt;
&lt;td&gt;VXLAN or native&lt;/td&gt;
&lt;td&gt;Network policies, BGP integration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Flannel&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Simple overlay&lt;/td&gt;
&lt;td&gt;VXLAN&lt;/td&gt;
&lt;td&gt;Simple, minimal configuration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Weave&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Mesh overlay&lt;/td&gt;
&lt;td&gt;Custom (sleeve/fastdp)&lt;/td&gt;
&lt;td&gt;Automatic mesh networking&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For enterprise and multi-tenant Kubernetes deployments, &lt;strong&gt;Cilium&lt;/strong&gt; and &lt;strong&gt;Calico&lt;/strong&gt; are often preferred due to their robust network policy enforcement, performance benefits (eBPF for Cilium), and integration with BGP for native routing, which are critical for security and scalability.&lt;/p&gt;



&lt;p&gt;&lt;a id="what-is-cilium-and-what-makes-it-special"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: What makes Cilium special?
&lt;/h4&gt;

&lt;p&gt;Cilium uses eBPF (extended Berkeley Packet Filter) for high-performance networking:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;eBPF-based routing&lt;/strong&gt;: Faster than iptables, no kernel bypass needed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network policies&lt;/strong&gt;: Enforced at the kernel level using eBPF programs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability&lt;/strong&gt;: Built-in metrics and tracing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service mesh integration&lt;/strong&gt;: Can replace kube-proxy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Routing modes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;VXLAN overlay (default)&lt;/strong&gt; — Uses VXLAN encapsulation for cross-node pod communication. The default tunneling protocol when an overlay is needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Geneve overlay&lt;/strong&gt; — Alternative to VXLAN with similar functionality plus TLV extensibility, which allows richer metadata to be carried in the encapsulation header for advanced network policy enforcement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Native routing&lt;/strong&gt; — No overlay encapsulation. Routes pod IPs directly through the underlying network. Requires routable pod IPs and BGP or static routes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BGP routing&lt;/strong&gt; — Uses BGP to advertise pod CIDR routes to network infrastructure. Enables native routing with dynamic route distribution. Works with routers, cloud provider route tables, and other BGP-speaking devices.&lt;/li&gt;
&lt;/ul&gt;



&lt;p&gt;&lt;a id="what-is-ipam-ip-address-management"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: What is IPAM (IP Address Management)?
&lt;/h4&gt;

&lt;p&gt;IPAM is the component of CNI plugins that manages IP address allocation. Each CNI plugin includes an IPAM plugin (or uses a standalone one like &lt;code&gt;host-local&lt;/code&gt;) that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allocates IP addresses from a configured CIDR range (e.g., 10.244.0.0/16)&lt;/li&gt;
&lt;li&gt;Tracks which IPs are assigned to which pods&lt;/li&gt;
&lt;li&gt;Releases IPs when pods are deleted&lt;/li&gt;
&lt;li&gt;Prevents IP conflicts by ensuring each pod gets a unique IP&lt;/li&gt;
&lt;/ul&gt;
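&lt;p&gt;For instance, with kubeadm the CIDRs that IPAM allocates from are fixed at cluster creation time (a sketch; the subnet values shown are common defaults, not requirements):&lt;/p&gt;

```yaml
# kubeadm ClusterConfiguration (v1beta3), illustrative values
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16     # pod IPs are allocated from this range by IPAM
  serviceSubnet: 10.96.0.0/12  # Service ClusterIPs are allocated from this range
```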



&lt;p&gt;&lt;a id="how-do-pods-on-the-same-node-communicate"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: How do pods on the same node communicate?
&lt;/h4&gt;

&lt;p&gt;When two pods are on the same node, they communicate through the node's bridge. It's like two apartments in the same building: you write a letter to your neighbor, drop it in the building's mailroom (the bridge), and the mailroom delivers it straight to your neighbor's apartment. No postal service is needed; everything is handled within the building!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3eexfavaioei09iw8yhb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3eexfavaioei09iw8yhb.png" alt=" " width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a id="how-do-pods-on-different-nodes-communicate"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: How do pods on different nodes communicate?
&lt;/h4&gt;

&lt;p&gt;When pods are on different nodes, the CNI plugin uses an overlay network (VXLAN or Geneve). It's like sending a letter from one building to another across town. You write your letter (original packet) and put it in an inner envelope addressed to your friend's apartment (destination pod IP). The building's mailroom (CNI plugin) puts that inner envelope inside an outer envelope addressed to the destination building (node IP). The postal service (underlay network) delivers the outer envelope to the destination building, where the mailroom there opens it and delivers the inner envelope to your friend's apartment. Your friend never sees the outer envelope—they just receive your letter!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31dgnwm40b6bh28b3pkk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31dgnwm40b6bh28b3pkk.png" alt=" " width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a id="why-do-we-need-kubernetes-services"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: Why do we need Kubernetes Services?
&lt;/h4&gt;

&lt;p&gt;Pods are ephemeral — they can be created, destroyed, and moved. &lt;em&gt;Services&lt;/em&gt; provide a stable endpoint for pods. Instead of tracking individual pod IPs (which change constantly), applications use a Service IP that remains constant. kube-proxy maintains the mapping between Service IPs and pod IPs, automatically updating it as pods are created or destroyed.&lt;/p&gt;
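&lt;p&gt;A minimal Service manifest illustrating this mapping (a sketch; the &lt;code&gt;app: backend&lt;/code&gt; label and ports are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend      # pods with this label become the Service's endpoints
  ports:
  - port: 80          # stable Service port (ClusterIP:80)
    targetPort: 8080  # container port on the selected pods
```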



&lt;p&gt;&lt;a id="what-are-the-different-service-types"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: What are the different Service types?
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Service Type&lt;/th&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;th&gt;How It Works&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ClusterIP&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Internal communication&lt;/td&gt;
&lt;td&gt;Virtual IP (10.96.0.0/12) that kube-proxy routes to pod IPs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;NodePort&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;External access via node IP&lt;/td&gt;
&lt;td&gt;Opens a port (30000-32767) on all nodes, routes to ClusterIP&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;LoadBalancer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cloud provider integration&lt;/td&gt;
&lt;td&gt;Creates external load balancer, routes to NodePort&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ExternalName&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;External service alias&lt;/td&gt;
&lt;td&gt;DNS CNAME to external service&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Headless&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Direct pod access&lt;/td&gt;
&lt;td&gt;No ClusterIP, DNS returns pod IPs directly&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
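&lt;p&gt;Some of these differ by only a single field. A Headless Service, for example, is an ordinary Service with &lt;code&gt;clusterIP: None&lt;/code&gt; (a sketch; names and ports are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-headless
spec:
  clusterIP: None   # headless: no virtual IP, DNS returns the pod IPs directly
  selector:
    app: backend
  ports:
  - port: 8080
```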



&lt;p&gt;&lt;a id="how-does-clusterip-work"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: How does ClusterIP work?
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;When a Service is created, Kubernetes assigns it a ClusterIP (a virtual IP, e.g., 10.96.0.100) from the cluster's service CIDR (typically 10.96.0.0/12)&lt;/li&gt;
&lt;li&gt;kube-proxy on each node creates iptables rules that map the ClusterIP → Pod IPs (based on Endpoints/EndpointSlices)&lt;/li&gt;
&lt;li&gt;When traffic arrives at a node destined for the ClusterIP, kube-proxy's iptables rules perform DNAT (Destination NAT), rewriting the destination IP from the ClusterIP to a selected pod IP (load balanced across available pods)&lt;/li&gt;
&lt;li&gt;If the selected pod is on a different node, the overlay network (VXLAN/Geneve) handles routing the packet to the destination pod's node&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp00wb20bosydmma2iikk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp00wb20bosydmma2iikk.png" alt=" " width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a id="what-is-kube-proxy"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: What is kube-proxy?
&lt;/h4&gt;

&lt;p&gt;Kube-proxy manages Service-to-Pod connectivity. It is a network proxy that runs on each node and maintains network rules (usually via iptables or IPVS) to map Kubernetes Service virtual IPs to the actual Pod IPs assigned by the CNI. It has three modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;iptables mode&lt;/strong&gt; (default): Creates iptables rules for Service IP → Pod IP mapping. Fast and efficient, but rules can become large with many services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ipvs mode&lt;/strong&gt;: Uses Linux IPVS (IP Virtual Server) for load balancing. Better performance and scalability than iptables for large clusters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;userspace mode&lt;/strong&gt; (legacy): Proxy runs in userspace. Much slower; deprecated for years and removed in Kubernetes 1.26.&lt;/li&gt;
&lt;/ul&gt;
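&lt;p&gt;The mode is selected via the kube-proxy configuration (a sketch; on an existing cluster, changing it also requires restarting the kube-proxy DaemonSet):&lt;/p&gt;

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # "iptables" is used if left empty
ipvs:
  scheduler: "rr"   # IPVS load-balancing algorithm (round-robin)
```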



&lt;p&gt;&lt;a id="can-you-show-an-example-of-how-kube-proxy-redirects-service-traffic"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: Can you show an example of how kube-proxy redirects Service traffic?
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Service 10.96.0.100:80 → Pods 10.244.1.5:8080, 10.244.2.7:8080&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;iptables &lt;span class="nt"&gt;-t&lt;/span&gt; nat &lt;span class="nt"&gt;-L&lt;/span&gt; KUBE-SERVICES
Chain KUBE-SERVICES
KUBE-SVC-XXX  tcp  &lt;span class="nt"&gt;--&lt;/span&gt;  anywhere  10.96.0.100  tcp dpt:80

&lt;span class="nv"&gt;$ &lt;/span&gt;iptables &lt;span class="nt"&gt;-t&lt;/span&gt; nat &lt;span class="nt"&gt;-L&lt;/span&gt; KUBE-SVC-XXX
Chain KUBE-SVC-XXX
KUBE-SEP-AAA  all  &lt;span class="nt"&gt;--&lt;/span&gt;  anywhere  anywhere  statistic mode random probability 0.5
KUBE-SEP-BBB  all  &lt;span class="nt"&gt;--&lt;/span&gt;  anywhere  anywhere  &lt;span class="c"&gt;# remaining 50%&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;&lt;a id="why-do-we-need-endpoints"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: Why do we need Endpoints?
&lt;/h4&gt;

&lt;p&gt;Endpoints solve a fundamental problem in Kubernetes: &lt;strong&gt;Services have stable IPs, but Pods have ephemeral IPs that change constantly.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The problem:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Service provides a stable virtual IP (e.g., 10.96.0.100) that applications can use&lt;/li&gt;
&lt;li&gt;But pods are ephemeral—they get new IPs every time they start, restart, or move to a different node&lt;/li&gt;
&lt;li&gt;When a pod is created, destroyed, or scaled, its IP changes&lt;/li&gt;
&lt;li&gt;kube-proxy needs to know which actual pod IPs to route traffic to&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Without Endpoints:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;kube-proxy would have to constantly query the Kubernetes API to find pod IPs&lt;/li&gt;
&lt;li&gt;This would be inefficient and slow&lt;/li&gt;
&lt;li&gt;There would be no single source of truth for "which pods belong to this Service?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;With Endpoints:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes automatically creates and maintains an Endpoints resource for each Service&lt;/li&gt;
&lt;li&gt;The Endpoints resource lists all current pod IPs that match the Service selector&lt;/li&gt;
&lt;li&gt;kube-proxy watches the Endpoints resource (not individual pods)&lt;/li&gt;
&lt;li&gt;When pods change, the Endpoints resource is updated automatically&lt;/li&gt;
&lt;li&gt;kube-proxy gets notified of changes and updates its routing rules (iptables/IPVS)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufg06qmmpyi08ocr1kl2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufg06qmmpyi08ocr1kl2.png" alt=" " width="800" height="186"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a id="what-are-endpointslices"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: What are EndpointSlices?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;A newer, more scalable alternative to Endpoints (introduced in Kubernetes 1.16, GA in 1.21)&lt;/li&gt;
&lt;li&gt;Splits endpoints across multiple slice resources (up to 100 endpoints per slice)&lt;/li&gt;
&lt;li&gt;Reduces the size of individual resources, improving performance in large clusters&lt;/li&gt;
&lt;li&gt;Provides better scalability: a Service with 1000 pods creates ~10 EndpointSlices instead of 1 large Endpoints resource&lt;/li&gt;
&lt;li&gt;Includes additional metadata like topology hints (which zone/node pods are in)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example EndpointSlice:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;discovery.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;EndpointSlice&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend-abc123&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes.io/service-name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
&lt;span class="na"&gt;addressType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IPv4&lt;/span&gt;
&lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
  &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
&lt;span class="na"&gt;endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10.244.1.5"&lt;/span&gt;
  &lt;span class="na"&gt;conditions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ready&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;nodeName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node-1&lt;/span&gt;
  &lt;span class="na"&gt;zone&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-west-1a&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10.244.2.10"&lt;/span&gt;
  &lt;span class="na"&gt;conditions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ready&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;nodeName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node-2&lt;/span&gt;
  &lt;span class="na"&gt;zone&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-west-1b&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How kube-proxy uses them:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;kube-proxy watches Endpoints or EndpointSlices (EndpointSlices preferred in modern clusters)&lt;/li&gt;
&lt;li&gt;When endpoints change (pod created/destroyed), kube-proxy updates iptables or IPVS rules&lt;/li&gt;
&lt;li&gt;Rules map Service IP → Pod IPs, enabling load balancing across pods&lt;/li&gt;
&lt;li&gt;The watch mechanism ensures rules stay synchronized with actual pod state&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why EndpointSlices matter:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Smaller resources mean faster API server processing and less network traffic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Can handle services with thousands of pods without creating massive single resources&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Topology awareness&lt;/strong&gt;: Includes zone/node information for better routing decisions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Future-proof&lt;/strong&gt;: Foundation for advanced features like topology-aware routing&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;a id="what-is-coredns"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: What is CoreDNS?
&lt;/h4&gt;

&lt;p&gt;CoreDNS is the default DNS server in Kubernetes. It:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs as a Deployment in the &lt;code&gt;kube-system&lt;/code&gt; namespace&lt;/li&gt;
&lt;li&gt;Watches Kubernetes Services and Endpoints&lt;/li&gt;
&lt;li&gt;Automatically creates DNS records for all Services&lt;/li&gt;
&lt;li&gt;Resolves service names to Service IPs (ClusterIP)&lt;/li&gt;
&lt;li&gt;Supports custom DNS entries via ConfigMaps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a pod queries &lt;code&gt;backend.default.svc.cluster.local&lt;/code&gt;, CoreDNS returns the Service IP (e.g., 10.96.0.100), which kube-proxy then routes to an actual pod IP.&lt;/p&gt;
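&lt;p&gt;CoreDNS behavior is driven by a Corefile stored in a ConfigMap; the stock configuration looks roughly like this (a sketch of the common kubeadm default):&lt;/p&gt;

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf   # non-cluster names go to the upstream resolver
        cache 30
    }
```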




&lt;p&gt;&lt;a id="how-does-service-discovery-work-with-dns"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: How does service discovery work with DNS?
&lt;/h4&gt;

&lt;p&gt;Kubernetes provides DNS for services via CoreDNS. Applications can use service names (e.g., &lt;code&gt;backend.default.svc.cluster.local&lt;/code&gt;) instead of IP addresses. CoreDNS resolves service names to Service IPs, which kube-proxy then routes to pod IPs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6bvcqu0dxt3vi54wqxu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6bvcqu0dxt3vi54wqxu.png" alt=" " width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a id="what-is-the-dns-naming-format"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: What is the DNS naming format?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Format: &lt;code&gt;&amp;lt;service&amp;gt;.&amp;lt;namespace&amp;gt;.svc.cluster.local&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Short form: &lt;code&gt;&amp;lt;service&amp;gt;.&amp;lt;namespace&amp;gt;&lt;/code&gt; or just &lt;code&gt;&amp;lt;service&amp;gt;&lt;/code&gt; (same namespace)&lt;/li&gt;
&lt;li&gt;Example: &lt;code&gt;backend.default.svc.cluster.local&lt;/code&gt; → &lt;code&gt;10.96.0.100&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;a id="what-is-ingress-and-how-does-it-relate-to-services"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: What is Ingress and how does it relate to Services?
&lt;/h4&gt;

&lt;p&gt;Ingress provides HTTP/HTTPS routing from outside the cluster to Services. Unlike Services (which provide internal cluster networking), Ingress handles external access.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ingress Controller&lt;/strong&gt;: A reverse proxy (e.g., NGINX, Traefik, Envoy) that runs in the cluster and implements Ingress rules&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ingress Resource&lt;/strong&gt;: Defines routing rules (host, path → Service)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flow&lt;/strong&gt;: External request → Ingress Controller → Service → Pod&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ingress works &lt;em&gt;on top of&lt;/em&gt; Services—it routes external traffic to the appropriate Service, which then routes to pods. For advanced routing (canary, A/B testing, mTLS), service mesh is often used instead of or alongside Ingress.&lt;/p&gt;
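&lt;p&gt;A minimal Ingress resource tying a host and path to a Service (a sketch; it assumes an NGINX ingress controller is installed, and &lt;code&gt;app.example.com&lt;/code&gt; is illustrative):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx      # which controller implements this Ingress
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend      # traffic goes to this Service, then to pods
            port:
              number: 80
```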

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2mn8kxj15hsx7vy33d2z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2mn8kxj15hsx7vy33d2z.png" alt=" " width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a id="what-is-gateway-api"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: What is the Gateway API and how is it different from an Ingress controller?
&lt;/h4&gt;

&lt;p&gt;The Gateway API replaces the "one-size-fits-all" Ingress object with three distinct resources, each designed for a specific organizational role:&lt;br&gt;
&lt;strong&gt;GatewayClass (Infrastructure Provider)&lt;/strong&gt;: A cluster-scoped resource that defines a specific type of load balancer or proxy implementation (e.g., an AWS NLB, NGINX, or Istio).&lt;br&gt;
&lt;strong&gt;Gateway (Cluster Operator)&lt;/strong&gt;: An instantiation of a GatewayClass. It defines the actual entry point where traffic is received, including configuration for specific listeners (ports and protocols such as HTTP, HTTPS, TCP, or UDP) and TLS termination.&lt;br&gt;
&lt;strong&gt;HTTPRoute / GRPCRoute (App Developer)&lt;/strong&gt;: Namespace-scoped routing rules that define how traffic should be routed from a Gateway to backend Services based on hostnames, paths, or headers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Benefits over Ingress&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Built-in Advanced Routing&lt;/strong&gt;: Natively supports header-based matching, traffic splitting (canary rollouts), and request mirroring without requiring custom, non-portable annotations.&lt;br&gt;
&lt;strong&gt;Broader Protocol Support&lt;/strong&gt;: Beyond HTTP/HTTPS, it officially supports gRPC, TCP, UDP, and WebSockets.&lt;br&gt;
&lt;strong&gt;Separation of Concerns&lt;/strong&gt;: Teams can manage their own routing rules (via HTTPRoute) independently from the shared infrastructure (via Gateway), reducing the risk of accidental misconfigurations across a cluster.&lt;br&gt;
&lt;strong&gt;Portability&lt;/strong&gt;: As a standardized specification, configurations are portable between implementations (e.g., from Envoy Gateway to Traefik) without rewriting complex vendor-specific rules.&lt;br&gt;
&lt;strong&gt;Cross-Namespace Routing&lt;/strong&gt;: Allows a single Gateway to route traffic to Services in different namespaces securely through the use of ReferenceGrant objects.&lt;/p&gt;
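&lt;p&gt;An HTTPRoute demonstrating the traffic-splitting capability (a sketch; the Gateway name &lt;code&gt;shared-gateway&lt;/code&gt; and the canary Service are hypothetical):&lt;/p&gt;

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend-route
spec:
  parentRefs:
  - name: shared-gateway       # Gateway managed by the cluster operator
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:               # weighted split: 90/10 canary rollout
    - name: backend
      port: 80
      weight: 90
    - name: backend-canary     # hypothetical canary Service
      port: 80
      weight: 10
```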



&lt;p&gt;&lt;a id="what-are-network-policies"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: What are Network Policies?
&lt;/h4&gt;

&lt;p&gt;Network Policies allow you to control traffic between pods using label selectors. They act as pod-level firewalls, allowing or denying traffic based on source pod labels, destination pod labels, and ports. Network Policies are enforced by CNI plugins (Cilium, Calico) at the kernel level, before packets reach the pod.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figdu1n7j55ktssx2dd4i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figdu1n7j55ktssx2dd4i.png" alt=" " width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a id="can-you-show-an-example-networkpolicy"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: Can you show an example NetworkPolicy?
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NetworkPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;allow-frontend-to-backend&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;podSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
  &lt;span class="na"&gt;policyTypes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
  &lt;span class="na"&gt;ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;podSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This policy says: "Only pods labeled &lt;code&gt;app: frontend&lt;/code&gt; can talk to pods labeled &lt;code&gt;app: backend&lt;/code&gt; on port 8080."&lt;/p&gt;



&lt;p&gt;&lt;a id="how-are-network-policies-implemented"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: How are Network Policies implemented?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cilium&lt;/strong&gt;: Uses eBPF and Geneve TLV options for policy enforcement&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Calico&lt;/strong&gt;: Uses iptables rules and BGP for policy distribution&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flannel&lt;/strong&gt;: Does not support Network Policies (needs Calico or Cilium)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;NetworkPolicy vs service mesh authorization (multi-tenant isolation)&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;NetworkPolicy (CNI)&lt;/th&gt;
&lt;th&gt;Service mesh auth (sidecar)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Scope&lt;/td&gt;
&lt;td&gt;L3/L4 (IP, port)&lt;/td&gt;
&lt;td&gt;L7 (HTTP/gRPC) + identity&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enforcer&lt;/td&gt;
&lt;td&gt;CNI dataplane (node)&lt;/td&gt;
&lt;td&gt;Sidecar proxy (pod)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best for&lt;/td&gt;
&lt;td&gt;Blast-radius limits between namespaces/tenants; default-deny&lt;/td&gt;
&lt;td&gt;App-level allow/deny, mTLS, per-route rules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Debug with&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;kubectl get networkpolicies -A&lt;/code&gt;, &lt;code&gt;cilium monitor&lt;/code&gt;/&lt;code&gt;calicoctl&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Sidecar logs, &lt;code&gt;AuthorizationPolicy&lt;/code&gt;/&lt;code&gt;ServerAuthorization&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
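&lt;p&gt;The "default-deny" baseline mentioned in the table is typically expressed as a policy that selects every pod but allows nothing. A minimal sketch (the &lt;code&gt;prod&lt;/code&gt; namespace is a hypothetical tenant):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: prod
spec:
  podSelector: {}    # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress          # no ingress rules listed, so all inbound traffic is denied
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Teams then add narrower allow policies (like the frontend-to-backend example above) on top of this baseline.&lt;/p&gt;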

&lt;p&gt;&lt;a id="what-is-the-kubernetes-networking-stack"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Q: What is the Kubernetes networking stack?
&lt;/h4&gt;

&lt;p&gt;The Kubernetes networking stack is built in layers, with each layer providing specific functionality:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────┐
│  Service Mesh (L7)                  │  ← Identity, observability, traffic mgmt
│  (Istio, Linkerd)                   │
├─────────────────────────────────────┤
│  Services (kube-proxy)              │  ← Service discovery, load balancing
│  ClusterIP, NodePort, LoadBalancer  │
├─────────────────────────────────────┤
│  CNI Plugin (L2/L3)                 │  ← Pod networking, overlay networks
│  (Cilium, Calico, Flannel)          │
├─────────────────────────────────────┤
│  Linux Networking Primitives        │  ← Network namespaces, veth, bridges
└─────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key takeaways:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CNI plugins provide pod networking using native or overlay networks (VXLAN/Geneve)&lt;/li&gt;
&lt;li&gt;Services abstract pod IPs using virtual IPs and kube-proxy&lt;/li&gt;
&lt;li&gt;Network Policies control traffic between pods&lt;/li&gt;
&lt;li&gt;Service mesh adds Layer 7 capabilities on top of everything&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Part 2: Cilium Special
&lt;/h2&gt;

&lt;p&gt;&lt;a id="what-are-the-requirements-for-cilium-native-routing-mode"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: What are the requirements for Cilium native routing mode?
&lt;/h4&gt;

&lt;p&gt;Native routing mode eliminates overlay encapsulation, routing pod IPs directly through the underlying network. This requires:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Routable pod IPs&lt;/strong&gt;: Pod IPs (from the pod CIDR, e.g., 10.244.0.0/16) must be routable in your network infrastructure. The underlying network (routers, switches, cloud provider networking) must know how to route traffic to pod IPs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;BGP or static routes&lt;/strong&gt;: You need either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;BGP&lt;/strong&gt;: Cilium can use BGP to advertise pod CIDR routes to your network infrastructure (routers, cloud provider route tables)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Static routes&lt;/strong&gt;: Manually configure routes in your network infrastructure pointing pod CIDRs to Kubernetes nodes&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No IP conflicts&lt;/strong&gt;: Pod IPs must not conflict with existing IPs in your network.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;When to use native routing:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have control over network infrastructure (on-premises, custom cloud networking)&lt;/li&gt;
&lt;li&gt;You want maximum performance (no encapsulation overhead)&lt;/li&gt;
&lt;li&gt;Your network supports BGP or you can configure static routes&lt;/li&gt;
&lt;li&gt;Pod IPs can be made routable in your network&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to use overlay (Geneve/VXLAN):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud provider environments where pod IPs aren't routable&lt;/li&gt;
&lt;li&gt;You want simplicity (no BGP/static route configuration)&lt;/li&gt;
&lt;li&gt;Network infrastructure doesn't support routing pod CIDRs&lt;/li&gt;
&lt;li&gt;You need the rich metadata capabilities of Geneve TLV options&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;a id="how-does-cilium-use-bgp"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: How does Cilium use BGP?
&lt;/h4&gt;

&lt;p&gt;Cilium supports BGP for native routing mode, allowing it to advertise pod CIDR routes to network infrastructure without using overlay encapsulation. Each Cilium node runs a BGP daemon that advertises its pod CIDR to BGP peers (routers, cloud provider route tables), enabling the network infrastructure to learn pod IP routes and route traffic directly to pods without encapsulation overhead.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fterci3jxgkey0kg9eo3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fterci3jxgkey0kg9eo3k.png" alt=" " width="800" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use cases for Cilium BGP:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;On-premises deployments&lt;/strong&gt;: Advertise pod routes to physical routers/switches&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud environments&lt;/strong&gt;: Integrate with cloud provider route tables (AWS Route Tables, Azure Route Tables, GCP Routes)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hybrid cloud&lt;/strong&gt;: Connect on-premises and cloud networks via BGP&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Large-scale clusters&lt;/strong&gt;: Native routing performs better than overlay at scale&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration with existing BGP infrastructure&lt;/strong&gt;: Works with existing network equipment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;BGP vs overlay in Cilium:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;BGP (native routing)&lt;/strong&gt;: Better performance, no encapsulation overhead, requires BGP-capable infrastructure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Overlay (Geneve/VXLAN)&lt;/strong&gt;: Works everywhere, simpler setup, adds encapsulation overhead&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Configuration example:&lt;/strong&gt;&lt;br&gt;
Cilium can be configured to use BGP by enabling the BGP control plane and specifying BGP peers (routers or route reflectors). The BGP daemon then advertises pod CIDRs to peers, enabling native routing.&lt;/p&gt;
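&lt;p&gt;As a rough illustration of that configuration, here is a sketch of a &lt;code&gt;CiliumBGPPeeringPolicy&lt;/code&gt;. The node label, ASNs, and peer address are hypothetical, and the exact CRD fields vary across Cilium versions, so treat this as a shape rather than a drop-in manifest:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: rack0-peering
spec:
  nodeSelector:
    matchLabels:
      rack: rack0                  # hypothetical node label
  virtualRouters:
  - localASN: 64512
    exportPodCIDR: true            # advertise this node's pod CIDR to peers
    neighbors:
    - peerAddress: "10.0.0.1/32"   # hypothetical top-of-rack router
      peerASN: 64512
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;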




&lt;p&gt;&lt;a id="how-does-cilium-use-geneve-for-network-policy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: How does Cilium use Geneve for network policy?
&lt;/h4&gt;

&lt;p&gt;Cilium is a popular CNI plugin that uses Geneve overlay and eBPF for advanced networking. With Cilium and Geneve, when you send a letter, the building's security system (Cilium agent) checks your ID, looks up the security policy, and attaches security stickers to the outer envelope: "From: frontend-workload", "Policy: allow-frontend-to-backend", "Security Clearance: Level 3". When the letter arrives at the destination building, the security guard there reads the stickers, verifies the policy allows this communication, and only then delivers the letter. If the stickers don't match the policy, the letter is rejected!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9h7z071zb8t05s2md6x5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9h7z071zb8t05s2md6x5.png" alt=" " width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 3: Service Mesh and Workload Identity
&lt;/h2&gt;

&lt;p&gt;&lt;a id="what-problem-does-service-mesh-solve"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: What problem does service mesh solve?
&lt;/h4&gt;

&lt;p&gt;As microservices architectures became the norm in the 2010s, developers faced new challenges that infrastructure-layer networking (VLAN, VXLAN, Geneve) couldn't solve. While overlay networks could route packets between hosts using IP addresses, they operated at Layer 2/3 and couldn't help with application-layer concerns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fundamental problem:&lt;/strong&gt; In a microservices world, services are ephemeral—they start, stop, scale, and move constantly. IP addresses change. Network boundaries are fluid. Traditional networking assumptions broke down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this was painful:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service discovery&lt;/strong&gt;: You couldn't hardcode IPs because containers/pods get new IPs every time they restart or scale. Every service needed to implement its own service discovery (Consul, etcd, custom solutions), leading to inconsistency across teams. When "backend" had 10 instances, which one should you call? How do you load balance? Infrastructure networking could route packets, but couldn't answer "where is the backend service?"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security between services&lt;/strong&gt;: Every team was implementing their own TLS, authentication, and authorization logic. Some services used certificates, others used API keys, some had no security at all. This created security gaps, inconsistent implementations, and maintenance nightmares. When you had 50 microservices, you had 50 different security implementations to maintain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Observability&lt;/strong&gt;: When a user request failed, which service was the problem? Was it the frontend? The API gateway? The auth service? The database? There was no way to trace a request as it flowed through multiple services. Each service logged independently, but correlating logs across services was nearly impossible. You couldn't see the "big picture" of how services communicated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Traffic management&lt;/strong&gt;: Every service needed to implement retry logic, timeouts, circuit breakers, and load balancing. When backend was slow, frontend would retry—but how many times? With what backoff? What if backend was completely down—should you fail fast or keep retrying? Each team made different decisions, leading to cascading failures and inconsistent behavior.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Zero-trust security&lt;/strong&gt;: Traditional network security relied on firewalls and network boundaries: "trust everything inside the network, block everything outside." But in microservices, there is no "inside"—services move, IPs change, and the network boundary is meaningless. An attacker who compromised one service could access all services on the same network. You needed identity-based security: "trust based on who you are, not where you are."&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The breaking point:&lt;/strong&gt; Developers were writing the same networking code (retry logic, TLS, metrics, tracing, service discovery) in every microservice. This was expensive, error-prone, and inconsistent. Service mesh emerged around 2016-2018 (with projects like Linkerd and Istio) to solve these problems by moving networking concerns out of application code and into a dedicated infrastructure layer that worked transparently for all services.&lt;/p&gt;




&lt;p&gt;&lt;a id="what-is-a-service-mesh"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: What is a service mesh?
&lt;/h4&gt;

&lt;p&gt;A &lt;em&gt;service mesh&lt;/em&gt; is a dedicated infrastructure layer that handles service-to-service communication, security, observability, and traffic management for microservices. Unlike VLAN/VXLAN/Geneve which route packets at Layer 2/3 (infrastructure layer) using IP addresses, service mesh routes requests at Layer 7 (application layer) using service names and DNS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture:&lt;/strong&gt; A service mesh consists of a control plane (e.g., Istio's istiod) that distributes configuration to sidecar proxies (e.g., Envoy) running alongside each application container. The sidecars handle mTLS, routing, and observability transparently. For detailed architecture diagrams, see the &lt;a href="https://istio.io/latest/docs/ops/deployment/architecture/" rel="noopener noreferrer"&gt;Istio Architecture documentation&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;a id="how-does-service-mesh-integrate-with-kubernetes-networking"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: How does service mesh integrate with Kubernetes networking?
&lt;/h4&gt;

&lt;p&gt;Service mesh works &lt;em&gt;on top of&lt;/em&gt; Kubernetes networking, adding Layer 7 capabilities. The integration flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;App uses service name (&lt;code&gt;backend.default.svc.cluster.local&lt;/code&gt; or just &lt;code&gt;backend&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Sidecar resolves via Kubernetes DNS → Service IP (10.96.0.100)&lt;/li&gt;
&lt;li&gt;Service discovery → Pod IP (10.244.2.10)&lt;/li&gt;
&lt;li&gt;Overlay network (VXLAN/Geneve) routes packet to destination pod&lt;/li&gt;
&lt;li&gt;Service mesh adds mTLS, observability, traffic management&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Service mesh leverages Kubernetes service discovery and DNS, then adds identity-based security, request-level observability, and traffic management on top of the existing pod-to-pod networking.&lt;/p&gt;




&lt;p&gt;&lt;a id="how-does-service-mesh-relate-to-overlay-networks"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: How does service mesh relate to overlay networks?
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;💡 Key Insight&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Service mesh works &lt;em&gt;on top of&lt;/em&gt; overlay networks. You still need VXLAN/Geneve to route packets between hosts/containers at the infrastructure layer. Service mesh then adds Layer 7 routing and capabilities transparently to applications.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;While VLAN/VXLAN/Geneve are like the postal service that routes envelopes based on addresses (infrastructure layer), service mesh is like a &lt;strong&gt;smart assistant who reads your letter and routes it based on what you wrote&lt;/strong&gt; (application layer). You write "Send this to the accounting department" (service name), and the assistant looks up which building and apartment that is, puts your letter in the right envelope (with security and tracking), and sends it. The assistant also adds a return envelope with your identity certificate, so the recipient knows it's really from you. The postal service (overlay network) still delivers the physical envelope, but the assistant (service mesh) handles the "who talks to whom" and "is this allowed" logic.&lt;/p&gt;




&lt;p&gt;&lt;a id="what-is-GAMMA?"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: What is GAMMA?
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;GAMMA (Gateway API for Mesh Management and Administration)&lt;/strong&gt; is the initiative that extends the Gateway API to internal (East-West) traffic. It repurposes standard Gateway API Route objects (like HTTPRoute or GRPCRoute) by changing their parent reference from a Gateway to a Service.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Graduation&lt;/strong&gt;: GAMMA's support for service mesh use cases (East-West traffic) graduated to the Standard Channel (GA) starting with Gateway API v1.1.0 in 2024.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Core Feature&lt;/strong&gt;: The primary GAMMA feature—binding a Route (like HTTPRoute) directly to a Service as a parent—is fully stable and supported by major service meshes like Cilium, Istio, and Linkerd. &lt;/li&gt;
&lt;/ul&gt;
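&lt;p&gt;A sketch of the GAMMA pattern: the same HTTPRoute kind used for ingress, but with a Service as the parent, which makes the mesh apply the rule to East-West traffic. The &lt;code&gt;backend&lt;/code&gt; and &lt;code&gt;backend-v2&lt;/code&gt; Services are hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend-mesh-route
  namespace: prod
spec:
  parentRefs:
  - kind: Service      # binding to a Service (not a Gateway) marks this as a mesh route
    name: backend
    port: 8080
  rules:
  - backendRefs:
    - name: backend-v2 # in-cluster callers of "backend" are steered to backend-v2
      port: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;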




&lt;p&gt;&lt;a id="what-is-workload-identity"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: What is workload identity?
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;Workload Identity&lt;/em&gt; is a way to identify and authenticate workloads (containers, VMs, processes) using cryptographically verifiable certificates rather than IP addresses. Instead of saying "Allow mail from 10.0.0.5" (IP-based), we now say "Allow mail from the Accounting Department" (identity-based). Each workload gets a &lt;strong&gt;certificate&lt;/strong&gt; (like a company ID badge) that proves who they are. When you send a letter, you include a copy of your ID badge in the envelope. The recipient checks: "Is this person from Accounting? Yes, they're allowed to send me mail." It's like moving from checking return addresses (which can be faked) to checking photo IDs (which can't be easily forged).&lt;/p&gt;




&lt;p&gt;&lt;a id="what-is-the-evolution-of-workload-identity-and-what-problem-was-it-solving"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: What is the evolution of workload identity and what problem was it solving?
&lt;/h4&gt;

&lt;p&gt;Workload identity evolved to solve the fundamental problem that &lt;strong&gt;IP addresses are not a reliable way to identify workloads&lt;/strong&gt; in modern, dynamic environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The old way: IP-based security (pre-2010s)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Firewall rules: "Allow 10.0.0.5 to access database on port 5432"&lt;/li&gt;
&lt;li&gt;Network segmentation: "Everything in subnet 10.0.1.0/24 is trusted"&lt;/li&gt;
&lt;li&gt;This worked when servers had static IPs and rarely moved&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why IP-based security broke down:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Containers and VMs are ephemeral&lt;/strong&gt;: A container gets a new IP every time it starts. Your firewall rule for 10.0.0.5 is useless when the container restarts and gets 10.0.0.47.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scaling breaks rules&lt;/strong&gt;: When you scale from 1 backend instance to 10, you can't maintain firewall rules for each IP. You'd need to update rules constantly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workloads move&lt;/strong&gt;: A pod moves from Node A to Node B? Its IP changes. Your security rules break.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IPs can be spoofed&lt;/strong&gt;: An attacker who compromises one workload can spoof IPs to appear as another workload.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No context&lt;/strong&gt;: An IP address tells you nothing about &lt;em&gt;what&lt;/em&gt; the workload is or &lt;em&gt;who&lt;/em&gt; it belongs to. Is 10.0.0.5 the frontend? The backend? A test service? You can't tell from the IP.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The evolution:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service accounts (early 2010s)&lt;/strong&gt;: Platforms like Kubernetes introduced service accounts—a step toward identity, but platform-specific. Kubernetes service accounts only work in Kubernetes. AWS IAM roles only work in AWS. No portability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Workload identity (2018-present)&lt;/strong&gt;: Standards like &lt;a href="https://spiffe.io/" rel="noopener noreferrer"&gt;SPIFFE&lt;/a&gt; emerged to provide portable, verifiable workload identity. Each workload gets a certificate (SVID - SPIFFE Verifiable Identity Document) that proves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Who it is&lt;/strong&gt;: &lt;code&gt;spiffe://cluster.local/ns/prod/sa/frontend&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Where it came from&lt;/strong&gt;: The certificate is cryptographically signed, so it can't be forged&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What it can do&lt;/strong&gt;: Policies can be written in terms of identity, not IPs&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The breakthrough:&lt;/strong&gt; Instead of "Allow IP 10.0.0.5", you now say "Allow workloads with identity &lt;code&gt;spiffe://cluster.local/ns/prod/sa/frontend&lt;/code&gt;". The identity stays the same even when the IP changes. It works across platforms (Kubernetes, VMs, bare metal). It's cryptographically verifiable, so it can't be spoofed.&lt;/p&gt;




&lt;p&gt;&lt;a id="when-did-service-mesh-and-workload-identity-get-integrated"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: When did service mesh and workload identity get integrated?
&lt;/h4&gt;

&lt;p&gt;Service mesh and workload identity evolved separately at first, then became tightly integrated:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;2016-2017: Early service meshes&lt;/strong&gt; (Linkerd 1.0 in 2016, Istio 0.1 in 2017) initially used platform-specific identities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes service accounts in Kubernetes environments&lt;/li&gt;
&lt;li&gt;No standard identity format&lt;/li&gt;
&lt;li&gt;Identity was tied to the platform&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;2017-2018: SPIFFE emerges&lt;/strong&gt;: The SPIFFE project started in 2017 to create a standard for workload identity that works across platforms. SPIFFE provided the foundation, but service meshes weren't using it yet.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;2019-2020: The integration begins&lt;/strong&gt;: Service meshes started adopting SPIFFE:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Istio 1.4 (2019)&lt;/strong&gt;: Added SPIFFE integration, allowing Istio to issue SPIFFE SVIDs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Linkerd 2.7+ (2020)&lt;/strong&gt;: Integrated SPIFFE for workload identity&lt;/li&gt;
&lt;li&gt;This was the "marriage" - service meshes could now use standardized, portable workload identity&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;2020-present: Deep integration&lt;/strong&gt;: Modern service meshes are built around workload identity:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identity is no longer optional—it's core to how service mesh works&lt;/li&gt;
&lt;li&gt;mTLS uses workload identity certificates (SVIDs)&lt;/li&gt;
&lt;li&gt;Authorization policies are written in terms of workload identity&lt;/li&gt;
&lt;li&gt;Works across Kubernetes, VMs, and bare metal&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why the integration matters:&lt;/strong&gt; Before SPIFFE integration, service meshes were platform-locked. A Kubernetes service account identity couldn't be verified by a VM-based service. With SPIFFE, the same workload identity works everywhere, enabling true multi-platform service mesh deployments and zero-trust security across heterogeneous environments.&lt;/p&gt;




&lt;p&gt;&lt;a id="what-are-the-identity-models-used-by-service-mesh"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: What are the identity models used by service mesh?
&lt;/h4&gt;

&lt;p&gt;Service meshes support three main identity models, each with different trade-offs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SPIFFE/SVID&lt;/strong&gt;: &lt;a href="https://spiffe.io/" rel="noopener noreferrer"&gt;SPIFFE&lt;/a&gt; (Secure Production Identity Framework for Everyone) provides a standard, portable workload identity via SVIDs (SPIFFE Verifiable Identity Documents). SPIFFE identities are cryptographically verifiable certificates that work across platforms (Kubernetes, VMs, bare metal). This is the most portable and future-proof approach. Used by Istio (with SPIFFE integration) and Linkerd. &lt;strong&gt;Learn more:&lt;/strong&gt; &lt;a href="https://spiffe.io/docs/latest/" rel="noopener noreferrer"&gt;SPIFFE Documentation&lt;/a&gt;, &lt;a href="https://github.com/spiffe/spiffe/blob/main/standards/SPIFFE.md" rel="noopener noreferrer"&gt;SPIFFE Specification&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Platform service accounts&lt;/strong&gt;: Service meshes can use platform-specific service accounts (e.g., Kubernetes service accounts, AWS IAM roles, Azure managed identities) as workload identity. This is simpler to set up but ties you to a specific platform—a Kubernetes service account identity can't be verified by a VM-based service. Good for single-platform deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes service accounts&lt;/strong&gt; are the most common example. Each pod can be assigned a service account, and the service mesh uses this to identify the workload. The service account name (e.g., &lt;code&gt;frontend-sa&lt;/code&gt;) becomes part of the workload identity. However, this only works within Kubernetes—you can't use a Kubernetes service account identity to authenticate to a VM-based service. &lt;strong&gt;Learn more:&lt;/strong&gt; &lt;a href="https://kubernetes.io/docs/concepts/security/service-accounts/" rel="noopener noreferrer"&gt;Kubernetes Service Accounts&lt;/a&gt;, &lt;a href="https://istio.io/latest/docs/ops/best-practices/security/#configure-service-accounts" rel="noopener noreferrer"&gt;Using Service Accounts with Istio&lt;/a&gt;, &lt;a href="https://linkerd.io/2.14/features/automatic-mtls/#service-account-identity" rel="noopener noreferrer"&gt;Linkerd Service Account Identity&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Custom identity providers&lt;/strong&gt;: Some meshes integrate with cloud provider IAM (AWS IAM, Azure AD, GCP IAM) or custom identity systems. This allows leveraging existing identity infrastructure but requires custom integration work and may not be portable across platforms. &lt;strong&gt;Learn more:&lt;/strong&gt; &lt;a href="https://docs.aws.amazon.com/app-mesh/latest/userguide/security_iam.html" rel="noopener noreferrer"&gt;AWS App Mesh IAM Integration&lt;/a&gt;, &lt;a href="https://learn.microsoft.com/en-us/azure/service-mesh/overview" rel="noopener noreferrer"&gt;Azure Service Mesh Identity&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;a id="what-are-the-constraints-of-service-mesh"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: What are the constraints of service mesh?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;CPU overhead&lt;/em&gt;: Every request passes through a proxy, consuming extra CPU on every node&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Complexity&lt;/em&gt;: Debugging distributed systems with sidecars is harder; it requires understanding both application and mesh behavior&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Latency&lt;/em&gt;: Each proxy hop adds roughly 1-5 ms per request (often acceptable given the benefits)&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Resource consumption&lt;/em&gt;: Each workload requires a sidecar proxy, increasing memory and CPU usage&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;a id="do-i-need-service-mesh"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: Do I need service mesh?
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Short answer: It depends.&lt;/strong&gt; Service mesh solves real problems, but it's not always the right solution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The pragmatic approach:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Start simple&lt;/strong&gt;: Use API gateways, load balancers, and basic monitoring first&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add service mesh when you feel the pain&lt;/strong&gt;: When you find yourself writing the same networking code in every service, or when observability becomes impossible&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consider alternatives&lt;/strong&gt;: API gateways (Kong, Ambassador) can provide some service mesh features without the complexity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Evaluate the trade-offs&lt;/strong&gt;: Service mesh adds complexity and overhead. Make sure the benefits (security, observability, traffic management) justify the cost&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Remember&lt;/strong&gt;: Service mesh is infrastructure. Like any infrastructure, it should solve problems you actually have, not problems you might have someday. If you're not experiencing the pain points service mesh solves, you probably don't need it yet.&lt;/p&gt;




&lt;p&gt;&lt;a id="can-you-walk-through-an-example-of-frontend-calling-backend-using-identity"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: Can you walk through an example of frontend calling backend using identity?
&lt;/h4&gt;

&lt;p&gt;This section details the &lt;strong&gt;complete packet journey&lt;/strong&gt; in a Kubernetes cluster augmented with a service mesh, illustrating the interplay of all the components discussed so far. The diagram below shows the flow using SPIFFE identities for workload authentication:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55qb1m58wm7wftesc2o4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55qb1m58wm7wftesc2o4.png" alt=" " width="800" height="877"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a id="whats-the-difference-between-ip-based-and-identity-based-security-in-kubernetes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: What's the difference between IP-based and identity-based security in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;The key principle: &lt;em&gt;identity-based security&lt;/em&gt; replaces IP-based security in modern Kubernetes deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Old (IP-based):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allow 10.0.0.5&lt;/li&gt;
&lt;li&gt;Deny 10.0.0.6&lt;/li&gt;
&lt;li&gt;Network Policies based on pod IPs (which change constantly)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;New (Identity-based):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allow spiffe://cluster.local/ns/prod/sa/frontend&lt;/li&gt;
&lt;li&gt;Deny spiffe://cluster.local/ns/dev/*&lt;/li&gt;
&lt;li&gt;Authorization policies based on workload identity (which stays constant)&lt;/li&gt;
&lt;/ul&gt;
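&lt;p&gt;To make the contrast concrete, here's a minimal Python sketch of identity-based allow/deny using wildcard matching over SPIFFE IDs. The evaluation shown is purely illustrative; a real mesh enforces policy inside the proxy, not like this.&lt;/p&gt;

```python
from fnmatch import fnmatch

# Illustrative policy: allow/deny rules keyed by SPIFFE ID patterns,
# not by pod IPs. Identities survive pod restarts; IPs do not.
ALLOW = ["spiffe://cluster.local/ns/prod/sa/frontend"]
DENY = ["spiffe://cluster.local/ns/dev/*"]

def is_allowed(spiffe_id: str) -> bool:
    # Deny rules take precedence in this toy evaluator.
    if any(fnmatch(spiffe_id, p) for p in DENY):
        return False
    return any(fnmatch(spiffe_id, p) for p in ALLOW)

print(is_allowed("spiffe://cluster.local/ns/prod/sa/frontend"))  # True
print(is_allowed("spiffe://cluster.local/ns/dev/sa/frontend"))   # False
```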




&lt;p&gt;&lt;a id="where-do-i-create-identity-based-security-rules-and-who-enforces-them-in-kubernetes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: Where do I create identity-based security rules and who enforces them in Kubernetes?
&lt;/h4&gt;

&lt;p&gt;Identity-based security rules are created as &lt;strong&gt;Kubernetes custom resources&lt;/strong&gt; (defined by CRDs) and enforced by the service mesh data plane (sidecar proxies).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where rules are created:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Istio&lt;/strong&gt;: Create &lt;code&gt;AuthorizationPolicy&lt;/code&gt; resources in Kubernetes:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;security.istio.io/v1beta1&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AuthorizationPolicy&lt;/span&gt;
  &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;allow-frontend-to-backend&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prod&lt;/span&gt;
  &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ALLOW&lt;/span&gt;
    &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;principals&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cluster.local/ns/prod/sa/frontend"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Linkerd&lt;/strong&gt;: Create &lt;code&gt;Server&lt;/code&gt; and &lt;code&gt;ServerAuthorization&lt;/code&gt; resources:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;policy.linkerd.io/v1beta2&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServerAuthorization&lt;/span&gt;
  &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend-authz&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prod&lt;/span&gt;
  &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;meshTLS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;identities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;frontend.prod.serviceaccount.identity.linkerd.cluster.local"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;General pattern&lt;/strong&gt;: Rules are defined as YAML manifests and applied via &lt;code&gt;kubectl apply&lt;/code&gt;, just like any Kubernetes resource.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Who enforces them:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Service mesh control plane&lt;/strong&gt; (istiod, Linkerd control plane): Distributes the rules to all sidecar proxies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sidecar proxies&lt;/strong&gt; (Envoy, Linkerd proxy): Enforce the rules at runtime—they intercept traffic, verify workload identity, and allow/deny requests based on the policies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No application code changes&lt;/strong&gt;: The application doesn't know about the rules—the sidecar proxy handles enforcement transparently&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example flow:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You create an &lt;code&gt;AuthorizationPolicy&lt;/code&gt; with &lt;code&gt;kubectl apply&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Istio control plane (istiod) reads the policy and distributes it to all Envoy sidecars&lt;/li&gt;
&lt;li&gt;When frontend tries to call backend, Envoy sidecar checks: "Does this request come from &lt;code&gt;spiffe://cluster.local/ns/prod/sa/frontend&lt;/code&gt;?"&lt;/li&gt;
&lt;li&gt;Envoy looks up the policy: "Yes, frontend is allowed to talk to backend" → Request proceeds&lt;/li&gt;
&lt;li&gt;If the identity doesn't match, Envoy rejects the request with HTTP 403&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Key insight&lt;/strong&gt;: The rules are Kubernetes resources (like Deployments or Services), but enforcement happens in the service mesh data plane (sidecar proxies), not in Kubernetes itself. This gives you identity-based security without modifying application code.&lt;/p&gt;




&lt;p&gt;&lt;a id="what-is-the-complete-packet-flow-from-app-to-app-in-kubernetes-with-service-mesh"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: What is the complete packet flow from app to app via overlay network in Kubernetes with service mesh?
&lt;/h4&gt;

&lt;p&gt;Here's the complete journey of a packet from one pod to another:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7577l9fscisv7ljfk8mi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7577l9fscisv7ljfk8mi.png" alt=" " width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Service Mesh and Identity
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://spiffe.io/" rel="noopener noreferrer"&gt;SPIFFE&lt;/a&gt;: Secure Production Identity Framework for Everyone&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://istio.io/" rel="noopener noreferrer"&gt;Istio&lt;/a&gt;: Connect, Secure, Control, and Observe Services&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://linkerd.io/" rel="noopener noreferrer"&gt;Linkerd&lt;/a&gt;: Ultra Lightweight Service Mesh for Kubernetes&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Container Networking
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/containernetworking/cni/blob/master/SPEC.md" rel="noopener noreferrer"&gt;CNI Specification&lt;/a&gt;: Container Network Interface&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cilium.io/" rel="noopener noreferrer"&gt;Cilium&lt;/a&gt;: eBPF-based Networking, Security, and Observability&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;This article is part of the "Learning in a Hurry" series, designed to help engineers quickly understand complex technical concepts through analogies and practical examples.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Networking in a Hurry: From ARP to Geneve(Q&amp;A Format)</title>
      <dc:creator>Yuva</dc:creator>
      <pubDate>Thu, 01 Jan 2026 21:31:26 +0000</pubDate>
      <link>https://forem.com/ypeavler/networking-in-a-hurry-from-arp-to-geneveqa-format-59l4</link>
      <guid>https://forem.com/ypeavler/networking-in-a-hurry-from-arp-to-geneveqa-format-59l4</guid>
      <description>&lt;p&gt;&lt;em&gt;Understanding modern cloud networking through the lens of envelopes, mailrooms, and postal services&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;I have spent quite a few days debugging Kubernetes networking issues, and I realized I had gaps in my understanding of basic networking components and terms. So I went on a mission with my AI friend to ask every question I could think of to improve my mental model of what goes on beneath the orchestrator. Below is the documentation of all that learning.&lt;/p&gt;

&lt;p&gt;Skip to the &lt;a href="https://ypeavler.github.io/blog/2026/01/01/networking-basics-quiz.html" rel="noopener noreferrer"&gt;quiz&lt;/a&gt; if needed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 1: The Fundamentals
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The OSI Model
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Q: What is the OSI model and why do I need to know it?
&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://en.wikipedia.org/wiki/OSI_model" rel="noopener noreferrer"&gt;OSI (Open Systems Interconnection) model&lt;/a&gt; divides networking into seven layers. Each layer only communicates with the layers directly above and below it. It's your mental map for understanding how networking works.&lt;/p&gt;

&lt;h4&gt;
  
  
  Q: What are the seven layers of the OSI model?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Layer 7: Application&lt;/strong&gt; — HTTP, DNS, SSH&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Layer 6: Presentation&lt;/strong&gt; — TLS/SSL, Compression&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Layer 5: Session&lt;/strong&gt; — Connection management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Layer 4: Transport&lt;/strong&gt; — TCP, UDP (Ports)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Layer 3: Network&lt;/strong&gt; — IP (Routing)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Layer 2: Data Link&lt;/strong&gt; — Ethernet, MAC (Switching)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Layer 1: Physical&lt;/strong&gt; — Cables, Radio, Fiber&lt;/li&gt;
&lt;/ul&gt;




&lt;h4&gt;
  
  
  Q: What's the TCP/IP model?
&lt;/h4&gt;

&lt;p&gt;For practical purposes, we often use the simpler &lt;em&gt;TCP/IP model&lt;/em&gt; which combines some layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Application Layer&lt;/strong&gt; — HTTP, DNS, SSH&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transport Layer&lt;/strong&gt; — TCP, UDP&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internet Layer&lt;/strong&gt; — IP, ICMP&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Access Layer&lt;/strong&gt; — Ethernet, Wi-Fi&lt;/li&gt;
&lt;/ul&gt;




&lt;h4&gt;
  
  
  Q: What does "L2 over L3 tunneling" mean?
&lt;/h4&gt;

&lt;p&gt;Normally, Layer 3 packets (IP packets) are encapsulated inside Layer 2 frames (Ethernet frames). This is the standard way networking works: an IP packet gets wrapped in an Ethernet frame with MAC addresses, and the frame is delivered to the next hop.&lt;/p&gt;

&lt;p&gt;"L2 over L3 tunneling" reverses this: it wraps an entire Ethernet frame (Layer 2) inside an IP packet (Layer 3). This is what technologies like VXLAN and Geneve do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why is this useful?&lt;/strong&gt; It allows you to create virtual Layer 2 networks that span across Layer 3 infrastructure. For example, you can have two VMs in different data centers that appear to be on the same Layer 2 network, even though they're separated by routers and IP networks. The original Ethernet frame (with its MAC addresses) is preserved inside the IP packet, allowing Layer 2 protocols and features to work across the tunnel.&lt;/p&gt;




&lt;h3&gt;
  
  
  Layer 2: Getting to Your Neighbor
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Q: What is Layer 2 about?
&lt;/h4&gt;

&lt;p&gt;Layer 2 is about communication within a &lt;em&gt;local network segment&lt;/em&gt;—devices that can reach each other without going through a router.&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What is a MAC address?
&lt;/h4&gt;

&lt;p&gt;Every network interface card (NIC) has a unique 48-bit &lt;em&gt;MAC address&lt;/em&gt; (Media Access Control), written as six pairs of hex digits like &lt;code&gt;00:1A:2B:3C:4D:5E&lt;/code&gt;. The first 3 bytes identify the manufacturer (OUI), and the last 3 bytes are unique to the device.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Examples:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;00:50:56:xx:xx:xx&lt;/code&gt; → VMware&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;02:42:xx:xx:xx:xx&lt;/code&gt; → Docker&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;52:54:00:xx:xx:xx&lt;/code&gt; → QEMU/KVM&lt;/li&gt;
&lt;/ul&gt;
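&lt;p&gt;A small sketch of how the two halves split, with a tiny illustrative prefix table (not a real OUI registry):&lt;/p&gt;

```python
# Minimal sketch: map a MAC address to its manufacturer by matching the
# OUI (first bytes). The prefix table below mirrors the examples above
# and is illustrative only, not a complete OUI registry.
OUI_TABLE = {
    "00:50:56": "VMware",
    "02:42": "Docker",      # locally administered prefix Docker uses
    "52:54:00": "QEMU/KVM",
}

def vendor(mac: str) -> str:
    mac = mac.lower()
    for prefix, name in OUI_TABLE.items():
        if mac.startswith(prefix):
            return name
    return "unknown"

print(vendor("02:42:ac:11:00:02"))  # Docker
print(vendor("52:54:00:12:34:56"))  # QEMU/KVM
```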




&lt;h4&gt;
  
  
  Q: What is ARP and why do I need it?
&lt;/h4&gt;

&lt;p&gt;When Host A wants to send a packet to Host B (same subnet), it knows B's IP address but not its MAC address. &lt;em&gt;ARP (Address Resolution Protocol)&lt;/em&gt; solves this.&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: How does ARP work?
&lt;/h4&gt;

&lt;p&gt;Think of IP addresses as street addresses and MAC addresses as the actual mailbox. Before you can deliver a letter, you need to know which mailbox (MAC) belongs to that address (IP). ARP is like shouting down the street: "Who lives at 192.168.1.20?" and waiting for the owner to respond with their mailbox number.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73t2kvkbixs8yp8yd3if.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73t2kvkbixs8yp8yd3if.png" alt=" " width="800" height="499"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: How do I view my ARP cache?
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ip neigh
192.168.1.1 dev eth0 lladdr 00:11:22:33:44:55 REACHABLE
192.168.1.20 dev eth0 lladdr bb:bb:bb:bb:bb:bb STALE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h4&gt;
  
  
  Q: What is a switch and how does it work?
&lt;/h4&gt;

&lt;p&gt;A &lt;em&gt;switch&lt;/em&gt; is a Layer 2 device that learns which MAC addresses are on which ports by observing traffic. A switch is like a smart mail carrier who learns the neighborhood. When you send a letter, the carrier looks at the return address (source MAC) and remembers "this person lives on Elm Street." When mail arrives for that person, the carrier knows exactly which street to go to, instead of delivering to every house.&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: How does MAC learning work on a switch?
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqchqyxsh5pij8iguz0iv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqchqyxsh5pij8iguz0iv.png" alt=" " width="790" height="604"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Layer 3: Getting Across Town
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Q: What is Layer 3 about?
&lt;/h4&gt;

&lt;p&gt;Layer 3 is about communication &lt;em&gt;between networks&lt;/em&gt;—when you need to go beyond your local segment.&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What is an IP address?
&lt;/h4&gt;

&lt;p&gt;An &lt;em&gt;IP address&lt;/em&gt; is a unique identifier assigned to each device on a network. There are two versions in use today: IPv4 and IPv6.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IPv4 (Internet Protocol version 4):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A 32-bit number, written as four octets (8 bits each) separated by dots&lt;/li&gt;
&lt;li&gt;Example: &lt;code&gt;192.168.1.100&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Each octet ranges from 0-255&lt;/li&gt;
&lt;li&gt;Total address space: 2^32 = 4,294,967,296 addresses (~4.3 billion)&lt;/li&gt;
&lt;li&gt;Format: &lt;code&gt;xxx.xxx.xxx.xxx&lt;/code&gt; where each xxx is 0-255&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;IPv6 (Internet Protocol version 6):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A 128-bit number, written as eight groups of four hexadecimal digits separated by colons&lt;/li&gt;
&lt;li&gt;Example: &lt;code&gt;2001:0db8:85a3:0000:0000:8a2e:0370:7334&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Can be shortened by dropping leading zeros and collapsing the longest run of all-zero groups to &lt;code&gt;::&lt;/code&gt;: &lt;code&gt;2001:db8:85a3::8a2e:370:7334&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Total address space: 2^128 = 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses (~340 undecillion)&lt;/li&gt;
&lt;li&gt;Format: &lt;code&gt;xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx&lt;/code&gt; where each xxxx is 0-FFFF&lt;/li&gt;
&lt;/ul&gt;
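&lt;p&gt;Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module applies these shortening rules for you:&lt;/p&gt;

```python
import ipaddress

# The shortening rules above (drop leading zeros, collapse the longest
# run of zero groups to "::") are what ipaddress applies automatically.
full = "2001:0db8:85a3:0000:0000:8a2e:0370:7334"
addr = ipaddress.IPv6Address(full)
print(addr.compressed)  # 2001:db8:85a3::8a2e:370:7334
print(addr.exploded)    # 2001:0db8:85a3:0000:0000:8a2e:0370:7334
```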

&lt;p&gt;&lt;strong&gt;Why do we need IPv6?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;IPv4 address exhaustion is the primary driver. With only ~4.3 billion addresses and billions of devices (computers, phones, IoT devices, servers), we've run out of public IPv4 addresses. This has led to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;NAT (Network Address Translation) overuse&lt;/strong&gt;: Multiple devices sharing one public IP, which breaks the end-to-end principle of the internet&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Address scarcity&lt;/strong&gt;: Organizations paying premium prices for IPv4 address blocks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complexity&lt;/strong&gt;: Multiple layers of NAT making networking harder to troubleshoot&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;IPv6 solves this by providing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vast address space&lt;/strong&gt;: Enough addresses for every device on Earth (and trillions more)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplified networking&lt;/strong&gt;: No NAT needed—every device can have a globally routable address&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better performance&lt;/strong&gt;: Simpler packet headers, more efficient routing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Built-in security&lt;/strong&gt;: IPsec support was mandatory in the original IPv6 specification (later relaxed to recommended)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-configuration&lt;/strong&gt;: Devices can automatically configure their addresses (SLAAC)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better mobile support&lt;/strong&gt;: Improved handling of devices moving between networks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The transition:&lt;/strong&gt; While IPv6 is the future, we're in a transition period. Most networks support both (dual-stack), allowing devices to use either protocol. IPv4 will likely remain in use for decades due to legacy systems, but new deployments increasingly prioritize IPv6.&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What is a subnet and how does CIDR notation work?
&lt;/h4&gt;

&lt;p&gt;A &lt;em&gt;subnet&lt;/em&gt; defines which part of the address is the "network" and which is the "host". In CIDR notation &lt;code&gt;192.168.1.100/24&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network portion: first 24 bits (192.168.1)&lt;/li&gt;
&lt;li&gt;Host portion: last 8 bits (.100)&lt;/li&gt;
&lt;li&gt;Subnet mask: 255.255.255.0&lt;/li&gt;
&lt;li&gt;Network: 192.168.1.0&lt;/li&gt;
&lt;li&gt;Broadcast: 192.168.1.255&lt;/li&gt;
&lt;li&gt;Usable hosts: 254&lt;/li&gt;
&lt;/ul&gt;
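&lt;p&gt;Each of the fields above can be computed with the standard &lt;code&gt;ipaddress&lt;/code&gt; module:&lt;/p&gt;

```python
import ipaddress

# The fields listed above fall straight out of ipaddress.
# strict=False lets us pass a host address (192.168.1.100) rather
# than the network address itself.
net = ipaddress.ip_network("192.168.1.100/24", strict=False)
print(net.network_address)    # 192.168.1.0
print(net.broadcast_address)  # 192.168.1.255
print(net.netmask)            # 255.255.255.0
print(net.num_addresses - 2)  # 254 usable hosts
```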




&lt;h4&gt;
  
  
  Q: What are common subnet sizes?
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;CIDR&lt;/th&gt;
&lt;th&gt;Subnet Mask&lt;/th&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;th&gt;Hosts&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;/8&lt;/td&gt;
&lt;td&gt;255.0.0.0&lt;/td&gt;
&lt;td&gt;Large enterprise (10.x.x.x)&lt;/td&gt;
&lt;td&gt;16 million&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/16&lt;/td&gt;
&lt;td&gt;255.255.0.0&lt;/td&gt;
&lt;td&gt;Medium network (172.16.x.x)&lt;/td&gt;
&lt;td&gt;65,534&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/24&lt;/td&gt;
&lt;td&gt;255.255.255.0&lt;/td&gt;
&lt;td&gt;Typical LAN&lt;/td&gt;
&lt;td&gt;254&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/32&lt;/td&gt;
&lt;td&gt;255.255.255.255&lt;/td&gt;
&lt;td&gt;Single host&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h4&gt;
  
  
  Q: How do you calculate the number of hosts in a subnet?
&lt;/h4&gt;

&lt;p&gt;The formula is: &lt;strong&gt;2^(host bits) - 2&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Host bits&lt;/strong&gt; = 32 - CIDR prefix (for IPv4)&lt;/li&gt;
&lt;li&gt;Subtract 2 because the network address (all zeros) and broadcast address (all ones) cannot be assigned to hosts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Examples:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;/24 subnet:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Host bits: 32 - 24 = 8 bits&lt;/li&gt;
&lt;li&gt;Total addresses: 2^8 = 256&lt;/li&gt;
&lt;li&gt;Usable hosts: 256 - 2 = &lt;strong&gt;254&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;/16 subnet:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Host bits: 32 - 16 = 16 bits&lt;/li&gt;
&lt;li&gt;Total addresses: 2^16 = 65,536&lt;/li&gt;
&lt;li&gt;Usable hosts: 65,536 - 2 = &lt;strong&gt;65,534&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why not 254 × 254?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A common misconception is that /16 = 254 × 254 = 64,516. This is incorrect because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In a /16 subnet, the host portion is the &lt;strong&gt;last 16 bits&lt;/strong&gt; (the last two octets combined)&lt;/li&gt;
&lt;li&gt;This gives us 2^16 = 65,536 total addresses, not 254 × 254&lt;/li&gt;
&lt;li&gt;The 254 × 254 calculation wrongly excludes 0 and 255 from each octet separately, as if the two host octets were independent host fields; that is not how /16 works&lt;/li&gt;
&lt;li&gt;In a /16, all 16 host bits are used together as one address space&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;/8 subnet:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Host bits: 32 - 8 = 24 bits&lt;/li&gt;
&lt;li&gt;Total addresses: 2^24 = 16,777,216&lt;/li&gt;
&lt;li&gt;Usable hosts: 16,777,216 - 2 = &lt;strong&gt;16,777,214&lt;/strong&gt; (often rounded to "16 million")&lt;/li&gt;
&lt;/ul&gt;
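&lt;p&gt;The formula in a few lines of Python:&lt;/p&gt;

```python
# 2^(host bits) - 2, as derived above (IPv4 only).
def usable_hosts(prefix: int) -> int:
    host_bits = 32 - prefix
    # Subtract the network address and the broadcast address.
    return 2 ** host_bits - 2

print(usable_hosts(24))  # 254
print(usable_hosts(16))  # 65534
print(usable_hosts(8))   # 16777214
```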




&lt;h4&gt;
  
  
  Q: How does a host decide if a destination is local or remote?
&lt;/h4&gt;

&lt;p&gt;When a host wants to send a packet, it first asks: &lt;em&gt;"Is the destination on my local network?"&lt;/em&gt; Before sending a letter, you check: "Is this address on my street?" If yes, you just walk over and deliver it yourself (ARP and direct delivery). If no, you drop it in the mailbox for the postal service to handle (send to default gateway/router). You don't need to know the entire postal system—just whether it's local or needs to go through the post office!&lt;/p&gt;
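&lt;p&gt;The "is this address on my street?" check is just a subnet membership test; a minimal sketch:&lt;/p&gt;

```python
import ipaddress

# A host compares the destination against its own subnet. If it
# matches, deliver directly; otherwise hand off to the gateway.
my_net = ipaddress.ip_network("192.168.1.0/24")

def next_hop(dst: str) -> str:
    if ipaddress.ip_address(dst) in my_net:
        return "deliver directly (ARP for the MAC)"
    return "send to default gateway"

print(next_hop("192.168.1.20"))  # deliver directly (ARP for the MAC)
print(next_hop("10.0.5.100"))    # send to default gateway
```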




&lt;h4&gt;
  
  
  Q: What is a router and how does routing work?
&lt;/h4&gt;

&lt;p&gt;A &lt;em&gt;router&lt;/em&gt; connects multiple networks. It uses a &lt;em&gt;routing table&lt;/em&gt; to decide where to send each packet. A router is like a post office sorting facility. When a letter arrives, the postal worker looks at the destination address (IP) and checks the routing table—a big directory that says "letters for 10.0.5.0 go to the downtown post office, letters for 10.0.1.0 go to the local branch." The router doesn't change the address on your envelope (IP stays the same), but it knows which "next post office" (next hop) to send it to.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example routing table:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;10.0.1.0/24&lt;/code&gt; → eth0 (directly connected)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;10.0.2.0/24&lt;/code&gt; → eth1 (directly connected)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;10.0.5.0/24&lt;/code&gt; → via 10.0.2.254 (next hop)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;0.0.0.0/0&lt;/code&gt; → via 203.0.113.1 (default route)&lt;/li&gt;
&lt;/ul&gt;
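&lt;p&gt;When several entries match, the most specific one (longest prefix) wins. A toy lookup over the table above; real routers use optimized trie structures, not a linear scan:&lt;/p&gt;

```python
import ipaddress

# Sketch of a routing-table lookup with longest-prefix-match over the
# example table above. Next-hop strings are illustrative labels.
ROUTES = [
    ("10.0.1.0/24", "eth0 (directly connected)"),
    ("10.0.2.0/24", "eth1 (directly connected)"),
    ("10.0.5.0/24", "via 10.0.2.254"),
    ("0.0.0.0/0",   "via 203.0.113.1 (default)"),
]

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(p), hop) for p, hop in ROUTES
               if addr in ipaddress.ip_network(p)]
    # Longest prefix wins: /24 beats the /0 default when both match.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.0.5.42"))  # via 10.0.2.254
print(lookup("8.8.8.8"))    # via 203.0.113.1 (default)
```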




&lt;h4&gt;
  
  
  Q: Do IP addresses change as packets traverse the network?
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;💡 Key Insight&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;IP addresses never change as packets traverse the network. Only MAC addresses change at each hop.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Why IP addresses stay the same:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;IP addresses are &lt;strong&gt;logical addresses&lt;/strong&gt; that represent the final destination (and source) of the packet. Think of them as the address written on your envelope—the destination address (10.0.5.100) is where you want the letter to ultimately arrive, and the return address (192.168.1.50) is where it came from. These never change because they represent the actual source and destination hosts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why MAC addresses change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;MAC addresses are &lt;strong&gt;physical addresses&lt;/strong&gt; that represent the immediate next hop. At each router or switch, the packet needs to be delivered to the next device in the path. The MAC address is rewritten to point to the next hop's physical interface.&lt;/p&gt;

&lt;p&gt;Think of it like sending a letter from New York to Los Angeles: your envelope has the final destination address written on it (the IP address: "To: 456 Oak Ave, Los Angeles"), which never changes. But at each post office, postal workers add a new routing label (the MAC address) that says "deliver to the next post office's mailroom." These routing labels change at each sorting facility: "Route to Chicago sorting center" → "Route to Denver sorting center" → "Route to LA local post office" → "Deliver to 456 Oak Ave". Each label is specific to the next hop and gets replaced at each facility, while the original addresses on the envelope remain unchanged throughout the journey.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: Packet traveling from Host A to Host B through two routers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The diagram below shows how a packet travels from Host A (192.168.1.50) to Host B (10.0.5.100) through two routers, demonstrating how MAC addresses change at each hop while IP addresses remain unchanged:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2eedo6g6llafi0l4eb2r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2eedo6g6llafi0l4eb2r.png" alt=" " width="800" height="586"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What happens at each router:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Router receives the Ethernet frame with destination MAC = router's interface MAC&lt;/li&gt;
&lt;li&gt;Router strips off the Ethernet header (Layer 2)&lt;/li&gt;
&lt;li&gt;Router examines the IP header (Layer 3) to see the destination IP&lt;/li&gt;
&lt;li&gt;Router looks up the destination IP in its routing table&lt;/li&gt;
&lt;li&gt;Router determines the next hop (another router or the final destination)&lt;/li&gt;
&lt;li&gt;Router uses ARP (if needed) to find the MAC address of the next hop&lt;/li&gt;
&lt;li&gt;Router creates a &lt;strong&gt;new Ethernet frame&lt;/strong&gt; with:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Source MAC = router's outgoing interface MAC
- Destination MAC = next hop's MAC address
- The original IP packet (unchanged) as the payload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Why this design matters:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;IP addresses&lt;/strong&gt; provide end-to-end addressing: the packet knows where it's going and where it came from, regardless of the path taken&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MAC addresses&lt;/strong&gt; provide hop-by-hop delivery: each device only needs to know how to reach the next device, not the entire path&lt;/li&gt;
&lt;li&gt;This separation allows routing to be flexible: if a router goes down, packets can take a different path, but the IP addresses remain the same&lt;/li&gt;
&lt;li&gt;It enables NAT and other middlebox functions: devices in the middle can see and modify the IP packet if needed, but the fundamental source/destination remain&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Exception:&lt;/strong&gt; The one case where IP addresses &lt;em&gt;do&lt;/em&gt; change is when NAT is involved. NAT devices (like home routers) rewrite the source IP address (and sometimes destination IP) as packets pass through. However, this is a special case of address translation, not normal routing. In normal routing without NAT, IP addresses remain unchanged.&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What is TTL and why is it important?
&lt;/h4&gt;

&lt;p&gt;Every IP packet has a &lt;em&gt;TTL (Time To Live)&lt;/em&gt; field that decrements at each router. If it reaches 0, the packet is dropped. TTL is like a "maximum number of post offices" stamp on your envelope. Every time your letter goes through a post office (router), they stamp it with one less number. If your letter has been through 64 post offices and still hasn't arrived, it's probably lost in a loop somewhere, so the post office throws it away. This prevents letters from bouncing between post offices forever if someone made a routing mistake!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; Host A (TTL=64) → Router 1 (TTL=63) → Router 2 (TTL=62) → Router 3 (TTL=61) → Host B&lt;/p&gt;
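&lt;p&gt;A toy simulation of that stamp:&lt;/p&gt;

```python
# Each router decrements TTL; a packet whose TTL reaches 0 is dropped,
# so a routing loop eventually kills the packet instead of looping forever.
def traverse(ttl: int, hops: int) -> str:
    for hop in range(hops):
        ttl -= 1
        if ttl == 0:
            return f"dropped at hop {hop + 1} (TTL exceeded)"
    return f"delivered with TTL={ttl}"

print(traverse(64, 3))  # delivered with TTL=61
print(traverse(2, 5))   # dropped at hop 2 (TTL exceeded)
```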




&lt;h4&gt;
  
  
  Q: What is NAT and how does it work?
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;NAT (Network Address Translation)&lt;/em&gt; allows multiple devices with private IPs to share a single public IP. NAT is like an apartment building's mailroom. You write a letter with your apartment number (private IP like 192.168.1.10) as the return address, but when it goes out to the world, the mailroom clerk &lt;strong&gt;changes the return address&lt;/strong&gt; on the envelope to the building's public address (203.0.113.50) and keeps a note: "Apartment 10's letter is actually from port 40001." When a reply comes back addressed to the building, the clerk looks up their notes and forwards it to your apartment. The outside world never sees your private address!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3d93gi6z6qkc2bys6bqm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3d93gi6z6qkc2bys6bqm.png" alt=" " width="790" height="664"&gt;&lt;/a&gt;&lt;/p&gt;
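&lt;p&gt;A toy Python model of the mailroom clerk's notebook, using the illustrative addresses and port from above (real NAT lives in the kernel's conntrack/NAT tables):&lt;/p&gt;

```python
import itertools

class Nat:
    """Toy source NAT: rewrite a private (ip, port) to the public IP and a
    fresh public port, remembering the mapping so replies can be forwarded."""
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.ports = itertools.count(40001)  # next free public port
        self.table = {}    # (private_ip, private_port) to public_port
        self.reverse = {}  # public_port back to (private_ip, private_port)

    def outbound(self, src_ip, src_port):
        key = (src_ip, src_port)
        if key not in self.table:
            port = next(self.ports)
            self.table[key] = port
            self.reverse[port] = key
        return (self.public_ip, self.table[key])

    def inbound(self, dst_port):
        return self.reverse.get(dst_port)  # None if no mapping exists

nat = Nat("203.0.113.50")
print(nat.outbound("192.168.1.10", 55000))  # ('203.0.113.50', 40001)
print(nat.inbound(40001))                   # ('192.168.1.10', 55000)
```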




&lt;h4&gt;
  
  
  Q: What are the private IP ranges?
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Private IP ranges (RFC 1918):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;10.0.0.0/8&lt;/code&gt; — Large enterprises&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;172.16.0.0/12&lt;/code&gt; — Medium networks&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;192.168.0.0/16&lt;/code&gt; — Home/small office&lt;/li&gt;
&lt;/ul&gt;
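&lt;p&gt;Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module makes it easy to check whether an address falls inside one of these ranges:&lt;/p&gt;

```python
import ipaddress

# The three RFC 1918 private ranges.
PRIVATE = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in PRIVATE)

print(is_rfc1918("172.31.255.1"))  # True  (inside 172.16.0.0/12)
print(is_rfc1918("172.32.0.1"))    # False (just outside the /12)
```

Note that `172.16.0.0/12` covers `172.16.0.0` through `172.31.255.255`, which is why the second example falls outside it.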




&lt;h4&gt;
  
  
  Q: What is BGP (Border Gateway Protocol)?
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;BGP (Border Gateway Protocol)&lt;/em&gt; is the routing protocol used to exchange routing information between autonomous systems (ASes) on the internet. It's the protocol that makes the internet work by allowing different networks to learn how to reach each other.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What BGP does:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Exchanges routes&lt;/strong&gt;: Routers running BGP tell each other which IP address ranges (prefixes) they can reach&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Path selection&lt;/strong&gt;: BGP uses attributes (AS path, local preference, etc.) to choose the best path among multiple options&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Loop prevention&lt;/strong&gt;: BGP prevents routing loops by tracking which autonomous systems a route has passed through&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Policy enforcement&lt;/strong&gt;: Network administrators can set policies to prefer certain paths or block certain routes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;BGP in different contexts:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Internet BGP (eBGP)&lt;/strong&gt;: Used between different organizations/ISPs on the public internet. This is what connects the entire internet together. Each organization has an Autonomous System Number (ASN) and advertises their IP ranges to peers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Internal BGP (iBGP)&lt;/strong&gt;: Used within a single organization to distribute routes between routers in the same autonomous system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data center/Cloud BGP&lt;/strong&gt;: Used in modern data centers and cloud environments to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Advertise pod/service IP ranges to network infrastructure&lt;/li&gt;
&lt;li&gt;Enable native routing without overlay encapsulation&lt;/li&gt;
&lt;li&gt;Integrate with cloud provider route tables&lt;/li&gt;
&lt;li&gt;Support large-scale Kubernetes deployments&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;How BGP works (simplified):&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Router A advertises: "I can reach 10.244.0.0/16"&lt;/li&gt;
&lt;li&gt;Router B receives this advertisement and stores it in its routing table&lt;/li&gt;
&lt;li&gt;Router B can now forward packets destined for 10.244.0.0/16 to Router A&lt;/li&gt;
&lt;li&gt;Router B may also advertise this route to other routers (depending on policy)&lt;/li&gt;
&lt;li&gt;If Router A goes down or withdraws the route, Router B removes it and finds an alternative path&lt;/li&gt;
&lt;/ol&gt;
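&lt;p&gt;The advertise/withdraw cycle above can be sketched with a toy model (not a real BGP implementation; it omits AS paths, policies, and best-path selection):&lt;/p&gt;

```python
class Router:
    """Toy BGP speaker: stores prefix-to-next-hop routes learned from peers
    and removes them when the advertising peer withdraws."""
    def __init__(self, name):
        self.name = name
        self.routes = {}  # prefix to the name of the router that advertised it

    def advertise(self, peer, prefix):
        peer.routes[prefix] = self.name      # "I can reach this prefix"

    def withdraw(self, peer, prefix):
        if peer.routes.get(prefix) == self.name:
            del peer.routes[prefix]          # peer must find another path

a, b = Router("A"), Router("B")
a.advertise(b, "10.244.0.0/16")   # step 1: Router A advertises the prefix
print(b.routes)                   # step 2: Router B stores it: {'10.244.0.0/16': 'A'}
a.withdraw(b, "10.244.0.0/16")    # step 5: Router A withdraws the route
print(b.routes)                   # {}
```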

&lt;p&gt;&lt;strong&gt;BGP vs static routes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Static routes&lt;/strong&gt;: Manually configured, don't adapt to changes, don't scale&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BGP&lt;/strong&gt;: Dynamic, automatically adapts to network changes, scales to internet size, supports policies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Further reading:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://datatracker.ietf.org/doc/html/rfc4271" rel="noopener noreferrer"&gt;RFC 4271: A Border Gateway Protocol 4 (BGP-4)&lt;/a&gt; — The BGP specification&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://datatracker.ietf.org/doc/html/rfc7938" rel="noopener noreferrer"&gt;RFC 7938: Use of BGP for Routing in Large-Scale Data Centers&lt;/a&gt; — BGP in data centers&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.ietf.org/rfc/rfc4272.txt" rel="noopener noreferrer"&gt;BGP Best Practices&lt;/a&gt; — Operational best practices&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Linux networking primitives
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Q: What are the Linux building blocks for container networking?
&lt;/h4&gt;

&lt;p&gt;Before Kubernetes can run pods, Linux provides the building blocks for container isolation: network namespaces, veth pairs, bridges, and iptables.&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What is a network namespace?
&lt;/h4&gt;

&lt;p&gt;A &lt;em&gt;network namespace&lt;/em&gt; gives a process its own isolated network stack—its own interfaces, routes, and iptables rules. Each container gets its own namespace, completely isolated from the host and other containers.&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: How do I list network namespaces?
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List network namespaces&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;ip netns list
cni-12345678-abcd-1234-abcd-1234567890ab

&lt;span class="c"&gt;# Run a command in a namespace&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;cni-12345678 ip addr

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h4&gt;
  
  
  Q: Can I create a network namespace manually?
&lt;/h4&gt;

&lt;p&gt;Yes, you can create a network namespace (netns) manually on a Linux system. This is a common way to experiment with the low-level building blocks that container runtimes use to provide network isolation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a ns&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns add test-netns

&lt;span class="c"&gt;# Execute commands inside a network namespace&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;test-netns ip &lt;span class="nb"&gt;link &lt;/span&gt;list  
    lo: &amp;lt;LOOPBACK&amp;gt; mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    &lt;span class="nb"&gt;link&lt;/span&gt;/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

&lt;span class="c"&gt;# Bring up the lo interface&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec &lt;/span&gt;my-netns ip &lt;span class="nb"&gt;link set &lt;/span&gt;lo up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h4&gt;
  
  
  Q: How to create an isolated network namespace without sudo?
&lt;/h4&gt;

&lt;p&gt;To create a network namespace without root privileges, you must combine it with a user namespace. This maps your current unprivileged user to root inside the isolated environment, granting the CAP_NET_ADMIN capability needed to configure that namespace's network stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;unshare &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; /bin/bash

&lt;span class="c"&gt;# Create an annonymous network namespace&lt;/span&gt;
&lt;span class="c"&gt;# -r (--map-root-user): Maps your current UID to 0 (root) inside the namespace.&lt;/span&gt;
&lt;span class="c"&gt;# -n (--net): Creates a new, empty network namespace.&lt;/span&gt;
&lt;span class="c"&gt;# /bin/bash: Starts a new shell session inside these namespaces.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Note: The &lt;code&gt;ip netns list&lt;/code&gt; command does not list anonymous namespaces. Use &lt;code&gt;lsns -t net&lt;/code&gt; to see them instead.&lt;/em&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What is a veth pair?
&lt;/h4&gt;

&lt;p&gt;A &lt;em&gt;veth pair&lt;/em&gt; is like a virtual Ethernet cable with two ends. One end stays in the host namespace, the other goes into the container. A veth pair is like a mail slot connecting two rooms. When you drop a letter (packet) into the slot in your room (container), it immediately appears in the other room (host namespace). It's a direct, private connection—like having your own dedicated mail chute that no one else can use.&lt;/p&gt;

&lt;p&gt;The operational status of a veth pair is linked. If one end is set to DOWN, the entire virtual link is considered down, mirroring the behavior of a physical cable being unplugged.&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: How to connect two netns with veth pair?
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Create a veth pair: One end will stay on the host, and the other will go into the namespace.
&lt;code&gt;sudo ip link add veth-host type veth peer name veth-ns&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Move one end into the namespace:
&lt;code&gt;sudo ip link set veth-ns netns &amp;lt;namespace_name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Assign IP addresses: Give both ends an IP in the same private subnet (e.g., 10.1.1.0/24).
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="c"&gt;#Host: &lt;/span&gt;
  &lt;span class="nb"&gt;sudo &lt;/span&gt;ip addr add 10.1.1.1/24 dev veth-host
  &lt;span class="c"&gt;#Namespace: &lt;/span&gt;
  &lt;span class="nb"&gt;sudo &lt;/span&gt;ip netns &lt;span class="nb"&gt;exec&lt;/span&gt; &amp;lt;name&amp;gt; ip addr add 10.1.1.2/24 dev veth-ns&lt;span class="sb"&gt;``&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Bring interfaces up:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  sudo ip link set veth-host up
  sudo ip netns exec &amp;lt;name&amp;gt; ip link set veth-ns up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access: You can now reach your application by hitting the namespace's IP (10.1.1.2) from the host.&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What is a bridge?
&lt;/h4&gt;

&lt;p&gt;A &lt;em&gt;bridge&lt;/em&gt; is a virtual Layer 2 switch inside the kernel. It connects all the veth pairs together. A bridge is like a shared mailroom in an apartment building. Each apartment (container) has its own mail slot (veth pair) connecting to the mailroom (bridge). When you send a letter to your neighbor, you drop it in your slot, it arrives in the mailroom, and the mailroom knows which slot belongs to your neighbor and delivers it there. The mailroom (bridge) learns which apartment (container) is connected to which slot (MAC address) by watching the return addresses on letters.&lt;/p&gt;
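&lt;p&gt;The mailroom's learning behavior is easy to model: a forwarding database maps source MAC addresses to ports, and frames for unknown destinations are flooded to every port (a toy sketch, not real switch code):&lt;/p&gt;

```python
class Bridge:
    """Toy learning switch: remembers which port each source MAC came from
    and floods frames whose destination it has not seen yet."""
    def __init__(self):
        self.fdb = {}  # forwarding database: MAC to port

    def receive(self, in_port, src_mac, dst_mac):
        self.fdb[src_mac] = in_port       # learn from the "return address"
        out = self.fdb.get(dst_mac)
        return out if out is not None else "flood"

br = Bridge()
print(br.receive(1, "aa:aa", "bb:bb"))  # 'flood' (bb:bb unknown yet)
print(br.receive(2, "bb:bb", "aa:aa"))  # 1 (aa:aa was learned on port 1)
```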




&lt;h4&gt;
  
  
  Q: What is TAP? When do I need TAP vs veth?
&lt;/h4&gt;

&lt;p&gt;While veth pairs are the standard for connecting two kernel-level entities (like two network namespaces), TAP interfaces are essential when network traffic must be handled by user-space software: a TAP device exposes a file descriptor through which a program can read and write raw Ethernet frames. veth is a Linux-specific primitive, while the file-based TAP interface is a near-universal pattern.&lt;/p&gt;

&lt;p&gt;If veth is a direct pneumatic tube (kernel-to-kernel), TAP is a digital scanner (kernel-to-software). The use cases below show different types of software on the other end.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use cases:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Virtual Machines (LimaVM, QEMU)&lt;/strong&gt;: Provides the "hardware" network card for a VM. The hypervisor reads frames from the TAP device and injects them into the guest OS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rootless Networking&lt;/strong&gt;: Tools like slirp4netns (used by Podman and Lima's user-v2) use TAP to provide internet access to containers without requiring sudo.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Monitoring&lt;/strong&gt;: Used to capture and analyze raw traffic for security (IDS/Firewalls) without disrupting the actual flow.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The easiest way to remember the distinction is by access method.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Primitives Used&lt;/th&gt;
&lt;th&gt;Why it's used&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Rootful (containers)&lt;/td&gt;
&lt;td&gt;veth pair + Bridge&lt;/td&gt;
&lt;td&gt;Used by standard Docker/Kubernetes; requires root to create virtual links.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rootful (VMs)&lt;/td&gt;
&lt;td&gt;TAP interface + Bridge&lt;/td&gt;
&lt;td&gt;Used by KVM/Proxmox to link the Kernel to Hypervisor software.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rootless (Unprivileged)&lt;/td&gt;
&lt;td&gt;TAP interface + slirp4netns&lt;/td&gt;
&lt;td&gt;Used for rootless containers; bypasses root requirements by using user-space networking.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Note: In rootless mode, the IP address assigned to the TAP interface is not visible to the host, and the host cannot ping it.&lt;/em&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: Does a TAP interface connect to a bridge?
&lt;/h4&gt;

&lt;p&gt;A TAP interface (like vnet0) can be created and manually attached to a Linux Bridge (br0) in the host kernel using administrative privileges.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Rootless example&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Imagine your host machine is the building, but you aren't the landlord—you're just a tenant. You aren't allowed to install new pneumatic tubes (veth) or modify the main sorting table (Linux Bridge). Let's take the example of the &lt;a href="https://lima-vm.io/docs/config/network/user-v2/" rel="noopener noreferrer"&gt;&lt;strong&gt;Lima VM user-v2 network&lt;/strong&gt;&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The "Virtual Office" (The Lima VM): You are running a VM. It needs a network.&lt;/li&gt;
&lt;li&gt;The TAP Slot (Inside the Namespace): Lima creates a TAP interface inside a private namespace. This is your "Digital Mail Slot."&lt;/li&gt;
&lt;li&gt;The Software Clerk (The User-v2 Daemon): Instead of a physical sorting table, there is a Software Clerk (a process running on your host). This clerk has "hands" on the TAP slot.&lt;/li&gt;
&lt;li&gt;The "Virtual Bridge": If you have two Lima VMs, they both have TAP slots. The Software Clerk holds both TAP slots in its hands. When VM1 sends a letter, the clerk reads it from the first TAP slot and manually "tosses" it into the second TAP slot.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this scenario, the "Bridge" is just a piece of logic inside the clerk’s brain (the software code).&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What is iptables?
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;iptables&lt;/em&gt; (and its successor nftables) is how Linux manipulates network traffic. iptables is like a postal inspector with a rulebook. As letters (packets) flow through the post office (Linux kernel), the inspector checks each one against the rules: "Letters to 10.96.0.100? Change the address to 10.244.1.5. Letters from 10.0.0.5? Block them. Letters to port 80? Route them to port 8080 instead." The inspector can rewrite addresses, block letters, or redirect them—all without the sender or receiver knowing.&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What are the iptables chains?
&lt;/h4&gt;

&lt;p&gt;In Linux, the processing of packets follows a strict sequence of tables within each hook. The tables are listed below in their actual order of execution for each hook.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PREROUTING&lt;/strong&gt;: Applied to all incoming packets before a routing decision is made.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;INPUT&lt;/strong&gt;: Applied to packets destined for a local process/socket.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FORWARD&lt;/strong&gt;: Applied to packets routed through the host (Pod-to-Pod on different nodes).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OUTPUT&lt;/strong&gt;: Applied to packets generated by a local process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;POSTROUTING&lt;/strong&gt;: Applied to all outgoing packets after routing is complete.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Table Execution Order by Hook&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Hook (Chain)&lt;/th&gt;
&lt;th&gt;1st Table&lt;/th&gt;
&lt;th&gt;2nd Table&lt;/th&gt;
&lt;th&gt;3rd Table&lt;/th&gt;
&lt;th&gt;4th Table&lt;/th&gt;
&lt;th&gt;5th Table&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;PREROUTING&lt;/td&gt;
&lt;td&gt;raw&lt;/td&gt;
&lt;td&gt;mangle&lt;/td&gt;
&lt;td&gt;nat (DNAT)&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;INPUT&lt;/td&gt;
&lt;td&gt;mangle&lt;/td&gt;
&lt;td&gt;filter&lt;/td&gt;
&lt;td&gt;security&lt;/td&gt;
&lt;td&gt;nat (SNAT*)&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FORWARD&lt;/td&gt;
&lt;td&gt;mangle&lt;/td&gt;
&lt;td&gt;filter&lt;/td&gt;
&lt;td&gt;security&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OUTPUT&lt;/td&gt;
&lt;td&gt;raw&lt;/td&gt;
&lt;td&gt;mangle&lt;/td&gt;
&lt;td&gt;nat (DNAT)&lt;/td&gt;
&lt;td&gt;filter&lt;/td&gt;
&lt;td&gt;security&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POSTROUTING&lt;/td&gt;
&lt;td&gt;mangle&lt;/td&gt;
&lt;td&gt;nat (SNAT)&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Note: The nat table in the INPUT chain was introduced in later kernel versions to allow SNAT for traffic destined for the local host.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table Function Definitions&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Table&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Common Targets&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;raw&lt;/td&gt;
&lt;td&gt;Exempts packets from connection tracking.&lt;/td&gt;
&lt;td&gt;NOTRACK, DROP&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mangle&lt;/td&gt;
&lt;td&gt;Modifies IP header fields (TTL, TOS) or marks packets.&lt;/td&gt;
&lt;td&gt;MARK, TOS, TTL&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;nat&lt;/td&gt;
&lt;td&gt;Changes Source or Destination IP/Ports.&lt;/td&gt;
&lt;td&gt;SNAT, DNAT, MASQUERADE, REDIRECT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;filter&lt;/td&gt;
&lt;td&gt;The "Firewall." Decisions on packet delivery.&lt;/td&gt;
&lt;td&gt;ACCEPT, DROP, REJECT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;security&lt;/td&gt;
&lt;td&gt;Implements SELinux security context marks.&lt;/td&gt;
&lt;td&gt;SECMARK, CONNSECMARK&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h4&gt;
  
  
  Q: How do an L2 bridge and iptables work together?
&lt;/h4&gt;

&lt;p&gt;In Linux, Layer 2 (L2) bridges and iptables (which operates at Layer 3) work together through the kernel's bridge netfilter framework. Normally, an L2 bridge forwards traffic based on MAC addresses, bypassing the L3 IP stack where iptables resides. To close this gap, Linux uses the &lt;code&gt;br_netfilter&lt;/code&gt; kernel module (loaded with &lt;code&gt;modprobe br_netfilter&lt;/code&gt;), which lets IP-level filtering and NAT apply to traffic that would otherwise stay purely at the Ethernet frame level. The sysctls below enable this behavior:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h4&gt;
  
  
  Q: What is Linux IPVS (IP Virtual Server)?
&lt;/h4&gt;

&lt;p&gt;IPVS (IP Virtual Server) is a Linux kernel feature that provides Layer 4 load balancing. It's built into the Linux kernel and operates at the network layer, making it faster and more efficient than iptables for load balancing scenarios.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How IPVS works:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IPVS creates a virtual IP (VIP) that represents a service&lt;/li&gt;
&lt;li&gt;Traffic to the VIP is distributed across multiple real servers (backend pods) using load balancing algorithms&lt;/li&gt;
&lt;li&gt;IPVS maintains a connection table in kernel memory, tracking active connections&lt;/li&gt;
&lt;li&gt;Load balancing happens in the kernel, avoiding the overhead of userspace processing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;IPVS vs iptables for load balancing:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;iptables&lt;/strong&gt;: Uses NAT rules (DNAT) to rewrite destination IPs. With many services, the iptables rule chain becomes long, and every packet must traverse the chain until it matches. This is O(n) complexity—the more rules, the longer it takes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IPVS&lt;/strong&gt;: Uses a hash table for O(1) lookup of backend servers. More efficient for large numbers of services (thousands). Also supports more load balancing algorithms (round-robin, least connections, source hashing, etc.).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to use IPVS:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Large clusters with many services (1000+)&lt;/li&gt;
&lt;li&gt;Need better performance and lower latency&lt;/li&gt;
&lt;li&gt;Want more load balancing algorithm options&lt;/li&gt;
&lt;li&gt;Can enable IPVS kernel modules (ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to use iptables:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Smaller clusters (&amp;lt; 1000 services)&lt;/li&gt;
&lt;li&gt;Simpler setup (no kernel modules needed)&lt;/li&gt;
&lt;li&gt;Default and well-tested option&lt;/li&gt;
&lt;/ul&gt;
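&lt;p&gt;A toy sketch of the IPVS idea: constant-time connection-table lookups, with a scheduling algorithm (here, round-robin) consulted only for new connections (illustrative only; real IPVS lives in the kernel):&lt;/p&gt;

```python
class IPVSService:
    """Toy virtual server: new connections are scheduled round-robin across
    the real servers; existing connections stick to their chosen backend
    via an O(1) connection-table lookup."""
    def __init__(self, backends):
        self.backends = backends
        self.i = 0
        self.conntrack = {}  # client (ip, port) to chosen backend

    def schedule(self, client):
        if client not in self.conntrack:  # new connection: pick next backend
            self.conntrack[client] = self.backends[self.i % len(self.backends)]
            self.i += 1
        return self.conntrack[client]     # existing flows always stick

svc = IPVSService(["10.244.1.5", "10.244.2.7"])
print(svc.schedule(("10.0.0.1", 40001)))  # 10.244.1.5
print(svc.schedule(("10.0.0.2", 40002)))  # 10.244.2.7
print(svc.schedule(("10.0.0.1", 40001)))  # 10.244.1.5 (same connection)
```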




&lt;h3&gt;
  
  
  Layer 4: Which Application?
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Q: What does Layer 4 add to networking?
&lt;/h4&gt;

&lt;p&gt;Layer 4 adds &lt;em&gt;ports&lt;/em&gt; to identify which application should receive the data.&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What are ports and why do we need them?
&lt;/h4&gt;

&lt;p&gt;A single IP can run many services. &lt;em&gt;Ports&lt;/em&gt; (0-65535) identify each one. An IP address is like a building address, and ports are like apartment numbers or department mailboxes. When you send mail to "123 Main St, Apartment 80" (IP:port), the mailroom knows to deliver it to the web server department (port 80), not the database department (port 5432). One building (one IP) can have many departments (many ports), each handling different types of mail!&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What are some well-known ports?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;22 → SSH&lt;/li&gt;
&lt;li&gt;80 → HTTP&lt;/li&gt;
&lt;li&gt;443 → HTTPS&lt;/li&gt;
&lt;li&gt;53 → DNS&lt;/li&gt;
&lt;/ul&gt;




&lt;h4&gt;
  
  
  Q: What is a 5-tuple?
&lt;/h4&gt;

&lt;p&gt;A connection is uniquely identified by the &lt;em&gt;5-tuple&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Protocol: TCP or UDP&lt;/li&gt;
&lt;li&gt;Source IP&lt;/li&gt;
&lt;li&gt;Source Port&lt;/li&gt;
&lt;li&gt;Destination IP&lt;/li&gt;
&lt;li&gt;Destination Port&lt;/li&gt;
&lt;/ul&gt;
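&lt;p&gt;In Python, a 5-tuple can literally be used as a dictionary key, which is essentially how connection-tracking tables index flows (a simplified illustration):&lt;/p&gt;

```python
from collections import namedtuple

FiveTuple = namedtuple("FiveTuple", "proto src_ip src_port dst_ip dst_port")

# Same hosts and destination port, but different source ports:
# these are two distinct connections.
a = FiveTuple("TCP", "10.0.0.5", 40001, "93.184.216.34", 443)
b = FiveTuple("TCP", "10.0.0.5", 40002, "93.184.216.34", 443)

conns = {a: "conn-1", b: "conn-2"}  # a conntrack-style table keyed by 5-tuple
print(len(conns))                   # 2
```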




&lt;h4&gt;
  
  
  Q: What's the difference between TCP and UDP?
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;TCP&lt;/th&gt;
&lt;th&gt;UDP&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Connection&lt;/td&gt;
&lt;td&gt;Connection-oriented (handshake first)&lt;/td&gt;
&lt;td&gt;Connectionless (fire and forget)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reliability&lt;/td&gt;
&lt;td&gt;Guaranteed delivery, ordering&lt;/td&gt;
&lt;td&gt;No guarantees&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Use case&lt;/td&gt;
&lt;td&gt;HTTP, SSH, Database queries&lt;/td&gt;
&lt;td&gt;DNS, Video streaming, VXLAN tunnels&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Overhead&lt;/td&gt;
&lt;td&gt;Higher (acknowledgments, retries)&lt;/td&gt;
&lt;td&gt;Lower (just send it)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Header size&lt;/td&gt;
&lt;td&gt;20+ bytes (with options)&lt;/td&gt;
&lt;td&gt;8 bytes (fixed)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;TCP is like &lt;strong&gt;registered mail with delivery confirmation&lt;/strong&gt;. You send a letter, the recipient signs for it and sends back a confirmation card. If you don't get the confirmation, you send another letter. The postal service guarantees your letter arrives in order. UDP is like &lt;strong&gt;regular mail&lt;/strong&gt;—you drop it in the mailbox and hope it gets there. It's faster and cheaper, but there's no guarantee. For important documents (web pages, database queries), you use registered mail (TCP). For quick notes where losing one doesn't matter (video streaming, DNS lookups), you use regular mail (UDP).&lt;/p&gt;
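&lt;p&gt;The "fire and forget" nature of UDP is visible directly in the socket API: no handshake, just a datagram (a minimal example, runnable on localhost):&lt;/p&gt;

```python
import socket

# Receiver: bind a UDP socket; port 0 lets the OS pick a free port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))

# Sender: no connect/handshake needed, just address the datagram and send.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", rx.getsockname())   # fire and forget

data, addr = rx.recvfrom(1024)
print(data)                             # b'hello'
tx.close(); rx.close()
```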




&lt;h4&gt;
  
  
  Q: What is the TCP three-way handshake?
&lt;/h4&gt;

&lt;p&gt;Before TCP can send data, it establishes a connection through a three-way handshake. This ensures both sides are ready to communicate and agree on initial sequence numbers. Think of it like a phone call: you dial (SYN), the other person picks up and says "hello" (SYN-ACK), and you confirm "yes, I can hear you" (ACK). Only then do you start talking.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzuxx4lwktcdx62bfpwp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzuxx4lwktcdx62bfpwp.png" alt=" " width="538" height="497"&gt;&lt;/a&gt;&lt;/p&gt;
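&lt;p&gt;The handshake is performed by the kernel: by the time &lt;code&gt;accept()&lt;/code&gt; returns on the server, SYN, SYN-ACK, and ACK have already been exchanged (a minimal localhost example):&lt;/p&gt;

```python
import socket
import threading

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen()

def client(addr):
    # connect() sends SYN, waits for SYN-ACK, then sends the final ACK.
    c = socket.create_connection(addr)
    c.close()

t = threading.Thread(target=client, args=(srv.getsockname(),))
t.start()
conn, peer = srv.accept()   # the three-way handshake is already complete here
t.join(); conn.close(); srv.close()
print("connected from", peer[0])
```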




&lt;h4&gt;
  
  
  Q: What is the TCP connection termination process?
&lt;/h4&gt;

&lt;p&gt;TCP uses a four-way handshake to gracefully terminate a connection:&lt;/p&gt;

&lt;p&gt;FIN: The sender sends a FIN (finish) to indicate it has no more data to send.&lt;br&gt;
ACK: The receiver acknowledges the FIN.&lt;br&gt;
FIN: The receiver sends its own FIN to indicate it has finished sending data.&lt;br&gt;
ACK: The sender acknowledges the receiver's FIN.&lt;/p&gt;


&lt;h4&gt;
  
  
  Q: What is the TCP sliding window and why do we need it?
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Why We Use It&lt;/strong&gt;&lt;br&gt;
Without a sliding window, TCP would be "Stop-and-Wait": the sender would send one packet and wait for an acknowledgment (ACK) before sending the next. This would be incredibly slow, especially on high-latency links.&lt;br&gt;
The sliding window solves two problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Throughput Efficiency: It allows the sender to have multiple packets "in flight" at once, filling the network "pipe."&lt;/li&gt;
&lt;li&gt;Buffer Protection: It prevents the receiver's memory buffer from overflowing. If the receiver's application is slow (e.g., a slow disk write), the window shrinks to tell the sender to slow down.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How It Works&lt;/strong&gt;&lt;br&gt;
The control of the sliding window is a dynamic "handshake" between the receiver and the sender.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Receiver's Role (Flow Control)&lt;/strong&gt;&lt;br&gt;
The receiver controls the window size through a field in the TCP header called the Receive Window (rwnd).&lt;br&gt;
Advertising: In every ACK sent back to the sender, the receiver includes the current size of its available buffer.&lt;br&gt;
&lt;strong&gt;Zero Window:&lt;/strong&gt; If the receiver's buffer is completely full, it sends an ACK with a window size of 0. The sender then stops transmitting and periodically sends "Zero Window Probes" to see if space has opened up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Sender's Role (Congestion Control)&lt;/strong&gt;&lt;br&gt;
The sender does not just blindly follow the receiver's advertised window. It maintains its own internal limit called the Congestion Window (cwnd), based on how much the network (routers/switches) can handle.&lt;br&gt;
The Formula: The actual amount of data sent is always min(rwnd, cwnd).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scaling (Window Scaling)&lt;/strong&gt;&lt;br&gt;
The original TCP specification limited the window size to 65,535 bytes (64 KB). On modern high-speed networks (10Gbps+), this is too small.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TCP Window Scale Option:&lt;/strong&gt; This allows the window to be scaled up to 1 GB.&lt;br&gt;
&lt;strong&gt;Configuration:&lt;/strong&gt; On Linux, this is controlled by the sysctl parameter: &lt;code&gt;net.ipv4.tcp_window_scaling = 1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tuning the Buffers&lt;/strong&gt;&lt;br&gt;
While the window slides automatically, you control the maximum potential size of that window by adjusting the Linux network buffer limits:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Read Buffer: net.ipv4.tcp_rmem (min, default, max)
  Write Buffer: net.ipv4.tcp_wmem (min, default, max)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By increasing these values in /etc/sysctl.conf, you allow the sliding window to grow larger, which is essential for high-latency, high-bandwidth connections (like communicating between data centers across continents).&lt;/p&gt;
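&lt;p&gt;Two quick calculations tie this together: the effective window is min(rwnd, cwnd), and the bandwidth-delay product tells you how large that window must be to keep a given link busy:&lt;/p&gt;

```python
def effective_window(rwnd, cwnd):
    """TCP sends at most min(receiver window, congestion window) bytes."""
    return min(rwnd, cwnd)

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return int(bandwidth_bps / 8 * rtt_s)

# The receiver's 64 KB window is the bottleneck here, not congestion:
print(effective_window(65_535, 120_000))  # 65535

# A 10 Gbps link with 100 ms RTT needs ~125 MB in flight — far beyond the
# unscaled 64 KB limit, which is why window scaling exists:
print(bdp_bytes(10_000_000_000, 0.1))     # 125000000
```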




&lt;h2&gt;
  
  
  Part 2: Network Virtualization Technologies
&lt;/h2&gt;

&lt;h3&gt;
  
  
  VLAN: Network Segmentation
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Q: What is VLAN?
&lt;/h4&gt;

&lt;p&gt;A &lt;em&gt;VLAN (Virtual Local Area Network)&lt;/em&gt; is a logical network segment created within a physical network. It allows you to group devices together logically, even if they're not physically connected to the same switch. VLANs are identified by a VLAN ID (a number from 1-4094) that is added to Ethernet frames as a tag. Think of VLANs as creating separate "virtual neighborhoods" within the same physical building—devices in VLAN 10 can't directly communicate with devices in VLAN 20, even though they might be connected to the same physical switch, just like people in different apartment buildings on the same street.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e54vynq5vgpmm20qgqz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e54vynq5vgpmm20qgqz.png" alt=" " width="289" height="492"&gt;&lt;/a&gt;&lt;/p&gt;
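&lt;p&gt;The 4-byte tag itself is simple enough to build and parse by hand (an illustrative sketch; real tagging happens in the NIC or kernel):&lt;/p&gt;

```python
import struct

def add_vlan_tag(frame, vlan_id, pcp=0):
    """Insert a 4-byte 802.1Q tag after the 12 bytes of dst/src MAC:
    TPID 0x8100, then PCP (3 bits) and VID (12 bits) in the TCI field."""
    tci = pcp * 8192 + (vlan_id % 4096)   # pcp * 2**13, VID in the low 12 bits
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]

def vlan_id_of(frame):
    tpid, tci = struct.unpack("!HH", frame[12:16])
    return tci % 4096 if tpid == 0x8100 else None

# 12 zero bytes stand in for the two MAC addresses; 0x0800 = IPv4 EtherType.
frame = bytes(12) + b"\x08\x00" + b"payload"
tagged = add_vlan_tag(frame, 10)
print(vlan_id_of(tagged))   # 10
```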




&lt;h4&gt;
  
  
  Q: What problem did VLANs solve?
&lt;/h4&gt;

&lt;p&gt;In the early days, Ethernet was a "flat" network where every device heard everyone else's broadcasts. W. David Sincoskie invented the &lt;a href="https://www.networkworld.com/article/963787/what-is-a-vlan-and-how-does-it-work.html" rel="noopener noreferrer"&gt;VLAN at Bellcore in the 1980s&lt;/a&gt; to break these large, noisy broadcast domains into smaller, manageable logical groups. The technology was later standardized as IEEE 802.1Q.&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: How do VLANs work?
&lt;/h4&gt;

&lt;p&gt;A VLAN adds a &lt;em&gt;4-byte 802.1Q tag&lt;/em&gt; to the Ethernet frame. The switch reads this tag and only forwards the frame to ports in the same VLAN. Think of VLANs as colored envelopes. When you send a letter in a blue envelope (VLAN 10), the mail carrier (switch) only delivers it to mailboxes that accept blue envelopes. Letters in green envelopes (VLAN 20) go to different mailboxes. Even though all the mailboxes are on the same street (same physical switch), the colored envelopes keep the mail separated—blue letters never mix with green letters.&lt;/p&gt;
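&lt;p&gt;The tag layout can be made concrete with a small sketch (illustrative only; the function names are my own, and this packs raw header bytes rather than touching a real NIC). The 4-byte tag is the TPID 0x8100 followed by the 16-bit TCI, whose low 12 bits are the VLAN ID:&lt;/p&gt;

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def tag_frame(dst_mac: bytes, src_mac: bytes, vlan_id: int,
              ethertype: int, payload: bytes) -> bytes:
    """Insert a 4-byte 802.1Q tag between the MAC addresses and the EtherType.

    PCP and DEI are left at 0, so the TCI is just the 12-bit VLAN ID.
    """
    assert 1 <= vlan_id <= 4094, "VIDs 0 and 4095 are reserved"
    tag = struct.pack("!HH", TPID, vlan_id)
    return dst_mac + src_mac + tag + struct.pack("!H", ethertype) + payload

def vlan_of(frame: bytes):
    """Return the VLAN ID if the frame is 802.1Q-tagged, else None."""
    tpid, tci = struct.unpack_from("!HH", frame, 12)
    return tci % 4096 if tpid == TPID else None  # low 12 bits of the TCI
```

&lt;p&gt;A switch does the hardware equivalent of &lt;code&gt;vlan_of&lt;/code&gt; on every incoming frame and forwards it only to ports configured for that VLAN ID.&lt;/p&gt;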




&lt;h4&gt;
  
  
  Q: Can you walk through a VLAN example?
&lt;/h4&gt;

&lt;p&gt;When Host A (192.168.10.5) sends to Host B (192.168.10.6) on VLAN 10, the switch reads the 802.1Q tag and forwards the frame only to ports in VLAN 10, ensuring Host C on VLAN 20 never sees the traffic:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ficlvy2i0f4jzxp73k1de.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ficlvy2i0f4jzxp73k1de.png" alt=" " width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What were the constraints of VLANs?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Physical port binding&lt;/em&gt;: VLANs were tied to the physical switch port. If you moved your desk, a network engineer had to manually reconfigure the switch.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;The 4,094 ceiling&lt;/em&gt;: With only a 12-bit ID, you could only have 4,094 usable networks—plenty for an office, but a disaster for the upcoming cloud era.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;VLANs were like having only 4,094 different envelope colors available. Once you used all the colors, you couldn't create new networks. Also, if you moved to a different building (different switch port), you had to tell the mailroom "I'm now using blue envelopes instead of green," and they had to manually update their records. This didn't work well when people (VMs) were moving constantly!&lt;/p&gt;




&lt;h3&gt;
  
  
  VXLAN: Network Virtualization
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Q: What is VXLAN?
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;VXLAN (Virtual eXtensible Local Area Network)&lt;/em&gt; is a network virtualization technology that encapsulates Layer 2 Ethernet frames inside Layer 3 UDP packets. This creates an "overlay network" that allows VMs and containers to communicate as if they're on the same local network, even when they're on different physical servers or data centers. VXLAN uses a 24-bit Virtual Network Identifier (VNI) to create up to 16.7 million logical networks, far exceeding VLAN's 4,094 limit. The key innovation is that VXLAN decouples the logical network from the physical network infrastructure—VMs can move between physical servers without changing their network identity, and the physical network only sees IP traffic between servers, not the virtual network details.&lt;/p&gt;

&lt;p&gt;Think of VXLAN like putting an envelope inside another envelope. You write your letter (original L2 frame with VM's MAC addresses) and put it in an inner envelope addressed to the destination VM. Then you put that inner envelope inside an &lt;strong&gt;outer envelope&lt;/strong&gt; addressed to the destination server (VTEP IP address). The postal service (physical network) only looks at the outer envelope and delivers it to the server. The server then opens the outer envelope, takes out the inner envelope, and delivers it to the VM. The postal service never sees what's inside—they just see mail between servers!&lt;/p&gt;

&lt;p&gt;In a multi-tenant environment, VXLAN is a cornerstone. It allows different tenants to have their own logically isolated networks (using unique VNIs) that share the same underlying physical infrastructure, preventing tenants from seeing each other's traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94i8izznxun7zhar7ipr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94i8izznxun7zhar7ipr.png" alt=" " width="461" height="810"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What is the VXLAN architecture?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Overlay Network (Virtual)&lt;/strong&gt;: VMs think they're on the same L2 segment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Underlay Network (Physical)&lt;/strong&gt;: Physical network routes between VTEPs (VXLAN Tunnel End Points)&lt;/li&gt;
&lt;/ul&gt;




&lt;h4&gt;
  
  
  Q: How many networks does VXLAN support?
&lt;/h4&gt;

&lt;p&gt;The physical switches see only traffic between servers, while the VMs behave as if they're on one giant, &lt;em&gt;16.7-million-segment&lt;/em&gt; logical switch (thanks to the 24-bit VNI: 2^24 = 16,777,216 possible networks).&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: Can you walk through a VXLAN example flow?
&lt;/h4&gt;

&lt;p&gt;The diagram below shows the complete VXLAN encapsulation and decapsulation process:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zp18pv9zrpdf38u8r30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zp18pv9zrpdf38u8r30.png" alt=" " width="800" height="557"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: Why are they called tunnels?
&lt;/h4&gt;

&lt;p&gt;They are called tunnels because they create a private, direct-path "shortcut" for your data through an existing network, similar to how a physical tunnel allows a car to pass through a mountain instead of driving over every peak. In networking, a "tunnel" isn't a physical wire; it is a logical path created by &lt;em&gt;encapsulation&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Here's how it works: You write a letter to your friend (original L2 frame with VM MAC addresses). You put it in an inner envelope addressed to your friend's apartment (destination VM). Then you put that inner envelope inside an &lt;strong&gt;outer envelope&lt;/strong&gt; addressed to your friend's building (VTEP IP address). The outer envelope has a special label (VNI) that says "Building 5000" so the receiving building knows which floor to deliver it to. The postal service (physical network) only looks at the address on the &lt;strong&gt;outer envelope&lt;/strong&gt;. They see "Deliver to Building 10.0.0.2" and route it there. They have no idea there's another envelope inside, or that it's really meant for someone in apartment 192.168.10.6. When the letter arrives at Building 10.0.0.2, the mailroom (VTEP) opens the outer envelope, reads the VNI label ("Building 5000"), and delivers the inner envelope to the correct apartment (VM). Your friend receives the letter as if you sent it directly—they never see the outer envelope! VXLAN is the "outer envelope," and the VTEP (VXLAN Tunnel End Point) is the "building's mailroom" that handles the envelope wrapping and unwrapping.&lt;/p&gt;
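&lt;p&gt;The "outer envelope" itself is tiny. Here is a minimal sketch of just the 8-byte VXLAN header from RFC 7348 (one flags byte with the I bit set, three reserved bytes, a 24-bit VNI, one reserved byte). A real VTEP would additionally wrap the result in outer UDP/IP/Ethernet headers; the function names are illustrative:&lt;/p&gt;

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header; the result becomes the UDP payload."""
    assert 0 <= vni < 2 ** 24  # 24-bit VNI: up to 16,777,216 networks
    flags = 0x08  # 'I' flag: the VNI field is valid
    header = struct.pack("!B3xI", flags, vni << 8)  # VNI in the top 24 bits
    return header + inner_frame

def vxlan_decap(udp_payload: bytes):
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    flags, word = struct.unpack_from("!B3xI", udp_payload)
    assert flags == 0x08, "expected the VNI-valid flag"
    return word >> 8, udp_payload[8:]
```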




&lt;h4&gt;
  
  
  Q: How do VTEPs discover each other?
&lt;/h4&gt;

&lt;p&gt;VXLAN requires a &lt;em&gt;control plane&lt;/em&gt; to map VM MAC addresses to VTEP IP addresses. Common approaches:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Multicast&lt;/td&gt;
&lt;td&gt;VTEPs join multicast groups per VNI. Broadcast ARP requests are sent via multicast. Simple but requires multicast support in underlay.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;BGP-EVPN&lt;/td&gt;
&lt;td&gt;BGP extensions for Ethernet VPN (RFC 7432). VTEPs exchange MAC/IP routes via BGP. Used in large-scale deployments (Cisco ACI, Juniper).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Centralized Controller&lt;/td&gt;
&lt;td&gt;SDN controller (e.g., VMware NSX, OpenStack Neutron) maintains MAC-to-VTEP mappings. VTEPs query controller for unknown destinations.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Distributed Database&lt;/td&gt;
&lt;td&gt;etcd or similar stores MAC-to-VTEP mappings. Used by container networking plugins.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
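&lt;p&gt;Whichever control plane is used, the state it maintains boils down to the same lookup: given a VNI and a destination MAC, which VTEP do I tunnel to? A hypothetical in-memory version (the addresses and names here are made up for illustration):&lt;/p&gt;

```python
# Hypothetical MAC-to-VTEP table, as a controller or etcd-backed CNI plugin
# might distribute it to each VTEP.
forwarding_table = {
    # (vni, destination VM MAC) -> VTEP IP
    (5000, "02:42:ac:11:00:02"): "10.0.0.2",
    (5000, "02:42:ac:11:00:03"): "10.0.0.3",
}

def vtep_for(vni: int, dst_mac: str):
    """Look up the tunnel endpoint; None means the mapping must be learned
    (e.g. flood via multicast, or query the controller)."""
    return forwarding_table.get((vni, dst_mac))
```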




&lt;h4&gt;
  
  
  Q: Why does VXLAN use UDP?
&lt;/h4&gt;

&lt;p&gt;VXLAN uses &lt;em&gt;UDP&lt;/em&gt; (User Datagram Protocol) as its transport protocol for several important reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reliability is handled at a higher layer&lt;/strong&gt;: The inner Ethernet frame already contains TCP/IP traffic, which provides its own reliability mechanisms. If a TCP packet inside the VXLAN tunnel is lost, TCP will retransmit it. Adding TCP reliability at the tunnel level would create redundant retransmissions and actually hurt performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lower overhead&lt;/strong&gt;: UDP has a fixed 8-byte header compared to TCP's 20+ byte header (which can grow with options). For tunnel traffic that may carry thousands of packets per second, this overhead reduction matters significantly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hardware offloading&lt;/strong&gt;: Modern network interface cards (NICs) can offload UDP encapsulation/decapsulation to hardware, improving performance. TCP's stateful nature makes hardware offloading more complex and less efficient.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No connection state&lt;/strong&gt;: UDP is connectionless, meaning there's no connection establishment (three-way handshake) or teardown overhead. This is crucial for tunnel traffic where you want to forward packets as quickly as possible without maintaining connection state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Avoids TCP-in-TCP problems&lt;/strong&gt;: If VXLAN used TCP, you'd have TCP inside TCP. This creates problems like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Head-of-line blocking: If one TCP segment is lost, all subsequent segments wait&lt;/li&gt;
&lt;li&gt;Congestion control conflicts: Inner and outer TCP connections compete&lt;/li&gt;
&lt;li&gt;Retransmission storms: Both layers trying to retransmit the same data&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h4&gt;
  
  
  Q: What are the MTU and fragmentation considerations for VXLAN?
&lt;/h4&gt;

&lt;p&gt;VXLAN encapsulation adds approximately 50 bytes to each packet:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Outer Ethernet: 14 bytes&lt;/li&gt;
&lt;li&gt;Outer IP: 20 bytes&lt;/li&gt;
&lt;li&gt;Outer UDP: 8 bytes&lt;/li&gt;
&lt;li&gt;VXLAN header: 8 bytes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Total overhead: ~50 bytes&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the underlay MTU is 1500 bytes (standard Ethernet), the effective overlay MTU becomes 1450 bytes. Packets larger than this will be fragmented, causing performance degradation.&lt;/p&gt;
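&lt;p&gt;The arithmetic above in code form (a trivial sketch; the constant and function names are my own):&lt;/p&gt;

```python
# Per-packet VXLAN encapsulation overhead, matching the byte counts above
OUTER_ETH, OUTER_IP, OUTER_UDP, VXLAN_HDR = 14, 20, 8, 8
VXLAN_OVERHEAD = OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN_HDR  # 50 bytes

def effective_overlay_mtu(underlay_mtu: int) -> int:
    """Largest overlay packet that fits in one underlay packet, unfragmented."""
    return underlay_mtu - VXLAN_OVERHEAD

print(effective_overlay_mtu(1500))  # 1450 with standard Ethernet
print(effective_overlay_mtu(9000))  # 8950 with jumbo frames
```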




&lt;h4&gt;
  
  
  Q: How do I avoid VXLAN fragmentation?
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;💡 Avoiding VXLAN Fragmentation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Configure the underlay MTU to at least 1550 bytes (e.g. by enabling jumbo frames) to avoid fragmentation, or reduce the overlay MTU to 1450 bytes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;⚠️ Operator smell for MTU issues&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cross-node traffic works for small payloads but gRPC/HTTPS calls with larger bodies RST or time out. Quick test: &lt;code&gt;ping -M do -s 1472 &amp;lt;remote-node-ip&amp;gt;&lt;/code&gt;; if it fails, drop pod MTU to 1450 or raise underlay MTU.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h4&gt;
  
  
  Q: What are the constraints of VXLAN?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Fixed 8-byte header&lt;/em&gt;: No room for custom metadata beyond the VNI&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Limited extensibility&lt;/em&gt;: Can't carry security policies or telemetry inline&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Control plane dependency&lt;/em&gt;: Requires additional infrastructure for MAC-to-VTEP discovery&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Geneve: Extensible Network Virtualization
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Q: What problem did Geneve solve?
&lt;/h4&gt;

&lt;p&gt;As we moved into containers and cloud-native platforms, even VXLAN started to show its age. Modern platforms needed to carry more than just a "Network ID"—they needed to carry security policies, telemetry, and "who is talking to whom".&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What is Geneve?
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://datatracker.ietf.org/doc/html/rfc8926" rel="noopener noreferrer"&gt;Geneve (Generic Network Virtualization Encapsulation)&lt;/a&gt; arrived to solve the "fixed header" problem of VXLAN. Its extensible design allows developers to add custom data (Type-Length-Value options) to every packet, which is critical for the complex routing and security required by modern SDN platforms like VMware NSX and cloud-native networking solutions.&lt;/p&gt;

&lt;p&gt;Geneve is like VXLAN's envelope-inside-envelope, but with &lt;strong&gt;sticky notes attached to the outer envelope&lt;/strong&gt;. You still put your letter (original packet) in an inner envelope, then put that in an outer envelope. But now you can attach metadata stickers to the outer envelope: "Security Policy: Allow-123", "Source: frontend-workload", "Telemetry: latency-tracked". The receiving building (VTEP) reads these stickers before opening the envelope, so it knows how to handle the letter—check security permissions, log metrics, route based on identity. VXLAN's outer envelope was blank except for the address; Geneve's outer envelope is covered in useful information!&lt;/p&gt;

&lt;p&gt;For enterprise Kubernetes multi-tenancy, Geneve's extensible TLV options become crucial for enforcing fine-grained network policies and carrying tenant-specific metadata, allowing a single underlying network to enforce diverse security rules for multiple isolated tenants.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key difference from VXLAN:&lt;/strong&gt; VXLAN has a fixed 8-byte header, while Geneve has a variable-length header (8+ bytes) that can include TLV options for extensibility. This allows Geneve to carry metadata like security policies and telemetry inline with each packet.&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What are examples of Geneve TLV options?
&lt;/h4&gt;

&lt;p&gt;Geneve TLV options can carry various types of metadata. Common examples include: &lt;strong&gt;Security Policy ID&lt;/strong&gt; (Class 0x0102, Type 1) containing policy identifiers like "policy-xyz-123"; &lt;strong&gt;Telemetry Data&lt;/strong&gt; (Class 0x0103, Type 2) with metrics such as "latency=5ms, hop=3"; and &lt;strong&gt;Source Identity&lt;/strong&gt; (Class 0x0104, Type 3) identifying workloads like "workload=frontend-abc". These options allow the network to enforce security policies and collect observability data at the packet level without requiring separate control plane messages.&lt;/p&gt;
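&lt;p&gt;A minimal sketch of how such options go on the wire, following the field layout in RFC 8926 (a 2-bit version and 6-bit option length in the first byte, a 16-bit protocol type, a 24-bit VNI; each option is class/type/length plus data padded to 4-byte words). The specific class and type values mirror the hypothetical examples above:&lt;/p&gt;

```python
import struct

def geneve_option(opt_class: int, opt_type: int, data: bytes) -> bytes:
    """Encode one TLV option; data is zero-padded to a 4-byte multiple."""
    data += b"\x00" * (-len(data) % 4)
    length_words = len(data) // 4  # 5-bit length field, in 4-byte units
    assert length_words < 32
    return struct.pack("!HBB", opt_class, opt_type, length_words) + data

def geneve_header(vni: int, options: bytes, proto: int = 0x6558) -> bytes:
    """8-byte base header (version 0; 0x6558 = Ethernet payload) + options."""
    assert len(options) % 4 == 0
    opt_len_words = len(options) // 4  # 6-bit field, in 4-byte units
    first_byte = (0 << 6) | opt_len_words  # version 0 in the top 2 bits
    return (struct.pack("!BBH", first_byte, 0, proto)
            + struct.pack("!I", vni << 8) + options)

# Hypothetical security-policy option, mirroring the classes described above
opt = geneve_option(0x0102, 1, b"policy-xyz-123")
hdr = geneve_header(5000, opt)
```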




&lt;h4&gt;
  
  
  Q: Can you show an example of Geneve with security metadata?
&lt;/h4&gt;

&lt;p&gt;In cloud-native environments, Geneve TLV options carry security policies and source identity. When the frontend workload sends a packet, it's like writing a letter and putting it in an inner envelope. The SDN controller (like a security guard) checks the sender's ID, looks up the security policy, and attaches stickers to the outer envelope: "From: frontend-workload", "Policy: allow-frontend-to-backend", "Security Level: High". When the letter arrives at the destination building, the security guard there reads the stickers, verifies "Yes, frontend is allowed to talk to backend," and only then opens the envelope and delivers it. If the stickers said "Deny," the letter would be rejected without even opening it!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F799n90xvhtp16kjicx0u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F799n90xvhtp16kjicx0u.png" alt=" " width="800" height="512"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What are the MTU considerations for Geneve?
&lt;/h4&gt;

&lt;p&gt;Geneve overhead is variable due to TLV options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Base overhead: ~50 bytes (Ethernet + IP + UDP + 8-byte Geneve base header)&lt;/li&gt;
&lt;li&gt;TLV options: 0-252 bytes (variable)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Total overhead: ~50-302 bytes&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Configure the underlay MTU accordingly. For example, with 100 bytes of TLV options the total overhead is roughly 150 bytes, so you need an underlay MTU of at least 1650 bytes to carry a 1500-byte overlay packet without fragmentation.&lt;/p&gt;




&lt;h4&gt;
  
  
  Q: What are the constraints of Geneve?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Processing overhead&lt;/em&gt;: Variable-length options require more parsing than fixed headers&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Hardware support&lt;/em&gt;: Older NICs may not offload Geneve efficiently, especially with TLV options&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Complexity&lt;/em&gt;: TLV parsing adds CPU overhead compared to VXLAN's simple header&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Standards and RFCs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;RFC 826: Ethernet Address Resolution Protocol (ARP)&lt;/li&gt;
&lt;li&gt;RFC 791: Internet Protocol (IP)&lt;/li&gt;
&lt;li&gt;RFC 793: Transmission Control Protocol (TCP)&lt;/li&gt;
&lt;li&gt;RFC 768: User Datagram Protocol (UDP)&lt;/li&gt;
&lt;li&gt;RFC 1918: Address Allocation for Private Internets&lt;/li&gt;
&lt;li&gt;RFC 7348: Virtual eXtensible Local Area Network (VXLAN)&lt;/li&gt;
&lt;li&gt;RFC 8926: Geneve: Generic Network Virtualization Encapsulation&lt;/li&gt;
&lt;li&gt;RFC 7432: BGP MPLS-Based Ethernet VPN (BGP-EVPN)&lt;/li&gt;
&lt;li&gt;IEEE 802.1Q: Virtual Bridged Local Area Networks&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;This article is part of the "Learning in a Hurry" series, designed to help engineers quickly understand complex technical concepts through analogies and practical examples.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>linux</category>
      <category>cloud</category>
      <category>networking</category>
    </item>
    <item>
      <title>Improve your creativity by deliberate practice</title>
      <dc:creator>Yuva</dc:creator>
      <pubDate>Wed, 05 Jun 2019 23:57:39 +0000</pubDate>
      <link>https://forem.com/ypeavler/five-tips-to-get-your-creative-juices-flowing-181o</link>
      <guid>https://forem.com/ypeavler/five-tips-to-get-your-creative-juices-flowing-181o</guid>
      <description>&lt;p&gt;My name is Yuva and I am a Software Engineer. I have been engineering and product-ing for 14 years. I have never thought of me as a creative person.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;I looked at musicians who are in their element, eyes closed, faces full of joy, completely immersed in what they are doing. They are in creative flow!&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I yearned for that feeling of creative flow. In reality, I have experienced it only a few times in my software engineering career. The very first time was in 2007, when I solved a problem in a very creative way. I was brimming with joy. I put my hand in the air and punched it a few times. I high-fived every person that passed me. I was a very shy developer back then, so it was a big deal. Those experiences were few and far between. I want to experience creative bursts much more often.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finggxtvwm4rgtqr8zltl.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finggxtvwm4rgtqr8zltl.gif" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I know one thing for sure: I work hard, and I was determined to learn whatever skill was necessary to experience that emotional state again, and often.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Do you think you are creative?&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;If you don't, I am going to try to make you believe you are. If you do think you are creative, I am going to give you some tools that worked for me.&lt;/p&gt;

&lt;h2&gt;
  
  
  Talent and Creativity
&lt;/h2&gt;

&lt;p&gt;The first thing I learned is that I confused talent with creativity. Talent is something we are innately capable of doing. Some talents are useful, like that of NBA legend Wilt Chamberlain. When you tower at 7 feet in high school, you may as well play basketball.&lt;/p&gt;

&lt;p&gt;Some talents are not so useful. I can do a wall sit for 10 mins.&lt;/p&gt;

&lt;p&gt;Like the old proverb goes&lt;br&gt;
    &lt;code&gt;Talent is like an asshole, everyone has one&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Wait!? That does not sound right. Oh well, I am going to run with it.&lt;/p&gt;

&lt;p&gt;Talent and creativity are not the same thing. Creativity is not an elusive gift that only a select few possess. Creativity belongs to all of us, and we make many creative decisions every day, often not consciously.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is creativity?
&lt;/h2&gt;

&lt;p&gt;Scientists explain creativity as making connections between seemingly odd or different ideas from the pool of knowledge we store in our brains to solve a new problem or create something that is novel, good and useful.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;It's our brains doing what they do,&lt;/code&gt; says Michael Grybko, a research scientist at the University of Washington.&lt;/p&gt;

&lt;p&gt;I think I've got this one. I started seeing &lt;code&gt;The Eureka Moment&lt;/code&gt; everywhere. The entertainment industry has given us so many examples of creative genius at play, like the famous middle-out compression algorithm idea that Richard Hendricks (&lt;em&gt;Silicon Valley&lt;/em&gt;) gets from a seemingly unrelated conversation.&lt;br&gt;&lt;br&gt;
I cannot unsee it. It has almost become predictable. A team struggles to solve a complicated problem, the protagonist watches someone do something very ordinary like &lt;code&gt;pouring a coffee&lt;/code&gt;, &lt;code&gt;bouncing a ball&lt;/code&gt;, or &lt;code&gt;fighting with a vending machine&lt;/code&gt;, and the protagonist's eyes light up. Cue the eureka moment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3orblt1jrnsbnd0ddwtt.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3orblt1jrnsbnd0ddwtt.gif" width="500" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If creativity is a natural brain activity, why am I not generating a lot of creative ideas on a daily basis?&lt;/p&gt;

&lt;p&gt;Before I try to figure out the answer, let's bust some more myths.&lt;/p&gt;

&lt;h2&gt;
  
  
  Myth busting time
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I have to be right-brained&lt;br&gt;
This is one of the &lt;a href="https://www.verywellmind.com/left-brain-vs-right-brain-2795005" rel="noopener noreferrer"&gt;psychology fads&lt;/a&gt; that was disproportionately exaggerated, like the Myers-Briggs personality types. Creativity requires both sides of the brain.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;I need drugs&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I wanted this to be true so badly. I had convinced myself that when I took ayahuasca or LSD, all truths would be revealed. Unfortunately, science does not back up this claim. I have tried CBD, and it has only put me to sleep so far.&lt;/li&gt;
&lt;li&gt;LSD and cannabis are mostly associated with creatives. This is one for the books, where correlation is mistaken for causation.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;I have to wait for the apple to hit my head&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Even without trying very hard, I have experienced bouts of creativity a few times in my life. Advances in neuroscience have made me believe that I do not have to wait around for the next bright idea to fall in my lap.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;I feel so educated already. I understand creativity as science and TV describe it. I have busted some myths I held for a long time. I feel ready for the next step: how do I deliberately generate creative ideas?&lt;/p&gt;

&lt;p&gt;I know my brain is constantly making connections. I dreamt about a problem I had at work, and my cat, Lullaby, was in my dream solving it. She is a very smart cat, but I don't think I can use that idea in real life.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creativity de-mystified
&lt;/h2&gt;

&lt;p&gt;Let's break down the definition of creativity.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbbwd914xk8yyjkdjadt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbbwd914xk8yyjkdjadt.png" width="800" height="780"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Memory&lt;/p&gt;

&lt;p&gt;The more things I have in memory, the more likely it is that I can make a connection. I used to know the phone numbers of all my friends and relatives by heart. Our brains are constantly collecting information and encoding it to create short-term memory. When things are repetitive, the brain skips remembering the details.&lt;br&gt;
We are more likely to convert short-term memory into long-term memory when there is an association or an impact. Most of us remember exactly where we were, even what we were wearing or whom we were with, when 9/11 happened.&lt;br&gt;
Long-term declarative memory is where we keep our pool of information, and it is what fuels creativity. &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The knowledge pool&lt;br&gt;
The next piece of the creativity puzzle is knowledge. In order to make connections between loosely coupled things, we need to know about those things. If I do not have any knowledge of the domain I am working in, it is highly unlikely that I will come up with creative solutions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Recall&lt;br&gt;
This is the part where the brain makes connections between loosely related topics to come up with creative ideas. This is the most important and difficult part of becoming deliberately creative. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Five practical ways to start improving our ability to make creative neural connections
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Improve declarative memory&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Chunking techniques&lt;/p&gt;

&lt;p&gt;Even though chunking techniques are associated with improving working memory, searching for patterns to chunk, noticing those patterns, and remembering them are valuable exercises that improve declarative memory and recall.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&lt;br&gt;
Consciousness and chunking allow us to turn the dull sludge of independent episodes in our lives into a shimmering, dense web, interlinked by all the myriad patterns we spot. It becomes a positive feedback loop, making the detection of new connections even easier, and creates a domain ripe for understanding how things actually work, of reaching that supremely powerful realm of discerning the mechanism of things. At the same time, our memory system becomes far more efficient, effective — and intelligent — than it could ever be without such refined methods to extract useful structure from raw data.&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.theatlantic.com/health/archive/2012/09/using-pattern-recognition-to-enhance-memory-and-creativity/261925/" rel="noopener noreferrer"&gt;Deep reading on pattern recognition&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Memory palace technique&lt;br&gt;
A Memory Palace is an imaginary location in the mind to store mnemonic images. The most common type of memory palace involves making a journey through a place well known to the person, like a building or town.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using the human connection&lt;br&gt;
Teach someone else or engage in a debate about any random topic. This also helps in providing another datapoint for getting that information into the declarative memory.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Improving the knowledge pool&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; Be curious about the world around you.
Be present. I started to notice things when I go on walks and ask simple 'why' questions. This leads me down a long Wikipedia trail, from which I retain some of it. I end up sharing what I learned with my wife. I recently learned that our nervous system is only 500 million years old, while life has existed for 4 billion years. Fascinating, right?&lt;/li&gt;
&lt;li&gt; Read books / listen to podcasts.
Select some books and podcasts that are unrelated or only distantly related to your main field of interest. Psychology- and economics-related podcasts and books are my interests outside of software engineering.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Take a walk before switching contexts or learning new things&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before going into any important meeting, especially a brainstorming session, take a short walk to clear the mind, breathe deeply, and think about the topic of the meeting. This seems very simple, but experiments have shown the benefits of increased blood flow, and of simply changing the scenery, when taking on challenging tasks. I have my calendar set to always schedule meetings to end 10 minutes before the hour. Even though we like to group meetings together, make sure to take at least a 10-minute break between them and go for a walk.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Multitask at a slower pace&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I have been doing this for a while. In the beginning, I picked two projects with deadlines. This is the wrong way to do it. Have one active project with deadlines, namely an engineering project at work, and pick another that you can do at leisure. Writing and making lunch-and-learn presentations have been my side projects for a while, and I am having a blast. I have found that I really enjoy writing, and every time I need a break from work, I start researching my writing topic or start writing and editing. Sometimes a wrong answer gets stuck in my head; changing context helps flush out the stuck solution. &lt;/li&gt;
&lt;li&gt;&lt;code&gt;Easy to think outside of the box if you can move from one box to the other&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Many successful creative people have serious hobbies. Richard Feynman was obsessed with cracking safes, and Charles Darwin with earthworms.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Marinate the ideas&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Early morning and right before bedtime are prime times for making neural connections. I love waking up at 5:30 AM and staying in that lucid state while thinking about my presentations, about ways to represent a complex idea, or about the things to do for the day.&lt;/li&gt;
&lt;li&gt;Think about the problem you are working on right before you go to sleep or just before you completely wake up. The brain has not yet been bombarded with tons of sensory information, so it can process the problem at hand more easily.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Creativity in teams
&lt;/h2&gt;

&lt;p&gt;Creativity can come from groups, not just individuals. If we want to be creative as a team, we should also practice creativity-boosting strategies together.&lt;/p&gt;

&lt;p&gt;Brainstorming is one of the most common tools companies use to explore group creativity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tips to improve brainstorming:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;There are no bad ideas&lt;/code&gt; - is a bad idea.
There are bad ideas, and we need to call them out. Not to humiliate the person offering the idea, but to set a standard for the ideas being generated. This is only possible if there is deep mutual understanding between the members and everyone feels safe throwing out half-baked ideas while also being willing to be criticized. Structure and safety are the two most important elements of a productive brainstorming meeting.&lt;/li&gt;
&lt;li&gt;Go for quantity of ideas rather than quality&lt;/li&gt;
&lt;li&gt;Build on top of each other's ideas.
The group owns all the ideas it generates, and keeping generation and validation separate helps remove biases. The person who generated an idea may not be the same person championing it. This way, an idea gets attention whether it came from the shyest introvert in the room or from someone who commands attention through their communication skills.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Thank you for reading! I would love to receive any comments or feedback to improve the content or my writing style. Please leave a comment if you want me to write another article expanding on creativity in groups.&lt;/p&gt;

</description>
      <category>creativity</category>
      <category>productivity</category>
    </item>
    <item>
      <title>A Beginner’s Guide to Serverless, FaaS, and Serverless Web Architecture</title>
      <dc:creator>Yuva</dc:creator>
      <pubDate>Tue, 21 May 2019 16:18:21 +0000</pubDate>
      <link>https://forem.com/ypeavler/a-beginners-s-guide-to-serverless-faas-and-serverless-web-architecture-1b86</link>
      <guid>https://forem.com/ypeavler/a-beginners-s-guide-to-serverless-faas-and-serverless-web-architecture-1b86</guid>
      <description>&lt;p&gt;Lately we hear about serverless everywhere, as a natural progression from monolith to microservices to serverless.&lt;/p&gt;

&lt;p&gt;My favorite definition of serverless is from Urban Dictionary:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;The name of a fad everyone loves to hate. Where the architectural model is shifted from running processes to running functions with *no control* or need to integrate with the operating system.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Why is "no control" so contested? As with any trade-off, the answer is: it depends. Less control also means less responsibility, so is less control always a bad thing? If you are deploying simple, stateless, non-CPU-intensive applications, then a serverless architecture has a huge advantage: agility. Templated and reusable pieces let us spin up experimental apps very quickly without the responsibility of managing and patching any infrastructure at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  Microservices vs Serverless
&lt;/h2&gt;

&lt;p&gt;A big difference between microservices and serverless is their execution lifetime. A microservice is expected to be always available while serverless functions come alive when needed, execute, and terminate.&lt;/p&gt;

&lt;p&gt;IaaS — Infrastructure as a Service is like renting land and parking an RV on it to live in. You own the RV and everything in it that makes it habitable.&lt;/p&gt;

&lt;p&gt;PaaS — Platform as a Service is like leasing a house and living in it. The owner takes care of the house and its improvements while the renter makes it livable.&lt;/p&gt;

&lt;p&gt;FaaS — Function as a Service is like staying in a hotel room whenever you need a place to stay.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reasons to use Serverless
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Only pay for the time it is executing.&lt;/li&gt;
&lt;li&gt;Our service doesn’t have to be up and running all the time.&lt;/li&gt;
&lt;li&gt;Increases development speed. Easy to build and deploy.&lt;/li&gt;
&lt;li&gt;Automatic scaling.&lt;/li&gt;
&lt;li&gt;Platform agnostic. Just build the business logic and move it to any platform provider.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Reasons not to use Serverless
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Decomposing the application into small functions might not always be the best solution.&lt;/li&gt;
&lt;li&gt;If there is a state that needs to be exchanged between functions, inter-function communication and some expensive plumbing is necessary.&lt;/li&gt;
&lt;li&gt;Complex error handling can be difficult. More operational overhead when maintaining 100s of functions.&lt;/li&gt;
&lt;li&gt;Using native libraries is difficult and not recommended.&lt;/li&gt;
&lt;li&gt;Memory, CPU and network are limited to what is provided.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When to use Serverless?
&lt;/h2&gt;

&lt;p&gt;Simple, stateless, non-CPU-intensive applications are perfect for serverless. AWS recommends using Lambda for the following use cases:&lt;/p&gt;

&lt;h3&gt;
  
  
  Data processing solutions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;File processing&lt;/li&gt;
&lt;li&gt;Stream processing&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Backends
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;IoT&lt;/li&gt;
&lt;li&gt;Mobile&lt;/li&gt;
&lt;li&gt;Web&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Serverless web app architecture
&lt;/h2&gt;

&lt;p&gt;My search for reference architectures on deploying a simple web application using Serverless turned up the architecture below&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesns3zt58x3vt271lp6c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesns3zt58x3vt271lp6c.png" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The front end:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A storage service to keep all the web assets (html, css, js, images).&lt;/li&gt;
&lt;li&gt;A distribution system to manage the caching and availability of the site.&lt;/li&gt;
&lt;li&gt;A DNS service to map custom domain to the cloud distribution system.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Serverless back end:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;API Gateway to expose API.&lt;/li&gt;
&lt;li&gt;Serverless functions to execute the business logic.&lt;/li&gt;
&lt;li&gt;Database to persist state. — The database can be SQL or a document/key-value database. The selection depends on the way you interact with the data in the store. If the way we retrieve the information is always going to be the same, a document database like DynamoDB works great. Watch Martin Fowler’s talk on NoSQL for a deeper understanding.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This three-tier architecture is provider agnostic and can be used on Google Cloud Platform, Azure, or IBM OpenWhisk.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless architecture in Google Cloud Platform
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzu0t66euy4gyt2six6zq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzu0t66euy4gyt2six6zq.png" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Which service provider to choose?
&lt;/h2&gt;

&lt;p&gt;Most serverless providers have similar solutions and functionality. The solution that works best will depend on what you, your team, and your organization know, and on the requirements of your application. If you have no constraints, I’d recommend getting started with AWS.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;AWS Lambda&lt;/th&gt;
&lt;th&gt;Google Cloud Functions&lt;/th&gt;
&lt;th&gt;Azure Functions&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Pricing&lt;/td&gt;
&lt;td&gt;1M requests/month free&lt;/td&gt;
&lt;td&gt;2M requests/month free&lt;/td&gt;
&lt;td&gt;1M requests/month free&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Languages&lt;/td&gt;
&lt;td&gt;Node.js, Python, Java, .NET&lt;/td&gt;
&lt;td&gt;JavaScript (many in beta)&lt;/td&gt;
&lt;td&gt;Node.js, Python, PHP, C#, F#&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Triggers&lt;/td&gt;
&lt;td&gt;API, S3, DynamoDB&lt;/td&gt;
&lt;td&gt;HTTP, Any GCF services (firebase, analytics, Pub/Sub, Storage)&lt;/td&gt;
&lt;td&gt;API, Cron, Azure Events, Azure&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Max execution time&lt;/td&gt;
&lt;td&gt;15 mins&lt;/td&gt;
&lt;td&gt;1 min - 9 mins (configurable)&lt;/td&gt;
&lt;td&gt;5 mins - 10 mins (configurable)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Concurrency&lt;/td&gt;
&lt;td&gt;1000 executions&lt;/td&gt;
&lt;td&gt;Unlimited for HTTP; 1000 executions otherwise&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  What if we need an open source solution?
&lt;/h2&gt;

&lt;p&gt;You are covered. There are many open source Serverless solutions out there!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="http://open.iron.io/" rel="noopener noreferrer"&gt;IronFunctions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fnproject.io/" rel="noopener noreferrer"&gt;Fn Project&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://webtask.io/docs/101" rel="noopener noreferrer"&gt;Webtask&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/nuclio/nuclio" rel="noopener noreferrer"&gt;Nuclio&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openfaas/faas" rel="noopener noreferrer"&gt;Openfaas&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rohit Akiwatkar wrote a comprehensive article about serverless &lt;a href="https://hackernoon.com/serverless-and-open-source-where-do-we-stand-today-dff8aec67026" rel="noopener noreferrer"&gt;open source solutions&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://d1.awsstatic.com/whitepapers/serverless-architectures-with-aws-lambda.pdf" rel="noopener noreferrer"&gt;Serverless Architecture with AWS Lambda&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://d0.awsstatic.com/whitepapers/AWS_Serverless_Multi-Tier_Archiectures.pdf" rel="noopener noreferrer"&gt;AWS multi-tier architecture&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.altexsoft.com/blog/cloud/comparing-serverless-architecture-providers-aws-azure-google-ibm-and-other-faas-vendors/" rel="noopener noreferrer"&gt;Comparing serverless providers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hackernoon.com/serverless-and-open-source-where-do-we-stand-today-dff8aec67026" rel="noopener noreferrer"&gt;Serverless and open source- Where do we stand today&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Next up: &lt;a href="https://dev.to/yloganathan/understanding-aws-lambda-4ia4"&gt;Understanding AWS Lambda&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>faas</category>
      <category>aws</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Understanding AWS Lambda</title>
      <dc:creator>Yuva</dc:creator>
      <pubDate>Tue, 21 May 2019 16:17:39 +0000</pubDate>
      <link>https://forem.com/ypeavler/understanding-aws-lambda-4ia4</link>
      <guid>https://forem.com/ypeavler/understanding-aws-lambda-4ia4</guid>
      <description>&lt;p&gt;At Peaksware, we use AWS solutions for our serverless needs. Understanding Lambda functions involves&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;How AWS scales up/down and picks up code changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Understanding AWS Lambda companions like Layers and Step functions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Coding best practices when writing lambda functions.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Benchmarking information in this section is summarized from a detailed paper by Liang Wang and colleagues — Peeking Behind the Curtains of Serverless Platforms.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling up Lambda
&lt;/h2&gt;

&lt;p&gt;When a request comes in, AWS creates a container on top of a VM in one of AWS’s hosts. The container has the runtime and the function code. The function is called and lambda waits for the execution to finish. The execution stops on&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Successful completion of the function code&lt;/li&gt;
&lt;li&gt;Exception in the function code&lt;/li&gt;
&lt;li&gt;Execution timeout&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As more requests come in, the AWS scheduler can reuse the same container, spin up new containers in the same VM, or add more hosts and VMs. AWS handles concurrent requests by initializing up to 1000 concurrent containers. The containers are packed tightly within a VM to optimize the VM’s memory utilization. Even two different functions from the same AWS account can potentially share the same VM.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cold start
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A cold start may involve launching a new container, setting up the runtime environment, and deploying a function, which will take more time to handle a request than reusing an existing container.&lt;/li&gt;
&lt;li&gt;Cold start latency decreases as the allocated function memory increases. One possible explanation is that AWS allocates CPU power proportionally to the memory size; with more CPU power, environment setup becomes faster.&lt;/li&gt;
&lt;/ul&gt;
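&lt;p&gt;A common way to soften cold starts is to pay expensive setup once per container rather than once per request. A minimal sketch (the &lt;code&gt;_expensive_setup&lt;/code&gt; helper is hypothetical; in a real function it might create a boto3 client or load configuration):&lt;/p&gt;

```python
# Module-level code runs once per container, at cold start.
# Work done here is reused by every warm invocation of that container.
INIT_COUNT = 0

def _expensive_setup():
    # Hypothetical stand-in for heavy setup, e.g. creating SDK clients
    # or loading configuration from disk.
    global INIT_COUNT
    INIT_COUNT += 1
    return {"ready": True}

RESOURCES = _expensive_setup()  # paid once per cold start

def handler(event, context):
    # The handler runs on every invocation and should stay cheap.
    return {"statusCode": 200, "coldStarts": INIT_COUNT}
```

Warm invocations of the same container reuse `RESOURCES`, so only the first request after a cold start pays the setup cost.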

&lt;h2&gt;
  
  
  Scaling down Lambda
&lt;/h2&gt;

&lt;p&gt;AWS keeps containers idle for an arbitrary amount of time to respond to requests faster. AWS shuts down half of the idle instances of a function approximately every 300 seconds until two or three instances are left, and eventually shuts down the remaining instances after 27 minutes. A virtual machine can stay up for 9 hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  Updating Lambda code
&lt;/h2&gt;

&lt;p&gt;There is a small chance that incoming requests could be handled by an old version of the function. The update is not atomic: if you update a function while there are 50 or more concurrent requests hitting it, about 3.8% of instances might run an inconsistent version. Six seconds seems to be the ideal gap between a function update and new requests to avoid running an older version of the function.&lt;/p&gt;

&lt;p&gt;You can also prevent this by using blue-green deployment methods with CodeDeploy instead of updating the function directly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lambda Layers
&lt;/h2&gt;

&lt;p&gt;Lambda functions are executed in containers, and Lambda Layers are a logical extension of existing Lambda functionality.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The deployment package size (including all the libraries used by the function code) should be under 50MB zipped.&lt;/li&gt;
&lt;li&gt;Layers are a nice way to package the dependencies separately so the deployment package size is reduced.&lt;/li&gt;
&lt;li&gt;Layers are useful to share common code across multiple functions.&lt;/li&gt;
&lt;li&gt;A function can have up to 5 layers and a max total unzipped code size of 250MB.&lt;/li&gt;
&lt;/ul&gt;
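&lt;p&gt;With the Serverless Framework, a layer and its attachment to a function take only a few lines. A sketch with illustrative names and paths (the &lt;code&gt;SharedDepsLambdaLayer&lt;/code&gt; reference follows the framework's TitleCase-plus-&lt;code&gt;LambdaLayer&lt;/code&gt; naming convention):&lt;/p&gt;

```yaml
layers:
  sharedDeps:
    path: layer              # directory holding the packaged dependencies

functions:
  idea-crud:
    handler: ideas.handler
    layers:
      - { Ref: SharedDepsLambdaLayer }   # attach the layer to this function
```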

&lt;h2&gt;
  
  
  AWS Step functions
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;AWS Step Functions allows us to create a state machine by chaining together Lambda functions.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In the world of distributed, microservice-based architectures, coordination between microservices is done using a pub-sub or event-stream model. AWS has taken a stab at abstracting the communication between loosely dependent functions using Step Functions. Step Functions can be seen as an orchestrator of functions that helps create and visualize the entire workflow.&lt;/p&gt;
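&lt;p&gt;State machines are described in Amazon States Language. A minimal sketch chaining two hypothetical Lambda functions (the ARNs and state names are placeholders):&lt;/p&gt;

```json
{
  "Comment": "Chain two Lambda functions into one workflow",
  "StartAt": "Validate",
  "States": {
    "Validate": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
      "Next": "Persist"
    },
    "Persist": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:persist",
      "End": true
    }
  }
}
```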

&lt;h2&gt;
  
  
  Use cases for step functions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;ETL data processing&lt;/li&gt;
&lt;li&gt;Move a few tasks from a monolith to serverless.&lt;/li&gt;
&lt;li&gt;Orchestrate Lambda functions into a service.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/step-functions/use-cases/" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; provides more use cases and examples.&lt;/p&gt;

&lt;h2&gt;
  
  
  Writing code for Lambda Functions
&lt;/h2&gt;

&lt;p&gt;Even though the blast radius is limited with a Serverless Function, all the engineering best practices still apply with some minor implementation changes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Logging — Use the provided logging mechanism (CloudWatch) and ship the log stream to a central location, such as an ELK stack, for further processing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Versioning — AWS provides the option to version the function code. Versions are immutable and provide the opportunity to roll back the Lambda code if needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitoring — Use tools like dead-letter queues, SNS topics, and CloudWatch alarms to get notifications on failure. Monitor 5XX and 4XX on the gateway. Monitor Throttles, Errors, and ConcurrentExecutions on Lambda.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security — Create a separate role for Lambda execution and grant only the needed privileges. Keep the Lambda inside a VPC if it has access to any private resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
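&lt;p&gt;The logging and monitoring practices above can be sketched in a handler. A minimal, hypothetical example (the business logic is a placeholder); re-raising the exception lets Lambda record the failure so dead-letter queues and CloudWatch alarms can react:&lt;/p&gt;

```python
import json
import logging

# Lambda forwards anything written through the logging module to CloudWatch.
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # Log a structured line so downstream tools (e.g. an ELK stack) can parse it.
    logger.info(json.dumps({"received_keys": sorted(event)}))
    try:
        result = {"ok": True}  # placeholder for the real business logic
        return {"statusCode": 200, "body": json.dumps(result)}
    except Exception:
        # logger.exception records the traceback alongside the invocation logs.
        logger.exception("unhandled error")
        # Re-raising marks the invocation as failed, feeding the Errors metric.
        raise
```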

&lt;p&gt;Are we ready to put all this knowledge to use? Follow along to the next chapter.&lt;/p&gt;

&lt;p&gt;Next: &lt;a href="https://dev.to/yloganathan/deploying-a-web-app-using-lambda-api-gateway-dynamodb-and-s3-with-serverless-framework-4b1b"&gt;Deploying a web app using Lambda, API Gateway, DynamoDB and S3 with Serverless Framework&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>learning</category>
    </item>
    <item>
      <title>Deploying a web app using Lambda, API Gateway, DynamoDB and S3 with Serverless Framework</title>
      <dc:creator>Yuva</dc:creator>
      <pubDate>Tue, 21 May 2019 16:17:06 +0000</pubDate>
      <link>https://forem.com/ypeavler/deploying-a-web-app-using-lambda-api-gateway-dynamodb-and-s3-with-serverless-framework-4b1b</link>
      <guid>https://forem.com/ypeavler/deploying-a-web-app-using-lambda-api-gateway-dynamodb-and-s3-with-serverless-framework-4b1b</guid>
      <description>&lt;h2&gt;
  
  
  Step0: Understand the architecture
&lt;/h2&gt;

&lt;p&gt;We are going to use the simplest architecture today to create and deploy a web app.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7onwakon3ucmg1dy8i1r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7onwakon3ucmg1dy8i1r.png" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step1: Write the app
&lt;/h2&gt;

&lt;p&gt;The app we are creating is a simple idea-board app where anyone can create an idea, add comments, upvote or delete an idea. &lt;a href="https://github.com/Yloganathan/idea-board/" rel="noopener noreferrer"&gt;Find the code for this app in git.&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47150h0k9yagw86ueft7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47150h0k9yagw86ueft7.png" width="800" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;React frontend — Jennifer Peavler created a simple React app.&lt;/li&gt;
&lt;li&gt;Backend — A serverless Python CRUD handler.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A serverless backend is written differently from a traditional server backend.&lt;/p&gt;

&lt;p&gt;Lambda needs an entry point: a function that takes &lt;code&gt;event&lt;/code&gt; and &lt;code&gt;context&lt;/code&gt; as arguments.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
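&lt;p&gt;The real handler lives in the linked repo; as a minimal sketch of the shape (the routing and response bodies here are placeholders), an API Gateway proxy event carries the HTTP method, and the handler returns a status code, headers, and a JSON body:&lt;/p&gt;

```python
import json

def handler(event, context):
    # With Lambda proxy integration, API Gateway puts the HTTP method
    # and path parameters inside the event dict.
    method = event.get("httpMethod", "")
    if method == "GET":
        body = {"ideas": []}          # placeholder: would read from DynamoDB
    elif method == "POST":
        body = {"created": True}      # placeholder: would write to DynamoDB
    else:
        return {"statusCode": 405, "body": json.dumps({"error": "unsupported"})}
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},  # CORS for the S3-hosted site
        "body": json.dumps(body),
    }
```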


&lt;h2&gt;
  
  
  Step2: Select a deployment method
&lt;/h2&gt;

&lt;p&gt;Principle: Deploy and manage both resources and application code using version control.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;AWS — provides CodeDeploy and CodePipeline to deploy a serverless application using CloudFormation templates. The templates are used to create stacks that in turn manage all the resources specified.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrlmpfgr9s3e4d9wheaa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrlmpfgr9s3e4d9wheaa.png" width="573" height="586"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://serverless.com/" rel="noopener noreferrer"&gt;Serverless Framework&lt;/a&gt; — provides abstraction over AWS cloud-formation templates. Create a deployment yml file that will generate cloud-formation templates to create stacks that in-turn will manage all the resources specified.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3z1et58l03oa422bmia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3z1et58l03oa422bmia.png" width="523" height="531"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I picked Serverless Framework to deploy our app because&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minimum configuration — very easy to manage multiple Lambda functions, layers, and other related resources.&lt;/li&gt;
&lt;li&gt;Cloud agnostic — can be used to deploy to multiple cloud service providers.&lt;/li&gt;
&lt;li&gt;Widely adopted — has plenty of plugins supporting most use cases.&lt;/li&gt;
&lt;li&gt;Support for local development, stages, and rollback.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step3: Write the deployment yml
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Start by creating a &lt;code&gt;serverless.yml&lt;/code&gt; file and specifying provider requirements.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service: idea-app-api
provider:
  name: aws
  runtime: python3.7
  memorySize: 512
 timeout: 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Specify the Lambda function and its triggers. For our app, we have one handler for five types of HTTP triggers.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;functions:
idea-crud:
handler:  ideas.handler
events:
  - http:
      path: ideas
      method: post
      cors: true
  - http:
      path: ideas/{id}
      method: patch
      cors: true
  - http:
      path: ideas/{id}
      method: get
      cors: true
  - http:
      path: ideas/{id}
      method: delete
      cors: true
  - http:
      path: ideas
      method: get
      cors: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Serverless Framework allows us to specify resources written directly in AWS CloudFormation template syntax. For our example, we will create all the needed resources for the database and the S3 bucket using Serverless.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;s3-bucket.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Resources:
  TheBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: &amp;lt;WEBSITE BUCKET NAME&amp;gt;
      AccessControl: PublicRead
      WebsiteConfiguration:
        IndexDocument: index.html
        ErrorDocument: error.html
  TheBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref TheBucket
      PolicyDocument:
        Id: MyPolicy
        Version: '2012-10-17'
        Statement:
          - Sid: PublicReadForGetBucketObjects
            Effect: Allow
            Principal: '*'
            Action: 's3:GetObject'
            Resource: !Join
            - ''
            - - 'arn:aws:s3:::'
              - !Ref TheBucket
              - /*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The complete serverless.yml includes the S3, DynamoDB, and gateway error resources.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;h2&gt;
  
  
  Step4: Deploy!
&lt;/h2&gt;

&lt;p&gt;We have created the resources and the application code. Now it is time to deploy them to AWS. Issue the Serverless Framework deploy command. Use the &lt;code&gt;--aws-profile&lt;/code&gt; option to specify a profile if you have multiple AWS accounts. It will take a few minutes the first time you deploy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ &amp;gt;&amp;gt; sls deploy --aws-profile playground                                                                                                                                                      
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Creating Stack...
Serverless: Checking Stack create progress...
.....
Serverless: Stack create finished...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service idea-app-api.zip file to S3 (2.5 KB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
..................................................................
Serverless: Stack update finished...
Service Information
service: idea-app-api
stage: dev
region: us-east-1
stack: idea-app-api-dev
resources: 22
api keys:
  None
endpoints:
  POST - https://tm0ndmiyt9.execute-api.us-east-1.amazonaws.com/dev/ideas
  PATCH - https://tm0ndmiyt9.execute-api.us-east-1.amazonaws.com/dev/ideas/{id}
  GET - https://tm0ndmiyt9.execute-api.us-east-1.amazonaws.com/dev/ideas/{id}
  DELETE - https://tm0ndmiyt9.execute-api.us-east-1.amazonaws.com/dev/ideas/{id}
  GET - https://tm0ndmiyt9.execute-api.us-east-1.amazonaws.com/dev/ideas
functions:
  idea-crud: idea-app-api-dev-idea-crud
layers:
  None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Serverless successfully created all the resources, deployed our code to Lambda, and created the API Gateway endpoints.&lt;/p&gt;

&lt;p&gt;Update the config file (idea-board/app-client/src/config.js) in the frontend to point at the correct endpoint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export default {
    apiGateway: {
        REGION: "us-east-1",
        URL: "https://tm0ndmiyt9.execute-api.us-east-1.amazonaws.com/dev"
    }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are almost there! Don’t quit now.&lt;/p&gt;

&lt;p&gt;Let’s push the React app to S3:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ &amp;gt;&amp;gt; npm run build
$ &amp;gt;&amp;gt; aws s3 sync ./build s3://&amp;lt;bucket-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hurray! We did it.&lt;/p&gt;

&lt;p&gt;The site is now public and accessible from &lt;a href="http://bucket-name.s3-website-us-east-1.amazonaws.com/" rel="noopener noreferrer"&gt;http://bucket-name.s3-website-us-east-1.amazonaws.com/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2mmx24f1sehwt7j4vrw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2mmx24f1sehwt7j4vrw.png" width="800" height="315"&gt;&lt;/a&gt;&lt;br&gt;
Thank you for reading! I would love to receive feedback on my work. Please feel free to comment and let me know how to improve my writing.&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>webdev</category>
      <category>tutorial</category>
      <category>aws</category>
    </item>
    <item>
      <title>CircleCI deployment with AWS role assumption</title>
      <dc:creator>Yuva</dc:creator>
      <pubDate>Tue, 21 May 2019 00:49:00 +0000</pubDate>
      <link>https://forem.com/ypeavler/circleci-deployment-with-aws-role-assumption-4g2l</link>
      <guid>https://forem.com/ypeavler/circleci-deployment-with-aws-role-assumption-4g2l</guid>
      <description>&lt;h2&gt;
  
  
  Problem:
&lt;/h2&gt;

&lt;p&gt;We have three separate AWS accounts for dev, staging, and production, plus a master account; we use role assumption from the master account to access the stage accounts. We needed CircleCI to deploy to the different stages using AWS role assumption.&lt;/p&gt;

&lt;p&gt;Note: This article assumes you already have working knowledge of CircleCI and how to set it up for your project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution:
&lt;/h2&gt;

&lt;p&gt;We solved the problem using an AWS config file, CircleCI contexts, and some YAML magic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step1:
&lt;/h3&gt;

&lt;p&gt;Create a config file &lt;code&gt;aws_config&lt;/code&gt; and add it to the .circleci folder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   [default]
   region = us-east-1
   output = json

   [profile dev]
   role_arn = arn:aws:iam::&amp;lt;account_id&amp;gt;:role/CircleCi-role
   source_profile = default

   [profile staging]
   role_arn = arn:aws:iam::&amp;lt;account_id&amp;gt;:role/CircleCi-role
   source_profile = default

   [profile production]
   role_arn = arn:aws:iam::&amp;lt;account_id&amp;gt;:role/CircleCi-role
   source_profile = default

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Step2:
&lt;/h3&gt;

&lt;p&gt;CircleCI supports setting environment variables per context.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log in to CircleCI and go to Settings.&lt;/li&gt;
&lt;li&gt;Create an org-global context and add AWS_ACCESS_KEY and AWS_SECRET_KEY.&lt;/li&gt;
&lt;li&gt;Create three contexts, one per stage, and add an AWS_PROFILE env variable. Set it to 'dev', 'staging', and 'production' in the respective contexts.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Step 3:
&lt;/h3&gt;

&lt;p&gt;Add a step in the CircleCI config.yml to set up the credentials and config file under the org-global context. You only need to do this once; the .aws folder can be persisted and shared across jobs.&lt;/p&gt;

&lt;p&gt;Note: I am using the Python executor, and pipenv has already installed awscli as a dependency.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     - run:
          name: AWS configure
          command: |
            pipenv run aws configure set aws_access_key_id $AWS_ACCESS_KEY
            pipenv run aws configure set aws_secret_access_key $AWS_SECRET_KEY
            cp .circleci/aws_config ~/.aws/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This step creates a credentials file with the key ID and secret key for the default profile. &lt;br&gt;
   The config file from Step 1 resolves each stage profile against this default profile via &lt;code&gt;source_profile&lt;/code&gt;.&lt;/p&gt;
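&lt;p&gt;For reference, after this step &lt;code&gt;~/.aws/credentials&lt;/code&gt; contains something along these lines (placeholder values shown):&lt;/p&gt;

```ini
; written by the two `aws configure set` commands above
[default]
aws_access_key_id = AKIAEXAMPLEEXAMPLE
aws_secret_access_key = exampleSecretKeyExampleSecretKeyExample
```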
&lt;h3&gt;
  
  
  Step 4:
&lt;/h3&gt;

&lt;p&gt;Pass the profile name as a parameter to AWS CLI commands in your CircleCI deployment.&lt;br&gt;
&lt;br&gt;
&lt;code&gt;aws s3 sync ./local s3://bucket --profile dev&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;The AWS CLI can also read the profile from the AWS_PROFILE environment variable directly:&lt;br&gt;
&lt;br&gt;
&lt;code&gt;aws s3 sync ./local s3://bucket&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;If you have a custom script or use the Serverless Framework, you can pass the profile explicitly:&lt;br&gt;
&lt;br&gt;
&lt;code&gt;sls deploy --aws-profile $AWS_PROFILE&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
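&lt;p&gt;Putting the pieces together, the relevant parts of a &lt;code&gt;config.yml&lt;/code&gt; might look like the following. This is a minimal sketch: the executor image, bucket name, and job/workflow names are placeholders, and how contexts are attached to jobs can vary with your CircleCI version.&lt;/p&gt;

```yaml
version: 2.1

jobs:
  deploy:
    docker:
      - image: circleci/python:3.7
    steps:
      - checkout
      - run:
          name: AWS configure
          command: |
            pipenv run aws configure set aws_access_key_id $AWS_ACCESS_KEY
            pipenv run aws configure set aws_secret_access_key $AWS_SECRET_KEY
            cp .circleci/aws_config ~/.aws/config
      - run:
          name: Deploy
          # AWS_PROFILE comes from the per-stage context attached below
          command: pipenv run aws s3 sync ./site s3://my-bucket --profile $AWS_PROFILE

workflows:
  deploy-to-dev:
    jobs:
      - deploy:
          context: dev
```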



&lt;p&gt;Recommended reading: &lt;a href="https://dev.to/yloganathan/aws-cli-using-role-assumption-and-mfa-1871"&gt;Set up AWSCLI using RoleAssumption and MFA.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>tutorial</category>
      <category>circleci</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Visual Studio Code set up to improve developer productivity</title>
      <dc:creator>Yuva</dc:creator>
      <pubDate>Mon, 20 May 2019 20:49:24 +0000</pubDate>
      <link>https://forem.com/ypeavler/vs-code-setup-to-improve-developer-productivity-3die</link>
      <guid>https://forem.com/ypeavler/vs-code-setup-to-improve-developer-productivity-3die</guid>
      <description>&lt;p&gt;VS Code has become the go-to code editor lately. I configured VS Code to write code, write articles, access databases, do code reviews, and communicate with the team. The goal is to keep movement from one app to another to a minimum.&lt;/p&gt;

&lt;p&gt;I am sharing my configuration here and I hope to learn more from others.&lt;/p&gt;

&lt;h1&gt;
  
  
  Extensions
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://marketplace.visualstudio.com/items?itemName=eamodio.gitlens" rel="noopener noreferrer"&gt;Git lens&lt;/a&gt; - Pick a file in source control and view every version and changes in each version directly in the editor.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://marketplace.visualstudio.com/items?itemName=PeterJausovec.vscode-docker" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; - Start, stop containers that run your code directly from editor.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://marketplace.visualstudio.com/items?itemName=GitHub.vscode-pull-request-github" rel="noopener noreferrer"&gt;Git PR&lt;/a&gt; - Ever wanted to click through the code that you are reviewing? May be wanted to see all the references to a method in the code? Now, you can browse the code being reviewed in the editor with git PR.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://marketplace.visualstudio.com/items?itemName=karigari.chat" rel="noopener noreferrer"&gt;Team Chat&lt;/a&gt; - Bring slack messages to vscode so you don't have to see another screen or app.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://marketplace.visualstudio.com/items?itemName=yzhang.markdown-all-in-one" rel="noopener noreferrer"&gt;Markdown All in One&lt;/a&gt; - Write your documents or articles in markdown directly in the code editor.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://marketplace.visualstudio.com/items?itemName=mushan.vscode-paste-image" rel="noopener noreferrer"&gt;Paste Image&lt;/a&gt; -- Paste images directly to markdown files.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://marketplace.visualstudio.com/items?itemName=ckolkman.vscode-postgres" rel="noopener noreferrer"&gt;PostgresSQL&lt;/a&gt; -- Create connections to dev/uat/prod databases and execute any SQL command.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://marketplace.visualstudio.com/items?itemName=VisualStudioExptTeam.vscodeintellicode" rel="noopener noreferrer"&gt;Visual Studio IntelliCode&lt;/a&gt; - -Autocomplete for Python/JS/TS/Java code.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://marketplace.visualstudio.com/items?itemName=SonarSource.sonarlint-vscode" rel="noopener noreferrer"&gt;SonarLint&lt;/a&gt; --  Best multi language linter.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://marketplace.visualstudio.com/items?itemName=streetsidesoftware.code-spell-checker" rel="noopener noreferrer"&gt;Code Spell Checker&lt;/a&gt; - Tells you to use meaningful words in code and also useful in spellchecking the documents you write.&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  settings.json
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "workbench.colorTheme": "Solarized Dark",
    "workbench.sideBar.location": "left",
    "workbench.settings.enableNaturalLanguageSearch": false,
    "workbench.statusBar.feedback.visible": false,
    "window.zoomLevel": 1,
    "explorer.confirmDragAndDrop": false,
    "explorer.sortOrder": "filesFirst",
    "explorer.openEditors.visible": 0,
    "breadcrumbs.enabled": true,
    "files.autoSave": "afterDelay",
    "files.insertFinalNewline": true,
    "files.trimTrailingWhitespace": true,
    "editor.suggestSelection": "first",
    "editor.minimap.enabled": false,
    "editor.cursorSmoothCaretAnimation": true,
    "editor.formatOnPaste": true,
    "editor.formatOnSave": true,
    "editor.cursorBlinking": "phase",
    "editor.smoothScrolling": true,
    "editor.renderWhitespace": "all",
    "git.autofetch": true,
    "git.postCommitCommand": "push",
    "git.alwaysShowStagedChangesResourceGroup": true,
    "gitlens.codeLens.authors.enabled": false,
    "gitlens.advanced.messages": {
        "suppressFileNotUnderSourceControlWarning": true
    },
    "gitlens.views.repositories.files.layout": "list",
    "cSpell.allowCompoundWords": true,
    "cSpell.userWords": [
        "oauth",
        "postgres",
        "repo",
        "venmo"
    ],
    "cSpell.enabledLanguageIds": [
        "markdown",
        "plaintext",
        "text"
    ],
    "telemetry.enableCrashReporter": false,
    "telemetry.enableTelemetry": false,
    "githubPullRequests.telemetry.enabled": false,
    "eslint.autoFixOnSave": true,
    "javascript.updateImportsOnFileMove.enabled": "always"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>vscode</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Do not abuse the assert</title>
      <dc:creator>Yuva</dc:creator>
      <pubDate>Mon, 13 May 2019 20:37:25 +0000</pubDate>
      <link>https://forem.com/yloganathan/do-not-abuse-the-assert-1cfm</link>
      <guid>https://forem.com/yloganathan/do-not-abuse-the-assert-1cfm</guid>
      <description>&lt;p&gt;Asserts are for programmers to find bugs or situations that should never happen; the program should execute normally if all the asserts are removed. The Assertive Programming section of The Pragmatic Programmer states:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Whenever you find yourself thinking “but of course that could never happen,” add code to check it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It is very easy to take that advice and add asserts to handle every negative case. Most languages provide a convenient way to do this check in debug code, which is typically turned off in production deployments after testing. Asserts are not meant for normal checking (i.e. error handling).&lt;/p&gt;

&lt;h2&gt;
  
  
  Don’t use asserts for normal error handling
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Null checks&lt;/strong&gt; — Consider a Python example where we access a dict. &lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def create_auth_header(oauth):
    assert oauth
    return {"Authorization": "Bearer " + oauth['access_token']}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The assert here provides nothing that the language's own errors do not: a TypeError or KeyError (in Python) carries more information than an AssertionError. Much like try/catch blocks, asserts littered all over the code distract the programmer reading it from the business logic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Validating user input&lt;/strong&gt; — A good program always validates user input, but this should never be done with assertions; exceptions exist for exactly this purpose. In Python, all asserts are skipped when the program runs in optimized mode, so another developer could accidentally bypass all of your input validation.&lt;/p&gt;
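&lt;p&gt;For contrast, here is a sketch of the earlier helper using explicit validation instead of an assert (the exact exception type is a matter of taste):&lt;/p&gt;

```python
def create_auth_header(oauth):
    """Build the Authorization header, validating input explicitly.

    A raised exception survives python -O (which strips asserts),
    and its message says exactly what went wrong.
    """
    if not oauth or "access_token" not in oauth:
        raise ValueError("oauth payload must contain an 'access_token'")
    return {"Authorization": "Bearer " + oauth["access_token"]}
```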

&lt;h2&gt;
  
  
  When do I use asserts then?
&lt;/h2&gt;

&lt;p&gt;Use asserts as a tool to alert you when the cause and the effect are separated by a lot of code. Ever been in a situation where a bug in one piece of code shows up as odd behavior in a completely different module, making debugging a nightmare? We have all been there. We try to avoid that situation by asking a lot of ‘what if’ questions when we write code. What if the input is null? What if the key that I am looking for is not available? What if the world ended? If the distance between the cause (e.g. the input is null) and the effect (e.g. the system exits with a TypeError) is not contained within the function, method, or class itself, there is a case for an assert.&lt;/p&gt;

&lt;p&gt;Let's take a look at an example where an assert might save us a lot of time. This example is taken from the Python wiki; please check the other examples there as well, as they are very helpful in understanding asserts. &lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class PrintQueueList:
   ...
     def add(self, new_queue):
       assert new_queue not in self._list, \
          "%r is already in %r" % (new_queue, self._list)
       assert isinstance(new_queue, PrintQueue), \
          "%r is not a print queue" % new_queue
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Here assert is used to check for duplicates in a list. It is good to fail fast in this case and sleep well at night knowing there cannot be any duplicates in the list, especially since the effect of a duplicate might only become noticeable further down the execution path, with symptoms that do not point directly to the root cause.&lt;/p&gt;

&lt;p&gt;In summary,&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Don’t use asserts to check inputs if the language can fail fast for you.&lt;/li&gt;
&lt;li&gt;Don’t use asserts for user input validation.&lt;/li&gt;
&lt;li&gt;Don’t turn off asserts in production.&lt;/li&gt;
&lt;li&gt;Do use asserts, sparingly, when the distance between cause and effect is not immediate, to cut side effects off early.&lt;/li&gt;
&lt;li&gt;Do use asserts to prevent your system from going into an inconsistent or non-performant state due to programmer errors: the things that “should never happen”, the cases “we didn’t test because we never thought we could get here”. Leave errors with external causes to exceptions.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>bestpractice</category>
      <category>python</category>
    </item>
    <item>
      <title>How do you evaluate a software framework?</title>
      <dc:creator>Yuva</dc:creator>
      <pubDate>Mon, 13 May 2019 19:38:02 +0000</pubDate>
      <link>https://forem.com/ypeavler/how-do-you-evaluate-a-software-framework-7bi</link>
      <guid>https://forem.com/ypeavler/how-do-you-evaluate-a-software-framework-7bi</guid>
      <description>&lt;p&gt;When starting a project, we do a lot of reading and testing to compare the multiple frameworks available in the market. I wanted to share my methods and hope to learn what other software devs do.&lt;/p&gt;

&lt;h2&gt;
  
  
  My methodology
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Time box general research and list two to three frameworks/libraries that I want to try.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Try to solve the most complex problem in the requirements with the items selected in step 1, and timebox this step as well. How much you can solve in the same amount of time is also a good indicator of how easy a selected framework is to use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fill out the rubric for each candidate and compare them, after giving myself a day's break from the research.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Framework Evaluation Rubric
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;What is the fundamental problem it is solving?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How intrusive/opinionated is it?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can I abstract it away so that my code can be protected from framework specific leakage?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What is the cost of changing my mind later?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What is its philosophy of extensions? How easy or difficult is it to write them?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Maturity, community activity, and open bugs&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Do you have a rubric to compare the characteristics that are important to you?&lt;/p&gt;

&lt;p&gt;Thanks &lt;a href="https://github.com/codespider" rel="noopener noreferrer"&gt;Karthikkannan&lt;/a&gt; for giving me the outline.&lt;/p&gt;

</description>
      <category>discuss</category>
    </item>
    <item>
      <title>Create a static site with Python, MkDocs, and S3</title>
      <dc:creator>Yuva</dc:creator>
      <pubDate>Sun, 12 May 2019 18:32:03 +0000</pubDate>
      <link>https://forem.com/ypeavler/how-to-create-a-static-site-with-mk-docs-and-s3-38m8</link>
      <guid>https://forem.com/ypeavler/how-to-create-a-static-site-with-mk-docs-and-s3-38m8</guid>
      <description>&lt;p&gt;We needed a lightweight solution to provide our beta customers with documentation, upcoming features, and FAQs. We wanted to use markdown for developer happiness, and we wanted something simple and easy to use. We settled on MkDocs, a Python tool, to generate a static site hosted in AWS S3.&lt;/p&gt;

&lt;p&gt;We decided to keep customer documentation in the same git repo as the code and started looking for solutions to create a static site from markdown. I picked &lt;a href="https://www.mkdocs.org/" rel="noopener noreferrer"&gt;MkDocs&lt;/a&gt; since it is very lightweight and easy to get started with.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a static site from markdown
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Install MkDocs. &lt;code&gt;pip install mkdocs&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create the folder structure needed by MkDocs. Below is our sample site:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; documents
     -- mkdocs.yml
     -- docs
       -- index.md
       -- faq.md
       -- images/
       -- stylesheets/
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the Material theme for MkDocs: &lt;code&gt;pip install mkdocs-material&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set up MkDocs by updating mkdocs.yml&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;site_name: TrainingPeaks Beta
site_description: How to documents for Beta users.
theme:
    name: 'material'
    favicon: 'images/favicon.ico'
    extra_css:
       - 'stylesheets/extra.css'
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;We added a few minor modifications to the material theme:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up the favicon.&lt;/li&gt;
&lt;li&gt;Provide an additional stylesheet that has some minor overrides.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Serve the site locally: &lt;code&gt;mkdocs serve&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
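&lt;p&gt;Optionally, the pages from the folder structure above can be ordered explicitly with a &lt;code&gt;nav&lt;/code&gt; section in &lt;code&gt;mkdocs.yml&lt;/code&gt; (a sketch; without it, MkDocs discovers the pages automatically):&lt;/p&gt;

```yaml
# page titles here are illustrative
nav:
    - Home: index.md
    - FAQ: faq.md
```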

&lt;h2&gt;
  
  
  Deploy MkDocs site to S3
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Setup S3
&lt;/h3&gt;

&lt;p&gt;I recommend using a CloudFormation template to create a public bucket to host the documents. See the &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/quickref-s3.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; on S3 site templates.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a YAML file&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: 2010-09-09
Description: Resources to host documentation.

Parameters:
SiteName:
    Description: Site name
    Type: String
    Default: tp-beta-learning

Resources:
S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
        BucketName: !Ref SiteName
        AccessControl: PublicRead
        WebsiteConfiguration:
            IndexDocument: index.html
            ErrorDocument: error.html

BucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
    PolicyDocument:
        Id: MyPolicy
        Version: 2012-10-17
        Statement:
          - Sid: PublicReadForGetBucketObjects
            Effect: Allow
            Principal: '*'
            Action: 's3:GetObject'
            Resource: !Join
            - ''
            - - 'arn:aws:s3:::'
                - !Ref S3Bucket
                - /*
    Bucket: !Ref S3Bucket

Outputs:
WebsiteURL:
    Value: !GetAtt
        - S3Bucket
        - WebsiteURL
    Description: URL for website hosted on S3

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Deploy the CloudFormation stack using the AWS CLI.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws cloudformation create-stack --template-body file://help-site.yaml --stack-name Help-Site&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verify that the stack creation succeeded. This step only needs to be done once.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Deploy Site to S3
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;In the local directory, run &lt;code&gt;mkdocs build&lt;/code&gt;. This command will create a &lt;code&gt;site&lt;/code&gt; folder with html files.&lt;/li&gt;
&lt;li&gt;Deploy the site to S3: &lt;code&gt;aws s3 sync ./site s3://tp-beta-learning&lt;/code&gt; (&lt;code&gt;sync&lt;/code&gt; is recursive by default, so no &lt;code&gt;--recursive&lt;/code&gt; flag is needed)
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The site is up and running at the URL &lt;code&gt;http://&amp;lt;bucket-name&amp;gt;.s3-website-&amp;lt;region&amp;gt;.amazonaws.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The next step is to use a custom domain for the site.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>mkdocs</category>
      <category>tutorial</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Docker and Docker Compose</title>
      <dc:creator>Yuva</dc:creator>
      <pubDate>Sun, 12 May 2019 15:23:35 +0000</pubDate>
      <link>https://forem.com/yloganathan/docker-and-docker-compose-415i</link>
      <guid>https://forem.com/yloganathan/docker-and-docker-compose-415i</guid>
      <description>&lt;p&gt;In order to understand docker, we have to go back in time and study the evolution of containers and how we got to where we are!&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a container?
&lt;/h2&gt;

&lt;p&gt;From the docker site&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Containers are a way to package software in a format that can run isolated on a shared operating system. Unlike VMs, containers do not bundle a full operating system - only libraries and settings required to make the software work are needed. This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same, regardless of where it’s deployed.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's unpack that a bit:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Back in the late nineties, VMware introduced the concept of running multiple operating systems on the same hardware.&lt;/li&gt;
&lt;li&gt;In the late 2000s, kernel-level namespacing was introduced, allowing shared global resources like the network and disk to be isolated by namespace.&lt;/li&gt;
&lt;li&gt;In the early 2010s, containerization was born: it took virtualization to the OS level and added shared libs/bins as well. This also means we cannot run two containers that depend on different operating systems on the same host unless we use a VM.&lt;/li&gt;
&lt;li&gt;Namespaces are the true magic behind containers. The principles come from Linux containers, and Docker implemented its own OCI runtime called &lt;a href="https://github.com/opencontainers/runc"&gt;runc&lt;/a&gt;.

&lt;code&gt;
Virtual Machines are virtualization at the hardware level
Containers are virtualization at the OS/Software level
&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Advantages of using containers
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Speed&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Execution speed - Because containers use the underlying host OS, we get speeds close to those of a process running natively on the host.&lt;/li&gt;
&lt;li&gt;Startup speed - Containers can start in less than a second. They are very modular and can share the underlying libs/bins with the host OS when needed.&lt;/li&gt;
&lt;li&gt;Operational speed - Containers enable faster application iterations. There is little overhead in creating a container with new code changes and moving it through the pipeline to production.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Consistency&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build an image once and use it anywhere. The same image that is used to run the tests is used in production. This avoids "works on my machine" problems.&lt;/li&gt;
&lt;li&gt;Not just in production: containers help run tests consistently, too. Ever had all tests pass on your machine while CI failed them?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Scalability&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We can specify exactly how much CPU and memory a single container can consume. By understanding the available resources, containers can be packed densely to minimize waste: scale containers within one host before scaling out to more instances.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Flexibility&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Containers are portable, to an extent (as long as the host runs some form of Linux or a Linux VM).&lt;/li&gt;
&lt;li&gt;You can move a container from one machine to another very quickly. Imagine something goes wrong while patching a security hole in the host OS: we simply move the container to a different host and resume service.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Enter Docker
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Docker as a company
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;In 2013, Docker created the first container platform.&lt;/li&gt;
&lt;li&gt;In 2015, Docker created the &lt;a href="https://opencontainers.org"&gt;Open Container Initiative&lt;/a&gt; - a governance structure around the container image and runtime specifications. They also donated the first runtime to the OCI. The runtime currently used by Docker and many other platforms is runc, written in Go.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Docker runtime/daemon/engine
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Docker Engine is built for linux.&lt;/li&gt;
&lt;li&gt;Docker for Mac uses HyperKit to run a lightweight Alpine Linux virtual machine.&lt;/li&gt;
&lt;li&gt;Docker teamed up with Microsoft to create a Windows OCI runtime, available on Windows 10 and Windows Server 2016.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Docker CLI
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Docker CLI commands look very similar to Git commands, and many share the same mental model:

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;git pull&lt;/code&gt; will get source from origin to local&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docker pull &amp;lt;image&amp;gt;&lt;/code&gt; will get the docker image from remote registry to local&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;Docker follows a client-server model, so the CLI can connect to the local Docker daemon or a remote one&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Docker Images
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, we may build an image which is based on the ubuntu image, but installs the Apache web server and your application, as well as the configuration details needed to make your application run.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We need a &lt;strong&gt;Dockerfile&lt;/strong&gt; to create an image. Let's look at an example: an image that runs a Python Flask application using gunicorn.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.7.3-stretch

ADD . /code
WORKDIR /code

COPY Pipfile Pipfile.lock /code/

RUN apt-get update

RUN apt-get install postgresql postgresql-client --yes &amp;amp;&amp;amp; \
    apt-get -qy install netcat &amp;amp;&amp;amp; \
    pip install --upgrade pip setuptools wheel &amp;amp;&amp;amp; \
    pip install --upgrade pipenv &amp;amp;&amp;amp; \
    pipenv install --dev --system --ignore-pipfile

CMD ["/usr/local/bin/gunicorn", "--config", "wsgi.config", "coach_desk:create_app('development')"]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Images are a collection of immutable layers.
&lt;/h3&gt;

&lt;p&gt;Each instruction in a Dockerfile above creates a layer in the image. When we change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Images can also be built on top of other images.
&lt;/h3&gt;

&lt;p&gt;The first line in the Dockerfile is &lt;code&gt;FROM&lt;/code&gt;, which specifies the image that the current image is built from. Let's look at the &lt;code&gt;Dockerfile&lt;/code&gt; used to create the Python image. It is built from &lt;code&gt;buildpack-deps:stretch&lt;/code&gt;, which provides the basic tools to support any language.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM buildpack-deps:stretch

# ensure local python is preferred over distribution python
ENV PATH /usr/local/bin:$PATH

# http://bugs.python.org/issue19846
# &amp;gt; At the moment, setting "LANG=C" on a Linux system *fundamentally breaks Python 3*, and that's not OK.
ENV LANG C.UTF-8

# extra dependencies (over what buildpack-deps already includes)
RUN apt-get update &amp;amp;&amp;amp; apt-get install -y --no-install-recommends \
        tk-dev \
        uuid-dev \
    &amp;amp;&amp;amp; rm -rf /var/lib/apt/lists/*

ENV GPG_KEY 0D96DF4D4110E5C43FBFB17F2D347EA6AA65421D

ENV PYTHON_VERSION 3.7.3
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;code&gt;buildpack-deps:stretch&lt;/code&gt; is built from &lt;code&gt;buildpack-deps:stretch-scm&lt;/code&gt; which is built from &lt;code&gt;buildpack-deps:stretch-curl&lt;/code&gt; which is built from &lt;code&gt;debian:stretch&lt;/code&gt; which is built from scratch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM scratch
ADD rootfs.tar.xz /
CMD ["bash"]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If I had 1000 Dockerfiles all built from &lt;code&gt;python:3.7.3-stretch&lt;/code&gt;, the shared layers are not downloaded 1000 times but only once. The same goes for containers: when we run a Python container 1000 times, Python is installed only once and reused.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker registry
&lt;/h2&gt;

&lt;p&gt;A registry is a place to store images. When we install Docker, we get a local store where all the images we create are kept.&lt;br&gt;
Try &lt;code&gt;docker images&lt;/code&gt; to list all the images currently on your machine.&lt;br&gt;
&lt;a href="https://hub.docker.com/"&gt;&lt;code&gt;Docker Hub&lt;/code&gt;&lt;/a&gt; is a public registry with over 100,000 images. That is the first place to look for pre-built images that we can use directly or build on top of as a base.&lt;/p&gt;

&lt;p&gt;We can move the images from local to remote using &lt;code&gt;docker push&lt;/code&gt; and &lt;code&gt;docker pull&lt;/code&gt; commands. The default remote registry is docker hub unless we specify explicitly. At Peaksware, we use Amazon ECR to store our production docker images.&lt;/p&gt;
&lt;h2&gt;
  
  
  Docker Compose
&lt;/h2&gt;

&lt;p&gt;Compose was introduced so we do not have to build and start every container manually; it is a tool for defining and running multi-container Docker applications. Compose was initially created for development and testing purposes, and recent Docker releases also allow a compose YAML file to be used to create a Docker swarm stack.&lt;/p&gt;
&lt;h3&gt;
  
  
  Using docker compose for a real application
&lt;/h3&gt;

&lt;p&gt;A microservice that our team works on needs the right Postgres database, with all migrations applied, and the AWS CLI set up in order to run locally on a developer machine. The service was fairly new, the backend kept evolving, and we needed a quick way to spin up everything required to get the service up for the front-end developers who depend on it. Docker Compose came in handy; the compose file below spins up two containers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A Python container that installs all dependencies and starts the web server&lt;/li&gt;
&lt;li&gt;A Postgres database&lt;/li&gt;
&lt;/ol&gt;


&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.7'

services:
  web:
    build:
      context: ..
      dockerfile: df.dev.Dockerfile

    environment:
      - DB_URI=postgres://postgres:postgres@db/idea_box

    command: bash -c  "flask migrate &amp;amp;&amp;amp; flask run -p 5000 -h 0.0.0.0"
    ports:
      - 5000:5000
    links:
      - db
    volumes:
      - ../:/code

  db:
    image: postgres:10.1
    environment:
      POSTGRES_DB: idea_box
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;We can specify dependencies using &lt;code&gt;links&lt;/code&gt;. Compose starts the db container before the web container (note that it waits for the container to start, not for Postgres to be ready) and then executes the entry command &lt;code&gt;bash -c  "flask migrate &amp;amp;&amp;amp; flask run -p 5000 -h 0.0.0.0"&lt;/code&gt;, which runs the migration and starts the Flask server.&lt;/p&gt;
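&lt;p&gt;If the web container needs Postgres to actually accept connections before running migrations, one option is a container healthcheck (a sketch; support for condition-based &lt;code&gt;depends_on&lt;/code&gt; varies across Compose versions):&lt;/p&gt;

```yaml
services:
  web:
    # ... build, command, ports as above ...
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:10.1
    healthcheck:
      # pg_isready exits 0 once Postgres accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
```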

&lt;p&gt;If anyone wants to run this service, they don't have to install Python, Flask, or Postgres; instead they run &lt;code&gt;docker-compose -f docker-compose.yml up&lt;/code&gt; and wait for the API to be available at localhost:5000.&lt;/p&gt;
&lt;h3&gt;
  
  
  Adding real complexity
&lt;/h3&gt;

&lt;p&gt;This works great when your service is that simple. In reality, we had to add a queue and a lambda function to process the queue and send messages to a different service. Fortunately, we found &lt;a href="https://github.com/localstack/localstack"&gt;&lt;code&gt;localstack&lt;/code&gt;&lt;/a&gt;, which emulates AWS services.&lt;/p&gt;

&lt;p&gt;We can spin up an SQS instance locally using &lt;a href="https://github.com/localstack/localstack"&gt;localstack&lt;/a&gt; and create a queue with an init shell script that is invoked via the localstack container's entrypoint.&lt;/p&gt;
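&lt;p&gt;All that init script needs to do is issue an SQS &lt;code&gt;CreateQueue&lt;/code&gt; call against localstack's endpoint. As a sketch of what that call looks like without any AWS SDK, here is a standard-library version using the plain SQS Query API; the queue name &lt;code&gt;work-queue&lt;/code&gt; is hypothetical, the endpoint port comes from the compose file below, and localstack (unlike real AWS) accepts such requests unsigned:&lt;/p&gt;

```python
from urllib import parse, request

# Host-side localstack SQS endpoint, matching the 4576 port mapping
# in the compose file below.
LOCALSTACK_SQS = "http://localhost:4576"


def create_queue_request(endpoint, queue_name):
    """Build an unsigned SQS CreateQueue call using the plain Query API."""
    body = parse.urlencode({
        "Action": "CreateQueue",
        "QueueName": queue_name,
        "Version": "2012-11-05",
    }).encode()
    return request.Request(endpoint, data=body, method="POST")


# With localstack running, sending the request creates the queue:
# request.urlopen(create_queue_request(LOCALSTACK_SQS, "work-queue"))
```

&lt;p&gt;In practice the init script is usually just a one-line &lt;code&gt;aws --endpoint-url&lt;/code&gt; (or &lt;code&gt;awslocal&lt;/code&gt;) invocation doing the same thing.&lt;/p&gt;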

&lt;p&gt;This still does not represent the complete service: we also need a local Lambda function that reads from the queue and pushes messages to another service. This is where I found that the effort of setting up the entire service within Docker Compose outweighed the benefits.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.7'

services:
  web:
    build:
      context: ..
      dockerfile: df.dev.Dockerfile

    environment:
      - DB_URI=postgres://postgres:postgres@db/coach_desk
      - AWS_ACCESS_KEY_ID=foo
      - AWS_SECRET_ACCESS_KEY=bar
      - AWS_ENDPOINT=http://aws:4576

    command: bash -c  "flask migrate &amp;amp;&amp;amp; flask run -p 5000 -h 0.0.0.0"
    ports:
      - 5000:5000
    links:
      - db
      - aws
    volumes:
      - ../:/code

  db:
    image: postgres:10.1
    environment:
      POSTGRES_DB: coach_desk


  aws:
    image: localstack/localstack
    ports:
      - 4576:4576
      - 8080:8080
    environment:
      - SERVICES=sqs
      - DEBUG=True
    volumes:
      - ./localstack:/docker-entrypoint-initaws.d
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Even with some of the complexity involved in using Docker Compose for a real service, I would recommend experimenting with it to see if it works for your team. I would love to hear how you or your team use Docker for development and testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Developing with Docker
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;No painful developer-machine setup. With Compose, anyone can spin up a service quickly without having to install dependencies they will never use directly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Consistent outcome from dev to prod. Builds are reproducible and reliable, so the image you test behaves exactly the same way in production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Faster testing. Our tests need a test database, with tear-down after each test or group of tests to clean it up. I am working on ways to run tests in parallel against databases running in multiple containers for our project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Code reviews can be painless. Each developer can attach an image to their code review, and the reviewer can quickly spin it up to test a different version of the code without interrupting what they are doing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Quick fixes can be quick. When developers find bugs, they can fix them in the development environment and redeploy to the test environment for testing and validation. When testing is complete, getting the fix to the customer is as simple as pushing the updated image to the production environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
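&lt;p&gt;On the testing point above: one way to give the suite its own throwaway database is a compose override file. This is only a sketch under assumptions — the override file name, the &lt;code&gt;pytest&lt;/code&gt; runner, and the &lt;code&gt;idea_box_test&lt;/code&gt; database name are all hypothetical:&lt;/p&gt;

```yaml
# docker-compose.test.yml (hypothetical): overrides the dev setup for tests
version: '3.7'

services:
  web:
    # run the test suite instead of the dev server
    command: bash -c "flask migrate &amp;amp;&amp;amp; pytest"
    environment:
      - DB_URI=postgres://postgres:postgres@db/idea_box_test

  db:
    environment:
      POSTGRES_DB: idea_box_test
```

&lt;p&gt;Run it with &lt;code&gt;docker-compose -f docker-compose.yml -f docker-compose.test.yml run --rm web&lt;/code&gt; and tear everything down with &lt;code&gt;docker-compose down -v&lt;/code&gt;; repeating that with different project names (&lt;code&gt;-p&lt;/code&gt;) is one route to the parallel test databases mentioned above.&lt;/p&gt;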

</description>
      <category>docker</category>
      <category>dockercompose</category>
      <category>containers</category>
    </item>
  </channel>
</rss>
