<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Parimal </title>
    <description>The latest articles on Forem by Parimal  (@parimal5).</description>
    <link>https://forem.com/parimal5</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3396259%2F3d54b980-b788-4d8e-9a97-c3298cbcfe21.jpg</url>
      <title>Forem: Parimal </title>
      <link>https://forem.com/parimal5</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/parimal5"/>
    <language>en</language>
    <item>
      <title>Why Cilium Outperforms AWS VPC CNI: A Deep Dive into Kubernetes Networking</title>
      <dc:creator>Parimal </dc:creator>
      <pubDate>Thu, 14 Aug 2025 09:18:57 +0000</pubDate>
      <link>https://forem.com/parimal5/why-cilium-outperforms-aws-vpc-cni-a-deep-dive-into-kubernetes-networking-nef</link>
      <guid>https://forem.com/parimal5/why-cilium-outperforms-aws-vpc-cni-a-deep-dive-into-kubernetes-networking-nef</guid>
      <description>&lt;p&gt;Running Kubernetes at scale in AWS presents unique networking challenges that can significantly impact your application performance and operational efficiency. While AWS VPC CNI serves as the default networking solution for EKS clusters, it often becomes the bottleneck when dealing with high-scale or dynamic workloads. Enter Cilium - an eBPF-powered CNI that's revolutionizing how we think about Kubernetes networking.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Foundation: Understanding CNI in Kubernetes
&lt;/h2&gt;

&lt;p&gt;Container Network Interface (CNI) plugins serve as the backbone of Kubernetes networking, handling three critical responsibilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;IP Address Management&lt;/strong&gt;: Allocating unique IP addresses to pods&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Configuration&lt;/strong&gt;: Setting up routes and network interfaces for inter-pod communication
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Integration&lt;/strong&gt;: Bridging container networking with the underlying infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a pod initializes, Kubernetes delegates to the CNI plugin with a simple request: "Assign an IP to this pod and ensure it can communicate with the cluster." Without a functional CNI, your pods remain isolated islands with no networking capabilities.&lt;/p&gt;
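To make this concrete, the kubelet discovers the plugin through a small JSON file in /etc/cni/net.d/ and then execs the plugin binary named in its "type" field for every pod ADD/DEL. A minimal sketch of such a file, here using the reference bridge plugin with host-local IPAM (plugin choice and subnet are illustrative, not what AWS VPC CNI or Cilium actually install):

```shell
# Minimal CNI config sketch: the kubelet loads the lexically first file in
# /etc/cni/net.d/ and execs the plugin named in "type".
# Plugin choice and subnet here are illustrative.
cat > 10-demo-net.conf <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "type": "bridge",
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24"
  }
}
EOF

grep '"type"' 10-demo-net.conf
```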

&lt;h2&gt;
  
  
  AWS VPC CNI: The Default Choice and Its Limitations
&lt;/h2&gt;

&lt;p&gt;Amazon EKS ships with AWS VPC CNI as the default networking solution, designed to integrate seamlessly with AWS networking primitives.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture Deep Dive
&lt;/h3&gt;

&lt;p&gt;AWS VPC CNI operates on a straightforward principle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each worker node receives a primary Elastic Network Interface (ENI)&lt;/li&gt;
&lt;li&gt;Additional secondary ENIs can be attached based on instance capacity&lt;/li&gt;
&lt;li&gt;Each ENI supports multiple secondary IP addresses&lt;/li&gt;
&lt;li&gt;Pods receive VPC-native IP addresses directly from the subnet pool&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Benefits
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Native VPC Integration&lt;/strong&gt;: Pods become first-class citizens in your VPC, enabling direct communication with other AWS services without additional network hops.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zero Encapsulation Overhead&lt;/strong&gt;: Network packets flow through native AWS routing without additional headers or processing overhead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Group Integration&lt;/strong&gt;: Pods can leverage existing VPC security group policies for network access control.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Performance Bottlenecks
&lt;/h3&gt;

&lt;p&gt;Despite its integration advantages, AWS VPC CNI introduces several scalability constraints:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ENI Management Latency&lt;/strong&gt;: Each ENI attachment requires AWS API calls, introducing latency measured in seconds rather than milliseconds. During rapid scaling events, this becomes a significant bottleneck.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Subnet IP Address Exhaustion&lt;/strong&gt;: Every pod consumes a routable VPC IP address, leading to subnet exhaustion in large clusters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instance-Specific Scaling Limits&lt;/strong&gt;: The maximum number of pods per node is constrained by the ENI and IP limits of your EC2 instance type. For example, an m5.large instance supports only 3 ENIs with 10 IPs each, limiting you to approximately 29 pods per node.&lt;/p&gt;
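The 29-pod figure falls out of a simple formula: each ENI's primary IP cannot be assigned to a pod, and two host-network pods (aws-node and kube-proxy) do not consume VPC IPs. A quick sketch of the calculation:

```shell
# AWS VPC CNI max pods = ENIs * (IPs per ENI - 1) + 2
# (each ENI's primary IP is reserved; +2 for the host-network pods)
max_pods() {
  local enis=$1 ips_per_eni=$2
  echo $(( enis * (ips_per_eni - 1) + 2 ))
}

max_pods 3 10   # m5.large  -> 29
max_pods 4 15   # m5.xlarge -> 58
```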

&lt;p&gt;&lt;strong&gt;Limited Observability&lt;/strong&gt;: Network flow visibility requires additional tooling and configuration, complicating troubleshooting and security auditing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cilium: The eBPF-Powered Alternative
&lt;/h2&gt;

&lt;p&gt;Cilium leverages Extended Berkeley Packet Filter (eBPF) technology to provide high-performance networking with advanced observability and security features baked in.&lt;/p&gt;

&lt;h3&gt;
  
  
  Core Advantages
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Hubble Integration&lt;/strong&gt;: Real-time network flow observability without additional agents or performance overhead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advanced Network Policies&lt;/strong&gt;: Support for Layer 3, 4, and 7 filtering with HTTP-aware rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service Mesh Without Sidecars&lt;/strong&gt;: Built-in load balancing, encryption, and traffic management without the resource overhead of traditional service mesh proxies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flexible IPAM Options&lt;/strong&gt;: Multiple IP address management modes to suit different architectural requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  IPAM Mode Comparison
&lt;/h3&gt;

&lt;p&gt;Cilium supports multiple IPAM strategies, each optimized for different use cases:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ENI Mode&lt;/strong&gt;: Functions similarly to AWS VPC CNI, using secondary ENI IPs while adding Cilium's observability and policy features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cluster Pool (Overlay) Mode&lt;/strong&gt;: Manages IP addresses from Cilium-controlled pools, using VXLAN or Geneve encapsulation for pod-to-pod communication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes Mode&lt;/strong&gt;: Delegates IP management to Kubernetes' native IPAM, providing flexibility for custom implementations.&lt;/p&gt;
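As a concrete sketch, cluster-pool mode is selected through Cilium's Helm values. The key names below follow the Cilium Helm chart, but confirm them against your chart version; the pod CIDR is illustrative and must not overlap your VPC ranges:

```shell
# Sketch: Helm values enabling cluster-pool IPAM with a VXLAN overlay.
# Key names follow the Cilium Helm chart -- confirm against your chart version;
# the pod CIDR is illustrative and must not overlap your VPC ranges.
cat > cilium-values.yaml <<'EOF'
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4PodCIDRList:
      - 10.42.0.0/16
    clusterPoolIPv4MaskSize: 24
routingMode: tunnel
tunnelProtocol: vxlan
EOF

grep 'mode:' cilium-values.yaml
```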

&lt;h2&gt;
  
  
  Performance Analysis: Where Cilium Excels
&lt;/h2&gt;

&lt;p&gt;The performance differential between AWS VPC CNI and Cilium becomes most apparent during pod lifecycle operations and scaling events.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS VPC CNI Pod Startup Sequence
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Pod creation request → Kubelet
2. CNI invocation → IP address requirement
3. ENI capacity check → Available secondary IPs
4. ENI attachment (if needed) → AWS API call (2-5 seconds)
5. Secondary IP allocation → AWS API call
6. Network interface configuration → Pod ready
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sequence introduces significant latency, particularly when ENI limits are reached and new interfaces must be provisioned.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cilium Overlay Mode Startup Sequence
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Pod creation request → Kubelet  
2. CNI invocation → IP address requirement
3. Instant IP allocation → From pre-allocated pool
4. eBPF program configuration → Millisecond-level operation
5. Network interface ready → Pod ready
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The elimination of AWS API dependencies results in pod networking readiness in milliseconds rather than seconds.&lt;/p&gt;

&lt;h3&gt;
  
  
  Trade-off Considerations
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Encapsulation Overhead&lt;/strong&gt;: Overlay networking adds roughly 50 bytes of VXLAN/Geneve headers per packet, a small throughput cost that also reduces the effective pod MTU.&lt;/p&gt;
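The overhead is easy to quantify: VXLAN wraps each packet in roughly 50 bytes of extra headers, which shrinks the MTU available to pods. Assuming AWS jumbo frames (9001 bytes):

```shell
# VXLAN adds ~50 bytes per packet:
# outer Ethernet (14) + outer IPv4 (20) + UDP (8) + VXLAN (8) = 50
node_mtu=9001          # AWS jumbo-frame MTU
vxlan_overhead=50
pod_mtu=$(( node_mtu - vxlan_overhead ))
echo "pod MTU: $pod_mtu"   # -> pod MTU: 8951
```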

&lt;p&gt;&lt;strong&gt;VPC Integration&lt;/strong&gt;: Pods in overlay mode aren't directly addressable from the VPC, requiring ingress controllers or NodePort services for external access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Policies&lt;/strong&gt;: eBPF-based policy enforcement often outperforms iptables-based alternatives, especially at scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Migration Impact
&lt;/h2&gt;

&lt;p&gt;Organizations migrating from AWS VPC CNI to Cilium overlay mode typically report:&lt;/p&gt;

&lt;h3&gt;
  
  
  Before Migration Challenges
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Pod scaling operations taking multiple minutes due to ENI provisioning delays&lt;/li&gt;
&lt;li&gt;Frequent subnet IP address exhaustion requiring subnet expansion or cluster restructuring
&lt;/li&gt;
&lt;li&gt;Complex toolchain requirements for network observability and security policy enforcement&lt;/li&gt;
&lt;li&gt;Difficulty troubleshooting inter-service communication issues&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Post-Migration Improvements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Pod networking readiness reduced to sub-second timeframes&lt;/li&gt;
&lt;li&gt;Elimination of subnet IP address constraints enabling higher pod density&lt;/li&gt;
&lt;li&gt;Unified platform for networking, security, and observability through Cilium and Hubble&lt;/li&gt;
&lt;li&gt;Enhanced debugging capabilities with flow-level visibility&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Decision Framework: Choosing the Right CNI
&lt;/h2&gt;

&lt;p&gt;Your CNI choice should align with your specific requirements and constraints:&lt;/p&gt;

&lt;h3&gt;
  
  
  Choose AWS VPC CNI When:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Regulatory compliance mandates VPC-native pod IP addresses&lt;/li&gt;
&lt;li&gt;Direct pod-to-AWS-service communication is required without additional network hops&lt;/li&gt;
&lt;li&gt;Your workloads are relatively static with predictable scaling patterns&lt;/li&gt;
&lt;li&gt;You have sufficient subnet IP address space allocated&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Choose Cilium ENI Mode When:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You need VPC-native IPs but want enhanced observability and security features&lt;/li&gt;
&lt;li&gt;Compliance requirements are flexible regarding network encapsulation&lt;/li&gt;
&lt;li&gt;You're planning to implement advanced network policies&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Choose Cilium Overlay Mode When:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Rapid scaling and high pod density are critical requirements&lt;/li&gt;
&lt;li&gt;Subnet IP address management is becoming operationally complex&lt;/li&gt;
&lt;li&gt;You need comprehensive network observability and security policy enforcement&lt;/li&gt;
&lt;li&gt;Your applications can work with ingress-based external connectivity&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implementation Considerations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Migration Strategy
&lt;/h3&gt;

&lt;p&gt;Migrating from AWS VPC CNI to Cilium requires careful planning:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Cluster Preparation&lt;/strong&gt;: Ensure your EKS cluster version supports alternative CNIs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Policy Audit&lt;/strong&gt;: Review existing security groups and translate to Cilium network policies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service Discovery&lt;/strong&gt;: Verify that your service discovery mechanisms work with overlay networking&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring Integration&lt;/strong&gt;: Plan for migrating network monitoring from AWS-native tools to Hubble&lt;/li&gt;
&lt;/ol&gt;
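At a high level, the cutover itself boils down to removing the aws-node DaemonSet, installing Cilium, and recycling nodes so every pod is recreated under the new CNI. A sketch of such a script (the chart version, values file, and node-rolling method are all deployment-specific; rehearse this on a non-production cluster first):

```shell
# Sketch of an AWS VPC CNI -> Cilium overlay cutover. Chart version, values,
# and the node-rolling step are deployment-specific; do not run as-is in prod.
cat > migrate-to-cilium.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# 1. Remove the AWS VPC CNI agent (running pods keep their IPs until rescheduled)
kubectl -n kube-system delete daemonset aws-node

# 2. Install Cilium in overlay mode
helm repo add cilium https://helm.cilium.io/
helm upgrade --install cilium cilium/cilium \
  --namespace kube-system \
  --values cilium-values.yaml

# 3. Roll every node (drain + replace) so pods are recreated under Cilium
EOF
chmod +x migrate-to-cilium.sh
```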

&lt;h3&gt;
  
  
  Performance Optimization
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;eBPF Program Efficiency&lt;/strong&gt;: Cilium's eBPF programs are compiled for your specific kernel version, ensuring optimal performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CPU and Memory Usage&lt;/strong&gt;: Cilium typically uses fewer resources than traditional iptables-based CNIs, especially as the number of network policies grows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Throughput&lt;/strong&gt;: While overlay networking introduces minimal overhead, direct benchmarking in your environment is recommended.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Future of Kubernetes Networking
&lt;/h2&gt;

&lt;p&gt;eBPF technology continues evolving rapidly, with new capabilities being added regularly. Cilium's position at the forefront of this evolution means choosing it today provides access to emerging features like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Advanced load balancing algorithms&lt;/strong&gt; without external load balancers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-cluster networking&lt;/strong&gt; with transparent service discovery across clusters
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced security features&lt;/strong&gt; including runtime threat detection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance optimizations&lt;/strong&gt; that leverage new eBPF capabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While AWS VPC CNI remains a solid choice for straightforward, compliance-driven Kubernetes deployments, Cilium offers compelling advantages for organizations prioritizing performance, scalability, and operational simplicity. The combination of eBPF-powered networking, comprehensive observability through Hubble, and flexible IPAM options makes Cilium particularly attractive for dynamic, high-scale workloads.&lt;/p&gt;

&lt;p&gt;The decision ultimately depends on your specific requirements, but as Kubernetes environments grow in complexity and scale, the advanced capabilities provided by Cilium's eBPF foundation position it as the networking solution for the future of container orchestration.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have you experienced ENI limits or scaling challenges with AWS VPC CNI? Share your experiences and questions about Cilium migration in the comments below.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Demystifying mTLS in Kubernetes: Certs, Components, and Cluster Security.</title>
      <dc:creator>Parimal </dc:creator>
      <pubDate>Mon, 04 Aug 2025 08:09:18 +0000</pubDate>
      <link>https://forem.com/parimal5/demystifying-mtls-in-kubernetes-certs-components-and-cluster-security-21fa</link>
      <guid>https://forem.com/parimal5/demystifying-mtls-in-kubernetes-certs-components-and-cluster-security-21fa</guid>
      <description>&lt;h2&gt;
  
  
  🧩 Introduction
&lt;/h2&gt;

&lt;p&gt;When we talk about Kubernetes security, most developers think of Role-Based Access Control (RBAC) or Network Policies. But before these higher-level controls come into play, there's a foundational layer that silently ensures every component in the cluster is speaking to a trusted source — &lt;strong&gt;mutual TLS (mTLS)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In a self-managed Kubernetes cluster, unlike cloud-managed solutions like EKS or GKE, you're responsible for configuring, maintaining, and rotating these certificates. Understanding how mTLS works—and knowing where each certificate lives—gives you deeper visibility into the cluster's internal trust system and can help you troubleshoot or harden your deployment.&lt;/p&gt;

&lt;p&gt;This post will walk you through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What mTLS means in the Kubernetes control plane&lt;/li&gt;
&lt;li&gt;Which components use certificates and why&lt;/li&gt;
&lt;li&gt;Where those certificates live in your filesystem&lt;/li&gt;
&lt;li&gt;How to inspect and manage them in your self-hosted cluster&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔐 What is mTLS in Kubernetes?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;mTLS (Mutual TLS)&lt;/strong&gt; is a mechanism where both client and server authenticate each other using certificates. In Kubernetes, this isn't just used to encrypt traffic; it's also a crucial part of authenticating various components within the control plane.&lt;/p&gt;

&lt;p&gt;Let's look at a few examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When the &lt;strong&gt;kubelet&lt;/strong&gt; talks to the &lt;strong&gt;API server&lt;/strong&gt;, it presents a certificate proving it is a legitimate node&lt;/li&gt;
&lt;li&gt;When the &lt;strong&gt;controller-manager&lt;/strong&gt; communicates with &lt;strong&gt;etcd&lt;/strong&gt;, both use TLS certificates to verify identity&lt;/li&gt;
&lt;li&gt;Even users interacting with the cluster via &lt;strong&gt;kubectl&lt;/strong&gt; often use certificates behind the scenes (depending on your setup)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This secure, verified communication is enforced using X.509 certificates, which are generated during cluster setup (commonly via &lt;code&gt;kubeadm&lt;/code&gt;) and stored in predictable locations in the filesystem.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧩 Kubernetes Components Using Certificates (mTLS)
&lt;/h2&gt;

&lt;p&gt;Here's a breakdown of the key components and where their TLS certificates typically live in a self-managed (kubeadm-based) cluster:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Certificate Path&lt;/th&gt;
&lt;th&gt;Purpose / Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;API Server&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;/etc/kubernetes/pki/apiserver.crt&lt;/code&gt; &amp;amp; &lt;code&gt;.key&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Server cert for API requests&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;API Server CA&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;/etc/kubernetes/pki/ca.crt&lt;/code&gt; &amp;amp; &lt;code&gt;.key&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Root CA that signs other certs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Controller Manager&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;/etc/kubernetes/controller-manager.conf&lt;/code&gt; (client cert embedded in kubeconfig)
&lt;/td&gt;
&lt;td&gt;Used to talk to API server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scheduler&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;/etc/kubernetes/scheduler.conf&lt;/code&gt; (client cert embedded in kubeconfig)
&lt;/td&gt;
&lt;td&gt;Talks to API server securely&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Etcd (server)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;/etc/kubernetes/pki/etcd/server.crt&lt;/code&gt; &amp;amp; &lt;code&gt;.key&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;API server &amp;amp; etcd authenticate each other&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Etcd (peer)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;/etc/kubernetes/pki/etcd/peer.crt&lt;/code&gt; &amp;amp; &lt;code&gt;.key&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;For peer-to-peer etcd clustering&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Etcd CA&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;/etc/kubernetes/pki/etcd/ca.crt&lt;/code&gt; &amp;amp; &lt;code&gt;.key&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Separate CA from main API server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Kubelet&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;/var/lib/kubelet/pki/kubelet-client-current.pem&lt;/code&gt; (cert &amp;amp; key in one PEM)
&lt;/td&gt;
&lt;td&gt;Authenticates with API server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Kube-Proxy&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
Kubeconfig from the &lt;code&gt;kube-proxy&lt;/code&gt; ConfigMap (no static PKI file)
&lt;/td&gt;
&lt;td&gt;Talks to API server for service updates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Front Proxy&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/etc/kubernetes/pki/front-proxy-client.crt&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;For API aggregation layer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Admin User (kubectl)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/etc/kubernetes/admin.conf&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Includes user certs for API access&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🛠️ Who Issues These Certificates?
&lt;/h2&gt;

&lt;p&gt;In self-managed Kubernetes clusters, especially those initialized with &lt;code&gt;kubeadm&lt;/code&gt;, the certificate lifecycle is handled during setup — and it's your responsibility to maintain them over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔐 CA (Certificate Authority)
&lt;/h3&gt;

&lt;p&gt;When you run &lt;code&gt;kubeadm init&lt;/code&gt;, it generates a root Certificate Authority (CA) stored at:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This CA is then used to sign the server and client certificates for all core components (API server, controller manager, scheduler, etc.).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;There's also a separate CA for etcd:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ⚙️ Inspecting the Certificates
&lt;/h3&gt;

&lt;p&gt;You can list the certificate files kubeadm generated with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;ls /etc/kubernetes/pki /etc/kubernetes/pki/etcd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And you can check expiration dates with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm certs check-expiration
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll typically see that most certs expire in &lt;strong&gt;1 year&lt;/strong&gt; by default.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔁 Renewing Certificates
&lt;/h3&gt;

&lt;p&gt;To renew all kubeadm-managed certs (the kubelet's client certificate rotates separately), run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm certs renew all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The kubelet rotates its client certificate automatically, provided &lt;code&gt;rotateCertificates: true&lt;/code&gt; is set in &lt;code&gt;/var/lib/kubelet/config.yaml&lt;/code&gt; (the default in kubeadm clusters).&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🔁 mTLS in Action: Example Flow
&lt;/h2&gt;

&lt;p&gt;Let's take a concrete example: &lt;strong&gt;how the kubelet authenticates with the API server&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;kubelet&lt;/strong&gt; starts up and wants to register the node with the cluster&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It presents its client certificate (&lt;code&gt;/var/lib/kubelet/pki/kubelet-client-current.pem&lt;/code&gt;) to the &lt;strong&gt;API server&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;API server&lt;/strong&gt; checks:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Is the cert signed by the CA (&lt;code&gt;ca.crt&lt;/code&gt;) it trusts?&lt;/li&gt;
&lt;li&gt;Is it expired?&lt;/li&gt;
&lt;li&gt;Does it belong to a valid node?&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="4"&gt;
&lt;li&gt;If valid, the kubelet is authenticated and proceeds to perform actions like:

&lt;ul&gt;
&lt;li&gt;Registering the node&lt;/li&gt;
&lt;li&gt;Posting status updates&lt;/li&gt;
&lt;li&gt;Watching for Pod objects&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This exact mTLS exchange also happens between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API server ↔ controller-manager&lt;/li&gt;
&lt;li&gt;API server ↔ scheduler&lt;/li&gt;
&lt;li&gt;API server ↔ etcd&lt;/li&gt;
&lt;li&gt;etcd ↔ etcd peers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each time, both sides present certificates and verify identities before any communication happens — enforcing strict trust boundaries within the control plane.&lt;/p&gt;
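You can reproduce the trust check described above with nothing but openssl: mint a throwaway CA, sign a client certificate with it, and verify the chain the same way the API server does. Every file name here is illustrative, and nothing touches the cluster's real PKI:

```shell
# Throwaway CA + client cert; illustrates the signature check the API server runs.
# Names are illustrative -- nothing here touches the cluster's real PKI.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" -keyout ca.key -out ca.crt 2>/dev/null

openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=system:node:demo-node" -keyout client.key -out client.csr 2>/dev/null

openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 1 -out client.crt 2>/dev/null

openssl verify -CAfile ca.crt client.crt   # -> client.crt: OK
```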




&lt;h2&gt;
  
  
  🔎 How to Inspect Kubernetes Certificates Manually
&lt;/h2&gt;

&lt;p&gt;Sometimes you'll want to manually inspect a certificate — whether for troubleshooting, verifying SANs, or checking expiration dates. You can use &lt;code&gt;openssl&lt;/code&gt; for this.&lt;/p&gt;

&lt;h3&gt;
  
  
  📂 Example: Inspect the API Server certificate
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;openssl x509 &lt;span class="nt"&gt;-in&lt;/span&gt; /etc/kubernetes/pki/apiserver.crt &lt;span class="nt"&gt;-text&lt;/span&gt; &lt;span class="nt"&gt;-noout&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will output details like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Issuer&lt;/strong&gt; – usually &lt;code&gt;kubernetes-ca&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validity&lt;/strong&gt; – Not Before / Not After (expiration)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subject&lt;/strong&gt; – e.g., &lt;code&gt;CN=kube-apiserver&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;X509v3 Subject Alternative Name&lt;/strong&gt; – e.g., IPs and DNS names like &lt;code&gt;kubernetes&lt;/code&gt;, &lt;code&gt;kubernetes.default&lt;/code&gt;, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  📂 Example: Check kubelet certificate
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;openssl x509 &lt;span class="nt"&gt;-in&lt;/span&gt; /var/lib/kubelet/pki/kubelet-client.crt &lt;span class="nt"&gt;-text&lt;/span&gt; &lt;span class="nt"&gt;-noout&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This helps confirm:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which node the cert is tied to&lt;/li&gt;
&lt;li&gt;If rotation has occurred&lt;/li&gt;
&lt;li&gt;Whether the cert has expired or is about to expire&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🛡️ Security &amp;amp; Hardening Tips
&lt;/h2&gt;

&lt;p&gt;When you manage your own Kubernetes control plane, here are a few things to keep in mind:&lt;/p&gt;

&lt;h3&gt;
  
  
  🔄 Rotate Certificates Regularly
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Certs expire after &lt;strong&gt;1 year&lt;/strong&gt; by default&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;kubeadm certs renew&lt;/code&gt; or set up automation&lt;/li&gt;
&lt;li&gt;Monitor expiration with:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubeadm certs check-expiration
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
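For automation beyond kubeadm's own tooling, openssl's -checkend flag exits non-zero when a certificate expires within a given window, which drops neatly into a cron job or monitoring probe (the kubeadm path in the comment is the usual default; adjust to your layout):

```shell
# Exit-code based expiry check; drops into a cron job or monitoring probe.
check_cert_expiry() {
  local cert=$1 seconds=${2:-2592000}   # default window: 30 days
  if openssl x509 -checkend "$seconds" -noout -in "$cert" >/dev/null 2>&1; then
    echo "$cert: OK"
  else
    echo "$cert: EXPIRING within $((seconds / 86400)) days (or unreadable)"
  fi
}

# On a kubeadm node you would point it at the real PKI, e.g.:
# check_cert_expiry /etc/kubernetes/pki/apiserver.crt
```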



&lt;h3&gt;
  
  
  🔐 Protect Your CA Keys
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Keep &lt;code&gt;/etc/kubernetes/pki/ca.key&lt;/code&gt; and &lt;code&gt;/etc/kubernetes/pki/etcd/ca.key&lt;/code&gt; strictly restricted&lt;/li&gt;
&lt;li&gt;Never expose CA keys to unauthorized users or automation tools&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🚫 Never Expose etcd Publicly
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;etcd should &lt;strong&gt;never&lt;/strong&gt; be exposed to the public internet&lt;/li&gt;
&lt;li&gt;Ensure it only listens on localhost or secure internal networks&lt;/li&gt;
&lt;li&gt;Always enforce client certs (&lt;code&gt;--cert-file&lt;/code&gt;, &lt;code&gt;--key-file&lt;/code&gt;, &lt;code&gt;--client-cert-auth&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔍 Audit TLS Flags in Static Pod Manifests
&lt;/h3&gt;

&lt;p&gt;Check &lt;code&gt;/etc/kubernetes/manifests/&lt;/code&gt; for your API server pod config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--tls-cert-file=/etc/kubernetes/pki/apiserver.crt&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--tls-private-key-file=/etc/kubernetes/pki/apiserver.key&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--client-ca-file=/etc/kubernetes/pki/ca.crt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ensure all these values are set and point to correct, non-expired certs.&lt;/p&gt;




&lt;h2&gt;
  
  
  ✅ Conclusion
&lt;/h2&gt;

&lt;p&gt;Mutual TLS is at the heart of Kubernetes' internal security model — silently ensuring that every component talks only to what it can trust. In cloud-managed clusters, this is handled for you, but in self-managed setups, &lt;strong&gt;you're the certificate authority&lt;/strong&gt;, quite literally.&lt;/p&gt;

&lt;p&gt;In this post, you learned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What mTLS is and why it matters in Kubernetes&lt;/li&gt;
&lt;li&gt;Which components use it and where the certs are stored&lt;/li&gt;
&lt;li&gt;How certificates are issued and rotated&lt;/li&gt;
&lt;li&gt;How to inspect and secure them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Understanding these mechanics gives you a strong edge as a DevOps engineer, especially when operating production clusters or preparing for certifications like the &lt;strong&gt;CKA&lt;/strong&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;What's your experience with managing certificates in Kubernetes? Have you encountered any certificate-related issues in your clusters? Share your thoughts in the comments below!&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Deploy Your First Web App with AWS App Runner: Fully Managed &amp; Container-Ready</title>
      <dc:creator>Parimal </dc:creator>
      <pubDate>Wed, 30 Jul 2025 08:06:58 +0000</pubDate>
      <link>https://forem.com/parimal5/deploy-your-first-web-app-with-aws-app-runner-fully-managed-container-ready-127g</link>
      <guid>https://forem.com/parimal5/deploy-your-first-web-app-with-aws-app-runner-fully-managed-container-ready-127g</guid>
      <description>&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;What is AWS App Runner?&lt;/li&gt;
&lt;li&gt;Why Use App Runner?&lt;/li&gt;
&lt;li&gt;When to Use App Runner&lt;/li&gt;
&lt;li&gt;Pre-requisites&lt;/li&gt;
&lt;li&gt;Deployment Guide&lt;/li&gt;
&lt;li&gt;Observability &amp;amp; Scaling&lt;/li&gt;
&lt;li&gt;Pricing Overview&lt;/li&gt;
&lt;li&gt;Pros &amp;amp; Limitations&lt;/li&gt;
&lt;li&gt;Final Thoughts&lt;/li&gt;
&lt;li&gt;Further Reading&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is AWS App Runner?
&lt;/h2&gt;

&lt;p&gt;AWS App Runner is a &lt;strong&gt;fully managed service&lt;/strong&gt; that makes it easy to deploy containerized applications from your source code or container image. You don't need to manage servers, orchestration, or scaling logic—App Runner handles it all.&lt;/p&gt;

&lt;p&gt;Whether your app lives in a GitHub repo or a container registry (like Amazon ECR), App Runner can automatically build and deploy it, scaling up and down as needed.&lt;/p&gt;
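For source deployments, App Runner reads an apprunner.yaml at the repository root that tells it how to build and start the service. A minimal sketch for a Node.js app (the runtime name, build command, and port are illustrative; check App Runner's list of supported managed runtimes for your stack):

```shell
# Sketch of apprunner.yaml for a Node.js source deployment; runtime name,
# build command, and port are illustrative -- check App Runner's supported runtimes.
cat > apprunner.yaml <<'EOF'
version: 1.0
runtime: nodejs16
build:
  commands:
    build:
      - npm install
run:
  command: node server.js
  network:
    port: 8080
EOF

grep 'runtime:' apprunner.yaml
```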

&lt;h2&gt;
  
  
  Why Use App Runner?
&lt;/h2&gt;

&lt;p&gt;Compared to traditional services like EC2, ECS, or even Lambda, App Runner offers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🛠️ &lt;strong&gt;Zero infrastructure management&lt;/strong&gt; – No need to manage VPCs, load balancers, or clusters&lt;/li&gt;
&lt;li&gt;🚀 &lt;strong&gt;Quick deployments&lt;/strong&gt; – Deploy from GitHub or ECR in a few steps&lt;/li&gt;
&lt;li&gt;📈 &lt;strong&gt;Auto-scaling&lt;/strong&gt; – Automatically handles traffic spikes&lt;/li&gt;
&lt;li&gt;🔁 &lt;strong&gt;CI/CD support&lt;/strong&gt; – Automatic redeploys on every code change (when using GitHub)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's especially useful when you need to move fast with a minimal setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Use App Runner
&lt;/h2&gt;

&lt;p&gt;✅ &lt;strong&gt;App Runner is a perfect fit when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want to deploy containerized apps but don't want to manage ECS or Kubernetes&lt;/li&gt;
&lt;li&gt;You need rapid prototyping or dev environments&lt;/li&gt;
&lt;li&gt;You're building web APIs or frontend services&lt;/li&gt;
&lt;li&gt;You prefer GitHub-to-production workflows with minimal infrastructure fuss&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Pre-requisites
&lt;/h2&gt;

&lt;p&gt;Before we begin, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account&lt;/li&gt;
&lt;li&gt;A public GitHub repository with a working web app and Dockerfile,
&lt;strong&gt;or&lt;/strong&gt; a container image in Amazon ECR (Elastic Container Registry)&lt;/li&gt;
&lt;li&gt;Basic knowledge of Docker and containers&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Deployment Guide
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🚀 Deploying a Sample App with App Runner (Using GitHub)
&lt;/h3&gt;

&lt;p&gt;Let's deploy a simple Node.js or Python Flask app (you can use any stack, really) via GitHub:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sample Application:&lt;/strong&gt; &lt;a href="https://github.com/parimal5/AWS-App-Runner" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Open AWS App Runner Console
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcg74d6t75ys9074ive3m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcg74d6t75ys9074ive3m.png" alt="AWS App Runner Console Dashboard" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to the &lt;a href="https://console.aws.amazon.com/apprunner/" rel="noopener noreferrer"&gt;App Runner console&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2: Create Service
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb15e48h45yw4nqp7rreu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb15e48h45yw4nqp7rreu.png" alt="Create App Runner Service" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;strong&gt;"Create service"&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;"Source code repository"&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;GitHub&lt;/strong&gt; and connect your account&lt;/li&gt;
&lt;li&gt;Pick your repository and branch&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 3: Configure Build &amp;amp; Deploy
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foarrjoqnfwe0s2rtmkwe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foarrjoqnfwe0s2rtmkwe.png" alt="Configure Build &amp;amp; Deploy Settings" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose the &lt;strong&gt;runtime&lt;/strong&gt; that matches your application&lt;/li&gt;
&lt;li&gt;Enter the &lt;strong&gt;build command&lt;/strong&gt; and &lt;strong&gt;start command&lt;/strong&gt; for your application&lt;/li&gt;
&lt;li&gt;Set port (e.g., &lt;code&gt;5000&lt;/code&gt; for Flask, &lt;code&gt;3000&lt;/code&gt; for Node.js)&lt;/li&gt;
&lt;li&gt;(Optional) Add environment variables if needed&lt;/li&gt;
&lt;/ul&gt;
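&lt;p&gt;Instead of entering these settings in the console, App Runner can also read them from an &lt;code&gt;apprunner.yaml&lt;/code&gt; file at the root of your repository. Here's a minimal sketch for a Flask app listening on port 5000 (the build and start commands are illustrative, so match them to your own project):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;version: 1.0
runtime: python3
build:
  commands:
    build:
      - pip install -r requirements.txt
run:
  command: python app.py
  network:
    port: 5000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If the file is present, choose the option to use a configuration file in this step and App Runner will pick these values up automatically.&lt;/p&gt;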

&lt;h3&gt;
  
  
  Step 4: Service Settings
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5rfu0nfp4wjm8bf8dat.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5rfu0nfp4wjm8bf8dat.png" alt="App Runner Service Settings" width="800" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set the service name&lt;/li&gt;
&lt;li&gt;Choose CPU and memory settings&lt;/li&gt;
&lt;li&gt;Configure auto scaling&lt;/li&gt;
&lt;li&gt;Configure observability (optional)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 5: Deploy!
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Review and click &lt;strong&gt;"Create &amp;amp; deploy"&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Wait for App Runner to build and deploy your app&lt;/li&gt;
&lt;li&gt;Once deployed, you'll get a public HTTPS URL 🎉&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjhw912sza2gtm4stwfic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjhw912sza2gtm4stwfic.png" alt="App Runner Deployment Success" width="800" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Observability &amp;amp; Scaling
&lt;/h2&gt;

&lt;p&gt;App Runner provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Integrated CloudWatch Logs&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Health checks&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-scaling&lt;/strong&gt; based on requests per second&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metrics&lt;/strong&gt; like CPU, memory usage, and request counts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can monitor your app without additional tooling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing Overview
&lt;/h2&gt;

&lt;p&gt;💸 App Runner charges based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Compute time&lt;/strong&gt; (vCPU + memory usage)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Requests served&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; A small app with light traffic typically costs only a few dollars a month. Rates vary by region, so check the current pricing before committing.&lt;/p&gt;

&lt;p&gt;For details: &lt;a href="https://aws.amazon.com/apprunner/pricing/" rel="noopener noreferrer"&gt;App Runner pricing&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Pros &amp;amp; Limitations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  👍 Pros
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;No infrastructure setup&lt;/li&gt;
&lt;li&gt;Auto SSL and HTTPS&lt;/li&gt;
&lt;li&gt;Integrated CI/CD&lt;/li&gt;
&lt;li&gt;Fast and developer-friendly&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  👎 Limitations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Limited to request-driven HTTP apps (not suitable for background jobs or long-running workers)&lt;/li&gt;
&lt;li&gt;No fine-grained networking control (compared to ECS/VPC)&lt;/li&gt;
&lt;li&gt;Limited AWS region support&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;AWS App Runner is a great fit for teams that want to focus on writing code, not managing infrastructure. Whether you're spinning up a side project, an internal tool, or even a small production service, it gets you from GitHub to a live URL in minutes.&lt;/p&gt;

&lt;p&gt;It's not a one-size-fits-all solution (you'll want ECS or Kubernetes for complex workflows), but for many use cases, it's just right.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/apprunner/" rel="noopener noreferrer"&gt;📚 App Runner Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/apprunner/latest/dg/service-source-code.html" rel="noopener noreferrer"&gt;🔗 Deploying from GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/apprunner/latest/dg/service-source-image.html" rel="noopener noreferrer"&gt;📦 Deploying from Amazon ECR&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Made with ❤️ for the developer community&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Found this helpful? Give it a ⭐ and share with your fellow developers!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>webdev</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Setting Up a Multi-Node Kubernetes Cluster with Kind on Windows</title>
      <dc:creator>Parimal </dc:creator>
      <pubDate>Tue, 29 Jul 2025 08:49:33 +0000</pubDate>
      <link>https://forem.com/parimal5/setting-up-a-multi-node-kubernetes-cluster-with-kind-on-windows-5fbb</link>
      <guid>https://forem.com/parimal5/setting-up-a-multi-node-kubernetes-cluster-with-kind-on-windows-5fbb</guid>
      <description>&lt;p&gt;Want to test Kubernetes workloads locally without the complexity of cloud setup? Kind (Kubernetes in Docker) is your answer. In this guide, I'll show you how to install Kind on Windows and create a multi-node cluster that's perfect for local development and testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You'll Achieve
&lt;/h2&gt;

&lt;p&gt;By the end of this tutorial, you'll have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Kind properly installed on Windows with WSL2&lt;/li&gt;
&lt;li&gt;✅ A working multi-node Kubernetes cluster (1 control-plane + 2 workers)&lt;/li&gt;
&lt;li&gt;✅ kubectl configured to manage your cluster&lt;/li&gt;
&lt;li&gt;✅ A solid foundation for Kubernetes experimentation&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why Choose Kind?
&lt;/h2&gt;

&lt;p&gt;Kind (Kubernetes IN Docker) stands out for local development because it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Starts fast&lt;/strong&gt; - Clusters spin up in under 2 minutes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runs lightweight&lt;/strong&gt; - Uses Docker containers instead of heavy VMs
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supports multi-node&lt;/strong&gt; - Test realistic cluster scenarios locally&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Matches production&lt;/strong&gt; - Runs actual Kubernetes, not a simulation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrates easily&lt;/strong&gt; - Works seamlessly with your existing Docker workflow&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we start, ensure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Windows 10/11&lt;/strong&gt; with WSL2 enabled&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Desktop&lt;/strong&gt; installed and running&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chocolatey&lt;/strong&gt; package manager&lt;/li&gt;
&lt;li&gt;Basic familiarity with command line&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 1: Install Chocolatey (If Not Already Installed)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://chocolatey.org/install" rel="noopener noreferrer"&gt;Chocolatey&lt;/a&gt; makes installing tools on Windows much easier. Run this in PowerShell as Administrator:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;Set-ExecutionPolicy&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Bypass&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Scope&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Process&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Force&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;System.Net.ServicePointManager&lt;/span&gt;&lt;span class="p"&gt;]::&lt;/span&gt;&lt;span class="n"&gt;SecurityProtocol&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;System.Net.ServicePointManager&lt;/span&gt;&lt;span class="p"&gt;]::&lt;/span&gt;&lt;span class="n"&gt;SecurityProtocol&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-bor&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;3072&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;iex&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;New-Object&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;System.Net.WebClient&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;DownloadString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'https://community.chocolatey.org/install.ps1'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;choco &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 2: Configure WSL2
&lt;/h2&gt;

&lt;p&gt;Ensure WSL2 is your default version for optimal Docker performance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Set WSL2 as default&lt;/span&gt;
wsl &lt;span class="nt"&gt;--set-default-version&lt;/span&gt; 2

&lt;span class="c"&gt;# Verify WSL2 is working&lt;/span&gt;
wsl &lt;span class="nt"&gt;--status&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 3: Install Docker Desktop
&lt;/h2&gt;

&lt;p&gt;Install Docker Desktop with Chocolatey:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;choco &lt;span class="nb"&gt;install &lt;/span&gt;docker-desktop &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; During setup, make sure to: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable WSL2 integration in Docker Desktop settings&lt;/li&gt;
&lt;li&gt;Restart Docker Desktop after configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Verify Docker is working:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nt"&gt;--version&lt;/span&gt;
docker run hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 4: Install kubectl and Kind
&lt;/h2&gt;

&lt;p&gt;Install both tools using Chocolatey:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install kubectl for cluster management&lt;/span&gt;
choco &lt;span class="nb"&gt;install &lt;/span&gt;kubernetes-cli &lt;span class="nt"&gt;-y&lt;/span&gt;

&lt;span class="c"&gt;# Install Kind&lt;/span&gt;
choco &lt;span class="nb"&gt;install &lt;/span&gt;kind &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify installations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl version &lt;span class="nt"&gt;--client&lt;/span&gt;
kind version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 5: Create Your First Multi-Node Cluster
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Create a Cluster Configuration
&lt;/h3&gt;

&lt;p&gt;Create a file named &lt;code&gt;kind-config.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cluster&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind.x-k8s.io/v1alpha4&lt;/span&gt;
&lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;control-plane&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simple configuration creates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 control-plane node (manages the cluster)&lt;/li&gt;
&lt;li&gt;2 worker nodes (run your applications)&lt;/li&gt;
&lt;/ul&gt;
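&lt;p&gt;The config file accepts more than just node roles. For example, you can pin the node image to a specific Kubernetes version and map a host port to a NodePort on the control-plane node. The version tag and ports below are examples, so adjust them to your needs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    image: kindest/node:v1.28.0
    extraPortMappings:
      - containerPort: 30080
        hostPort: 8080
        protocol: TCP
  - role: worker
  - role: worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this in place, a NodePort service listening on 30080 inside the cluster becomes reachable at &lt;code&gt;localhost:8080&lt;/code&gt; on your machine.&lt;/p&gt;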

&lt;h3&gt;
  
  
  Launch the Cluster
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind create cluster &lt;span class="nt"&gt;--name&lt;/span&gt; my-cluster &lt;span class="nt"&gt;--config&lt;/span&gt; kind-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pull Kubernetes node images&lt;/li&gt;
&lt;li&gt;Create Docker containers for each node&lt;/li&gt;
&lt;li&gt;Set up networking between nodes&lt;/li&gt;
&lt;li&gt;Configure kubectl to connect to your cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Wait time:&lt;/strong&gt; Usually 1-2 minutes depending on your internet speed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 6: Verify Your Cluster
&lt;/h2&gt;

&lt;p&gt;Check that everything is working:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# View cluster information&lt;/span&gt;
kubectl cluster-info

&lt;span class="c"&gt;# List all nodes&lt;/span&gt;
kubectl get nodes

&lt;span class="c"&gt;# Check system pods are running&lt;/span&gt;
kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expected output for &lt;code&gt;kubectl get nodes&lt;/code&gt;:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                       STATUS   ROLES           AGE   VERSION
my-cluster-control-plane   Ready    control-plane   2m    v1.28.0
my-cluster-worker          Ready    &amp;lt;none&amp;gt;          1m    v1.28.0
my-cluster-worker2         Ready    &amp;lt;none&amp;gt;          1m    v1.28.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Quick Test: Deploy a Simple Pod
&lt;/h2&gt;

&lt;p&gt;Let's confirm your cluster works by running a simple nginx pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a test pod&lt;/span&gt;
kubectl run test-nginx &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80

&lt;span class="c"&gt;# Check the pod is running&lt;/span&gt;
kubectl get pods

&lt;span class="c"&gt;# Clean up&lt;/span&gt;
kubectl delete pod test-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
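&lt;p&gt;The same test pod can also be written declaratively. Here's a minimal sketch of an equivalent manifest; save it as &lt;code&gt;test-nginx.yaml&lt;/code&gt; and apply it with &lt;code&gt;kubectl apply -f test-nginx.yaml&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: test-nginx
  labels:
    app: test-nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Manifests like this are easier to version control than one-off &lt;code&gt;kubectl run&lt;/code&gt; commands, which pays off as your experiments grow.&lt;/p&gt;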






&lt;h2&gt;
  
  
  Essential Kind Commands
&lt;/h2&gt;

&lt;p&gt;Here are the key commands you'll use regularly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List all Kind clusters&lt;/span&gt;
kind get clusters

&lt;span class="c"&gt;# Delete a cluster&lt;/span&gt;
kind delete cluster &lt;span class="nt"&gt;--name&lt;/span&gt; my-cluster

&lt;span class="c"&gt;# Create cluster with different name&lt;/span&gt;
kind create cluster &lt;span class="nt"&gt;--name&lt;/span&gt; dev-cluster

&lt;span class="c"&gt;# Switch between clusters (if you have multiple)&lt;/span&gt;
kubectl config get-contexts
kubectl config use-context kind-my-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Troubleshooting Common Issues
&lt;/h2&gt;

&lt;h3&gt;
  
  
  "Cannot connect to Docker daemon"
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Ensure Docker Desktop is running&lt;/li&gt;
&lt;li&gt;Check WSL2 integration is enabled in Docker settings&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cluster creation hangs
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clean up and retry&lt;/span&gt;
kind delete cluster &lt;span class="nt"&gt;--name&lt;/span&gt; my-cluster
docker system prune &lt;span class="nt"&gt;-f&lt;/span&gt;
kind create cluster &lt;span class="nt"&gt;--name&lt;/span&gt; my-cluster &lt;span class="nt"&gt;--config&lt;/span&gt; kind-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  kubectl context issues
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check current context&lt;/span&gt;
kubectl config current-context

&lt;span class="c"&gt;# Should show: kind-my-cluster&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;Now that you have Kind installed and a working cluster, you're ready to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy applications and test them locally&lt;/li&gt;
&lt;li&gt;Experiment with Kubernetes features safely&lt;/li&gt;
&lt;li&gt;Set up CI/CD pipelines&lt;/li&gt;
&lt;li&gt;Learn about networking, storage, and security&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In upcoming posts, I'll cover deploying applications, setting up ingress controllers, and advanced Kind configurations.&lt;/p&gt;




&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;

&lt;p&gt;When you're done experimenting, clean up your resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Delete the cluster&lt;/span&gt;
kind delete cluster &lt;span class="nt"&gt;--name&lt;/span&gt; my-cluster

&lt;span class="c"&gt;# Verify cleanup&lt;/span&gt;
kind get clusters
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Got Kind up and running?&lt;/strong&gt; You now have a powerful local Kubernetes environment at your fingertips! What's the first thing you're planning to deploy on your new cluster? Let me know in the comments below.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
