<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Naveen Jayachandran</title>
    <description>The latest articles on Forem by Naveen Jayachandran (@naveen_jayachandran).</description>
    <link>https://forem.com/naveen_jayachandran</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3587795%2Ff1263272-780e-4776-83c4-ac9616f652b4.png</url>
      <title>Forem: Naveen Jayachandran</title>
      <link>https://forem.com/naveen_jayachandran</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/naveen_jayachandran"/>
    <language>en</language>
    <item>
      <title>Kubernetes – Creating a ReplicaSet</title>
      <dc:creator>Naveen Jayachandran</dc:creator>
      <pubDate>Mon, 03 Nov 2025 17:29:11 +0000</pubDate>
      <link>https://forem.com/naveen_jayachandran/kubernetes-creating-a-replicaset-3146</link>
      <guid>https://forem.com/naveen_jayachandran/kubernetes-creating-a-replicaset-3146</guid>
      <description>&lt;p&gt;A ReplicaSet is a core Kubernetes controller designed to ensure that a specified number of identical Pods, called replicas, are running at all times. It serves as a self-healing mechanism — if any Pod fails, crashes, or is accidentally deleted, the ReplicaSet automatically creates a replacement to maintain the desired count. This guarantees high availability, scalability, and reliability for applications running in Kubernetes.&lt;/p&gt;

&lt;p&gt;Purpose of a ReplicaSet&lt;br&gt;
The main objectives of a ReplicaSet are to maintain application stability, availability, and scalability.&lt;/p&gt;

&lt;p&gt;High Availability: A ReplicaSet maintains a consistent number of running Pods. Even if a node or Pod fails, others remain available to serve traffic, ensuring zero downtime.&lt;/p&gt;

&lt;p&gt;Load Balancing: When used with a Kubernetes Service, a ReplicaSet distributes traffic evenly across all its Pods. As replicas scale up or down, the Service dynamically adjusts to maintain balanced traffic distribution.&lt;/p&gt;
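&lt;p&gt;As a sketch (the Service name is illustrative), a minimal Service that load-balances across Pods labeled app: nginx-rs-pod — the label used in the ReplicaSet manifest below — could look like:&lt;/p&gt;

&lt;p&gt;apiVersion: v1&lt;br&gt;
kind: Service&lt;br&gt;
metadata:&lt;br&gt;
  name: nginx-rs-service&lt;br&gt;
spec:&lt;br&gt;
  selector:&lt;br&gt;
    app: nginx-rs-pod&lt;br&gt;
  ports:&lt;br&gt;
  - port: 80&lt;br&gt;
    targetPort: 80&lt;/p&gt;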

&lt;p&gt;Scalability: You can easily adjust the number of replicas by modifying the replicas field in the ReplicaSet specification. The controller automatically creates or removes Pods to match the updated count.&lt;/p&gt;

&lt;p&gt;How ReplicaSets Improved Over Replication Controllers&lt;br&gt;
ReplicaSets are the modern replacement for the older Replication Controller. The key improvement lies in label selectors:&lt;/p&gt;

&lt;p&gt;Replication Controller: Uses equality-based selectors, matching Pods with exact key-value label pairs (e.g., app: frontend). This is quite restrictive.&lt;/p&gt;

&lt;p&gt;ReplicaSet: Uses set-based selectors, allowing more expressive selection logic. For example, it can select Pods where a label value exists or belongs to a specific set of values.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;br&gt;
matchExpressions:&lt;br&gt;
- key: environment&lt;br&gt;
  operator: In&lt;br&gt;
  values:&lt;br&gt;
  - production&lt;br&gt;
  - qa&lt;br&gt;
This allows a ReplicaSet to manage Pods with labels environment=production or environment=qa.&lt;/p&gt;

&lt;p&gt;Example: ReplicaSet Manifest&lt;br&gt;
apiVersion: apps/v1&lt;br&gt;
kind: ReplicaSet&lt;br&gt;
metadata:&lt;br&gt;
  name: nginx-replicaset&lt;br&gt;
spec:&lt;br&gt;
  replicas: 2&lt;br&gt;
  selector:&lt;br&gt;
    matchLabels:&lt;br&gt;
      app: nginx-rs-pod&lt;br&gt;
    matchExpressions:&lt;br&gt;
    - key: env&lt;br&gt;
      operator: In&lt;br&gt;
      values:&lt;br&gt;
      - dev&lt;br&gt;
  template:&lt;br&gt;
    metadata:&lt;br&gt;
      labels:&lt;br&gt;
        app: nginx-rs-pod&lt;br&gt;
        env: dev&lt;br&gt;
    spec:&lt;br&gt;
      containers:&lt;br&gt;
      - name: nginx&lt;br&gt;
        image: nginx&lt;br&gt;
        ports:&lt;br&gt;
        - containerPort: 80&lt;/p&gt;

&lt;p&gt;Non-Template Pod Acquisition&lt;br&gt;
A ReplicaSet can also adopt existing Pods that match its selectors, even if it didn’t originally create them. This process is called non-template Pod acquisition.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;apiVersion: apps/v1&lt;br&gt;
kind: ReplicaSet&lt;br&gt;
metadata:&lt;br&gt;
  name: first-replicaset&lt;br&gt;
spec:&lt;br&gt;
  selector:&lt;br&gt;
    matchLabels:&lt;br&gt;
      app: web-app&lt;br&gt;
  replicas: 5&lt;/p&gt;

&lt;p&gt;Any Pod with the label app: web-app will be managed by this ReplicaSet.&lt;/p&gt;

&lt;p&gt;Working with ReplicaSets&lt;br&gt;
Step 1: Create the YAML File&lt;br&gt;
Define your ReplicaSet with desired configurations such as the number of replicas, labels, and container specifications.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;apiVersion: apps/v1&lt;br&gt;
kind: ReplicaSet&lt;br&gt;
metadata:&lt;br&gt;
  name: my-replicaset&lt;br&gt;
spec:&lt;br&gt;
  replicas: 3&lt;br&gt;
  selector:&lt;br&gt;
    matchLabels:&lt;br&gt;
      app: my-app&lt;br&gt;
  template:&lt;br&gt;
    metadata:&lt;br&gt;
      labels:&lt;br&gt;
        app: my-app&lt;br&gt;
    spec:&lt;br&gt;
      containers:&lt;br&gt;
      - name: my-container&lt;br&gt;
        image: my-app:latest&lt;br&gt;
        ports:&lt;br&gt;
        - containerPort: 80&lt;/p&gt;

&lt;p&gt;Step 2: Create the ReplicaSet&lt;br&gt;
kubectl create -f replicaset.yaml&lt;/p&gt;

&lt;p&gt;Step 3: Verify Creation&lt;br&gt;
kubectl get replicasets&lt;/p&gt;

&lt;p&gt;Step 4: View Details&lt;br&gt;
kubectl describe replicaset my-replicaset&lt;/p&gt;

&lt;p&gt;Deleting ReplicaSets and Pods&lt;/p&gt;

&lt;p&gt;Delete a ReplicaSet&lt;br&gt;
kubectl delete rs &amp;lt;replicaset-name&amp;gt;&lt;br&gt;
This removes the ReplicaSet and all managed Pods.&lt;/p&gt;

&lt;p&gt;Delete Pods Independently&lt;br&gt;
kubectl delete pods --selector &amp;lt;key&amp;gt;=&amp;lt;value&amp;gt;&lt;br&gt;
You can delete specific Pods without deleting the ReplicaSet. The ReplicaSet will recreate them to maintain the replica count.&lt;/p&gt;

&lt;p&gt;Isolating Pods from a ReplicaSet&lt;br&gt;
To exclude a Pod from ReplicaSet management, change its label so it no longer matches the ReplicaSet’s selector.&lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;p&gt;List Pods: kubectl get pods&lt;/p&gt;

&lt;p&gt;Edit the Pod’s labels: kubectl edit pod &amp;lt;pod-name&amp;gt;&lt;/p&gt;

&lt;p&gt;Apply the updated Pod configuration: kubectl apply -f &amp;lt;pod-name&amp;gt;.yaml&lt;/p&gt;
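&lt;p&gt;For illustration, if the ReplicaSet selects app: my-app (as in the example above), changing that label in the Pod’s metadata is enough to detach it (the new label value here is hypothetical):&lt;/p&gt;

&lt;p&gt;metadata:&lt;br&gt;
  labels:&lt;br&gt;
    app: my-app-debug  # no longer matches the ReplicaSet selector app: my-app&lt;/p&gt;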

&lt;p&gt;Scaling a ReplicaSet&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Manual Scaling
You can scale a ReplicaSet manually using:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;kubectl scale rs &amp;lt;replicaset-name&amp;gt; --replicas=5&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Automatic Scaling with HPA
You can attach a Horizontal Pod Autoscaler (HPA) to automatically adjust replicas based on metrics like CPU utilization.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Example (the ReplicaSet plus a separate HorizontalPodAutoscaler object that targets it; the HPA name is illustrative):&lt;/p&gt;

&lt;p&gt;apiVersion: apps/v1&lt;br&gt;
kind: ReplicaSet&lt;br&gt;
metadata:&lt;br&gt;
  name: mavenwebapprc&lt;br&gt;
  namespace: test-ns&lt;br&gt;
spec:&lt;br&gt;
  replicas: 2&lt;br&gt;
  selector:&lt;br&gt;
    matchLabels:&lt;br&gt;
      app: mavenwebapp&lt;br&gt;
  template:&lt;br&gt;
    metadata:&lt;br&gt;
      labels:&lt;br&gt;
        app: mavenwebapp&lt;br&gt;
    spec:&lt;br&gt;
      containers:&lt;br&gt;
      - name: mavenwebapp&lt;br&gt;
        image: dockerhandson/maven-web-application:1&lt;br&gt;
        ports:&lt;br&gt;
        - containerPort: 8080&lt;/p&gt;

&lt;p&gt;apiVersion: autoscaling/v2&lt;br&gt;
kind: HorizontalPodAutoscaler&lt;br&gt;
metadata:&lt;br&gt;
  name: mavenwebapp-hpa&lt;br&gt;
  namespace: test-ns&lt;br&gt;
spec:&lt;br&gt;
  scaleTargetRef:&lt;br&gt;
    apiVersion: apps/v1&lt;br&gt;
    kind: ReplicaSet&lt;br&gt;
    name: mavenwebapprc&lt;br&gt;
  minReplicas: 2&lt;br&gt;
  maxReplicas: 4&lt;br&gt;
  metrics:&lt;br&gt;
  - type: Resource&lt;br&gt;
    resource:&lt;br&gt;
      name: cpu&lt;br&gt;
      target:&lt;br&gt;
        type: Utilization&lt;br&gt;
        averageUtilization: 80&lt;/p&gt;

&lt;p&gt;When average CPU utilization reaches 80%, the HPA scales the replicas up to the configured maximum limit (here, four Pods).&lt;/p&gt;

&lt;p&gt;Difference Between ReplicaSet and ReplicationController&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Feature&lt;/th&gt;&lt;th&gt;ReplicaSet&lt;/th&gt;&lt;th&gt;ReplicationController&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Purpose&lt;/td&gt;&lt;td&gt;Modern controller ensuring the desired number of Pods are running&lt;/td&gt;&lt;td&gt;Older mechanism for managing Pod lifecycles&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Selector Type&lt;/td&gt;&lt;td&gt;Supports set-based selectors&lt;/td&gt;&lt;td&gt;Supports only equality-based selectors&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Flexibility&lt;/td&gt;&lt;td&gt;More expressive and powerful matching logic&lt;/td&gt;&lt;td&gt;Limited matching capabilities&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Status&lt;/td&gt;&lt;td&gt;Successor to ReplicationController&lt;/td&gt;&lt;td&gt;Deprecated for most use cases&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Difference Between ReplicaSet and DaemonSet&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Feature&lt;/th&gt;&lt;th&gt;ReplicaSet&lt;/th&gt;&lt;th&gt;DaemonSet&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Pod Distribution&lt;/td&gt;&lt;td&gt;Ensures a fixed number of Pods run across the cluster&lt;/td&gt;&lt;td&gt;Ensures one Pod runs on each node&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Use Case&lt;/td&gt;&lt;td&gt;Best for stateless applications like web servers&lt;/td&gt;&lt;td&gt;Ideal for node-level system services like log collectors or monitoring agents&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Pod Replacement&lt;/td&gt;&lt;td&gt;Recreates a Pod when one is deleted&lt;/td&gt;&lt;td&gt;Automatically deploys a Pod to every new node in the cluster&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Scaling&lt;/td&gt;&lt;td&gt;Manually or automatically scaled&lt;/td&gt;&lt;td&gt;One Pod per node by design&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Summary&lt;br&gt;
A ReplicaSet is the foundation of Kubernetes scalability and reliability. It ensures your applications stay highly available by maintaining the right number of Pods. While Deployments are typically used to manage ReplicaSets (for rolling updates and version control), understanding ReplicaSets helps you grasp how Kubernetes ensures continuous and consistent application availability.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>azure</category>
      <category>devops</category>
    </item>
    <item>
      <title>Kubernetes Deployment</title>
      <dc:creator>Naveen Jayachandran</dc:creator>
      <pubDate>Mon, 03 Nov 2025 17:28:04 +0000</pubDate>
      <link>https://forem.com/naveen_jayachandran/kubernetes-deployment-1g5p</link>
      <guid>https://forem.com/naveen_jayachandran/kubernetes-deployment-1g5p</guid>
      <description>&lt;p&gt;A Kubernetes Deployment is a higher-level abstraction used to manage and scale containerized applications while ensuring they remain in the desired operational state. It provides a declarative way to specify how many Pods should run, which container images they should use, and how updates or rollbacks should occur — all without downtime.&lt;/p&gt;

&lt;p&gt;Key Capabilities of a Deployment&lt;br&gt;
With a Deployment, you can:&lt;/p&gt;

&lt;p&gt;Scale applications dynamically based on workload.&lt;/p&gt;

&lt;p&gt;Maintain availability by ensuring the specified number of Pods are always healthy and running.&lt;/p&gt;

&lt;p&gt;Perform rolling updates to deploy new versions seamlessly.&lt;/p&gt;

&lt;p&gt;Rollback easily if a deployment introduces issues.&lt;/p&gt;

&lt;p&gt;Automate self-healing, ensuring that failed Pods are recreated automatically.&lt;/p&gt;

&lt;p&gt;Think of a Deployment as both a blueprint and a controller for Pods — it simplifies and automates most aspects of application lifecycle management in Kubernetes.&lt;/p&gt;

&lt;p&gt;Common Use Cases&lt;br&gt;
Kubernetes Deployments are widely used for managing application lifecycles. Common scenarios include:&lt;/p&gt;

&lt;p&gt;Rolling out new applications: Create a Deployment that launches a ReplicaSet, which in turn provisions Pods. You can monitor rollout progress using deployment status commands.&lt;/p&gt;

&lt;p&gt;Seamless application updates: Modify the PodTemplateSpec to trigger a new ReplicaSet. The Deployment automatically scales up the new version while gradually scaling down the old one — ensuring zero downtime.&lt;/p&gt;

&lt;p&gt;Rollback to previous versions: If an update introduces instability, roll back to an earlier revision easily.&lt;/p&gt;

&lt;p&gt;Dynamic scaling: Adjust the replica count manually or automatically using autoscalers to handle traffic fluctuations.&lt;/p&gt;

&lt;p&gt;Pausing and resuming rollouts: Pause a rollout to batch multiple updates together and resume when ready.&lt;/p&gt;

&lt;p&gt;Monitoring rollout progress: Check rollout status to confirm whether updates are progressing smoothly or stuck.&lt;/p&gt;

&lt;p&gt;Resource cleanup: Automatically remove obsolete ReplicaSets to maintain cluster efficiency.&lt;/p&gt;

&lt;p&gt;Core Components of a Deployment&lt;br&gt;
A Kubernetes Deployment consists of three key parts:&lt;/p&gt;

&lt;p&gt;Metadata: Includes the name and labels. Labels establish relationships between Deployments, ReplicaSets, and Services.&lt;/p&gt;

&lt;p&gt;Specification (spec): Defines:&lt;/p&gt;

&lt;p&gt;Number of replicas (Pods)&lt;/p&gt;

&lt;p&gt;Selector labels&lt;/p&gt;

&lt;p&gt;Pod template (template) The Pod template includes container specifications such as:&lt;/p&gt;

&lt;p&gt;Container name&lt;/p&gt;

&lt;p&gt;Image to use&lt;/p&gt;

&lt;p&gt;Ports to expose&lt;/p&gt;

&lt;p&gt;Resource limits (CPU, memory)&lt;/p&gt;

&lt;p&gt;Status: Automatically maintained by Kubernetes. It reflects the current state of the Deployment and enables self-healing. If the actual and desired states differ, Kubernetes reconciles them automatically.&lt;/p&gt;

&lt;p&gt;Example: Nginx Deployment YAML&lt;br&gt;
apiVersion: apps/v1&lt;br&gt;
kind: Deployment&lt;br&gt;
metadata:&lt;br&gt;
  name: nginx&lt;br&gt;
spec:&lt;br&gt;
  selector:&lt;br&gt;
    matchLabels:&lt;br&gt;
      app: nginx&lt;br&gt;
  template:&lt;br&gt;
    metadata:&lt;br&gt;
      labels:&lt;br&gt;
        app: nginx&lt;br&gt;
    spec:&lt;br&gt;
      containers:&lt;br&gt;
      - name: nginx&lt;br&gt;
        image: nginx&lt;br&gt;
        resources:&lt;br&gt;
          limits:&lt;br&gt;
            memory: "128Mi"&lt;br&gt;
            cpu: "500m"&lt;br&gt;
        ports:&lt;br&gt;
        - containerPort: 80&lt;br&gt;
Commands:&lt;/p&gt;

&lt;p&gt;# Create the Deployment&lt;br&gt;
kubectl apply -f nginx.yaml&lt;/p&gt;

&lt;p&gt;# Check status&lt;br&gt;
kubectl get all&lt;br&gt;
This will successfully create and deploy an Nginx application on your cluster.&lt;/p&gt;

&lt;p&gt;Updating a Deployment&lt;br&gt;
You can update Deployments in two main ways:&lt;/p&gt;

&lt;p&gt;Method 1: Edit Using kubectl&lt;br&gt;
kubectl edit deployment &amp;lt;deployment-name&amp;gt;&lt;br&gt;
This opens the configuration in your terminal. Make the changes (press i to insert), then save and exit (Esc + :wq).&lt;/p&gt;

&lt;p&gt;Method 2: Update the YAML File&lt;br&gt;
Edit the YAML file directly (e.g., change container port from 80 to 8000), and reapply:&lt;/p&gt;

&lt;p&gt;kubectl apply -f nginx.yaml&lt;/p&gt;

&lt;p&gt;Rolling Back a Deployment&lt;br&gt;
If an update causes problems, you can easily revert to a previous version.&lt;/p&gt;

&lt;p&gt;Steps:&lt;br&gt;
&lt;strong&gt;List all revisions:&lt;/strong&gt; kubectl rollout history deployment &amp;lt;deployment-name&amp;gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rollback to a previous revision:&lt;/strong&gt; kubectl rollout undo deployment/nginx-deployment --to-revision=1&lt;/p&gt;

&lt;p&gt;Validate: Always test your rollback strategy to ensure minimal downtime during real incidents.&lt;/p&gt;

&lt;p&gt;Viewing Rollout History&lt;br&gt;
kubectl rollout history deployment/&amp;lt;deployment-name&amp;gt;&lt;br&gt;
To view details for a specific revision:&lt;/p&gt;

&lt;p&gt;kubectl rollout history deployment/web-app-deployment --revision=3&lt;br&gt;
This helps track configuration changes and revert if needed.&lt;/p&gt;

&lt;p&gt;Scaling a Deployment&lt;br&gt;
You can scale Deployments manually or automatically:&lt;/p&gt;

&lt;p&gt;Manual Scaling:&lt;br&gt;
kubectl scale deployment/tomcat-deployment --replicas=5&lt;br&gt;
Autoscaling:&lt;br&gt;
kubectl autoscale deployment/tomcat-deployment --min=5 --max=8 --cpu-percent=75&lt;br&gt;
Here:&lt;/p&gt;

&lt;p&gt;--min=5: Minimum Pods always running&lt;/p&gt;

&lt;p&gt;--max=8: Maximum Pods during high load&lt;/p&gt;

&lt;p&gt;--cpu-percent=75: Scaling threshold based on CPU usage&lt;/p&gt;
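&lt;p&gt;The same policy can also be expressed declaratively; a sketch of the equivalent HorizontalPodAutoscaler object (its name is illustrative):&lt;/p&gt;

&lt;p&gt;apiVersion: autoscaling/v2&lt;br&gt;
kind: HorizontalPodAutoscaler&lt;br&gt;
metadata:&lt;br&gt;
  name: tomcat-hpa&lt;br&gt;
spec:&lt;br&gt;
  scaleTargetRef:&lt;br&gt;
    apiVersion: apps/v1&lt;br&gt;
    kind: Deployment&lt;br&gt;
    name: tomcat-deployment&lt;br&gt;
  minReplicas: 5&lt;br&gt;
  maxReplicas: 8&lt;br&gt;
  metrics:&lt;br&gt;
  - type: Resource&lt;br&gt;
    resource:&lt;br&gt;
      name: cpu&lt;br&gt;
      target:&lt;br&gt;
        type: Utilization&lt;br&gt;
        averageUtilization: 75&lt;/p&gt;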

&lt;p&gt;Pausing and Resuming a Rollout&lt;br&gt;
Pause a rollout:&lt;br&gt;
kubectl rollout pause deployment/webapp-deployment&lt;br&gt;
Resume the rollout:&lt;br&gt;
kubectl rollout resume deployment/webapp-deployment&lt;br&gt;
You can also update the container image during a paused rollout:&lt;/p&gt;

&lt;p&gt;kubectl set image deployment/webapp-deployment webapp=webapp:2.1&lt;/p&gt;

&lt;p&gt;Deployment Status Phases&lt;br&gt;
Kubernetes reports various statuses for a Deployment:&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Status&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Pending&lt;/td&gt;&lt;td&gt;Deployment is initializing or waiting for resources.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Progressing&lt;/td&gt;&lt;td&gt;Deployment is rolling out changes or creating ReplicaSets.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Succeeded&lt;/td&gt;&lt;td&gt;Deployment completed successfully.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Failed&lt;/td&gt;&lt;td&gt;Deployment failed due to configuration or environment errors.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Unknown&lt;/td&gt;&lt;td&gt;API server cannot determine the status or the connection was lost.&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Check rollout progress:&lt;/p&gt;

&lt;p&gt;kubectl rollout status deployment/&amp;lt;deployment-name&amp;gt;&lt;/p&gt;

&lt;p&gt;Common Deployment Failures and Causes&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Failure Reason&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Failed probes&lt;/td&gt;&lt;td&gt;Readiness or liveness probe misconfigured.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Image pull errors&lt;/td&gt;&lt;td&gt;Incorrect image name or tag.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Insufficient resources&lt;/td&gt;&lt;td&gt;Resource quota limits exceeded.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Dependency issues&lt;/td&gt;&lt;td&gt;Service dependencies (like databases) unavailable.&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;For detailed debugging:&lt;/p&gt;

&lt;p&gt;kubectl describe deployment &amp;lt;deployment-name&amp;gt;&lt;br&gt;
Canary Deployments&lt;br&gt;
A Canary Deployment gradually introduces new versions of an application to a subset of users or Pods, allowing testing under real workloads before a full rollout.&lt;/p&gt;

&lt;p&gt;Example Approach:&lt;br&gt;
Deploy a new version to 50% of Pods while the rest continue serving the old version.&lt;/p&gt;

&lt;p&gt;Based on feedback or monitoring results, either:&lt;/p&gt;

&lt;p&gt;Roll out to all Pods, or&lt;/p&gt;

&lt;p&gt;Roll back to the stable version.&lt;/p&gt;

&lt;p&gt;Implementation Methods:&lt;/p&gt;

&lt;p&gt;Traffic Splitting using Istio or other service mesh tools.&lt;/p&gt;

&lt;p&gt;Blue-Green Deployment – maintain two environments (old and new) and switch traffic when ready.&lt;/p&gt;
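&lt;p&gt;Without a service mesh, a rough 50/50 canary can be approximated with two Deployments whose Pods share one label that a single Service selects; a sketch (names and labels are illustrative):&lt;/p&gt;

&lt;p&gt;# Deployment "web-stable" runs the old version; Deployment "web-canary" runs the new one.&lt;br&gt;
# Both Pod templates carry the label app: web, so this Service splits traffic&lt;br&gt;
# roughly in proportion to the replica counts of the two Deployments:&lt;br&gt;
apiVersion: v1&lt;br&gt;
kind: Service&lt;br&gt;
metadata:&lt;br&gt;
  name: web&lt;br&gt;
spec:&lt;br&gt;
  selector:&lt;br&gt;
    app: web&lt;br&gt;
  ports:&lt;br&gt;
  - port: 80&lt;/p&gt;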

&lt;p&gt;ReplicaSet vs Deployment&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;ReplicaSet&lt;/th&gt;&lt;th&gt;Deployment&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Ensures the specified number of Pods are running.&lt;/td&gt;&lt;td&gt;Manages ReplicaSets and automates the Pod lifecycle.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Does not support rolling updates or rollbacks.&lt;/td&gt;&lt;td&gt;Supports both rolling updates and rollbacks.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Handles Pods directly.&lt;/td&gt;&lt;td&gt;Handles ReplicaSets, which in turn manage Pods.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Suitable for simple, static workloads.&lt;/td&gt;&lt;td&gt;Suitable for dynamic, frequently updated applications.&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Summary&lt;br&gt;
A Kubernetes Deployment simplifies managing applications by handling updates, scaling, rollbacks, and self-healing automatically. It abstracts away the complexity of direct Pod or ReplicaSet management, enabling you to define your application’s desired state and letting Kubernetes maintain it efficiently.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>azure</category>
      <category>aws</category>
    </item>
    <item>
      <title>Kubernetes – Difference Between ReplicaSet and Replication Controller</title>
      <dc:creator>Naveen Jayachandran</dc:creator>
      <pubDate>Mon, 03 Nov 2025 17:26:57 +0000</pubDate>
      <link>https://forem.com/naveen_jayachandran/kubernetes-difference-between-replicaset-and-replication-controller-36mg</link>
      <guid>https://forem.com/naveen_jayachandran/kubernetes-difference-between-replicaset-and-replication-controller-36mg</guid>
      <description>&lt;p&gt;Overview&lt;br&gt;
Kubernetes (K8s) is an open-source container orchestration platform initially developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF). It automates the deployment, scaling, and management of containerized applications.&lt;/p&gt;

&lt;p&gt;Kubernetes is available in two major forms:&lt;/p&gt;

&lt;p&gt;Kubernetes: The full-fledged version used in production environments.&lt;/p&gt;

&lt;p&gt;Minikube: A lightweight local version used for development and testing.&lt;/p&gt;

&lt;p&gt;Replication in Kubernetes&lt;br&gt;
Replication ensures that multiple instances of an application (Pods) are running simultaneously to maintain high availability, load balancing, and scalability.&lt;/p&gt;

&lt;p&gt;Key benefits of replication include:&lt;/p&gt;

&lt;p&gt;Reliability: Prevents downtime by maintaining a desired number of Pods.&lt;/p&gt;

&lt;p&gt;Load Balancing: Distributes traffic evenly among available Pods.&lt;/p&gt;

&lt;p&gt;Auto Scaling: Dynamically adjusts the number of Pods based on workload.&lt;/p&gt;

&lt;p&gt;Replication is especially useful in microservices architectures, cloud-native applications, and mobile backend systems.&lt;/p&gt;

&lt;p&gt;Replication Controller (RC)&lt;br&gt;
The Replication Controller is the original mechanism in Kubernetes for ensuring that a specified number of Pod replicas are running at all times. Although it has now been largely replaced by ReplicaSets, it’s still valuable to understand how it works.&lt;/p&gt;

&lt;p&gt;A Replication Controller continuously monitors its Pods and automatically replaces any that fail or get deleted. It can also scale Pods up or down and supports bulk operations like updates or deletions.&lt;/p&gt;

&lt;p&gt;Key characteristics:&lt;/p&gt;

&lt;p&gt;API Version: v1&lt;/p&gt;

&lt;p&gt;Kind: ReplicationController&lt;/p&gt;

&lt;p&gt;Defines:&lt;/p&gt;

&lt;p&gt;A name&lt;/p&gt;

&lt;p&gt;A replica count&lt;/p&gt;

&lt;p&gt;A Pod template (similar to a standalone Pod definition)&lt;/p&gt;

&lt;p&gt;The selector field (optional) determines which Pods are managed. If omitted, it defaults to the Pod template’s labels.&lt;/p&gt;
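&lt;p&gt;Putting these fields together, a minimal Replication Controller manifest (names and labels are illustrative) might look like:&lt;/p&gt;

&lt;p&gt;apiVersion: v1&lt;br&gt;
kind: ReplicationController&lt;br&gt;
metadata:&lt;br&gt;
  name: web-rc&lt;br&gt;
spec:&lt;br&gt;
  replicas: 3&lt;br&gt;
  selector:&lt;br&gt;
    app: web&lt;br&gt;
  template:&lt;br&gt;
    metadata:&lt;br&gt;
      labels:&lt;br&gt;
        app: web&lt;br&gt;
    spec:&lt;br&gt;
      containers:&lt;br&gt;
      - name: nginx&lt;br&gt;
        image: nginx&lt;/p&gt;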

&lt;p&gt;ReplicaSet (RS)&lt;br&gt;
The ReplicaSet is the next-generation controller that supersedes the Replication Controller. It serves the same purpose—maintaining the desired number of Pods—but provides enhanced selector capabilities and is typically managed by Deployments for versioned rollouts.&lt;/p&gt;

&lt;p&gt;Key characteristics:&lt;/p&gt;

&lt;p&gt;API Version: apps/v1&lt;/p&gt;

&lt;p&gt;Kind: ReplicaSet&lt;/p&gt;

&lt;p&gt;Requires:&lt;/p&gt;

&lt;p&gt;A name&lt;/p&gt;

&lt;p&gt;A replica count&lt;/p&gt;

&lt;p&gt;A selector (mandatory)&lt;/p&gt;

&lt;p&gt;A Pod template&lt;/p&gt;

&lt;p&gt;The selector can use both:&lt;/p&gt;

&lt;p&gt;Match Labels: Basic equality checks.&lt;/p&gt;

&lt;p&gt;Match Expressions: Complex logical conditions (e.g., In, NotIn, Exists, DoesNotExist).&lt;/p&gt;

&lt;p&gt;Example use cases include maintaining web servers or API Pods with specific label criteria.&lt;/p&gt;
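&lt;p&gt;As a sketch of that selector syntax (the label keys and values are illustrative):&lt;/p&gt;

&lt;p&gt;selector:&lt;br&gt;
  matchLabels:&lt;br&gt;
    app: web&lt;br&gt;
  matchExpressions:&lt;br&gt;
  - key: tier&lt;br&gt;
    operator: In&lt;br&gt;
    values:&lt;br&gt;
    - frontend&lt;br&gt;
    - api&lt;/p&gt;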

&lt;p&gt;Replication Controller vs. ReplicaSet&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Feature&lt;/th&gt;&lt;th&gt;Replication Controller (RC)&lt;/th&gt;&lt;th&gt;ReplicaSet (RS)&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Definition&lt;/td&gt;&lt;td&gt;The original replication mechanism in Kubernetes.&lt;/td&gt;&lt;td&gt;The modern replacement that extends RC functionality.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;API Version&lt;/td&gt;&lt;td&gt;v1&lt;/td&gt;&lt;td&gt;apps/v1&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Selector Type&lt;/td&gt;&lt;td&gt;Supports equality-based selectors only.&lt;/td&gt;&lt;td&gt;Supports set-based selectors (matchLabels, matchExpressions).&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Rolling Updates&lt;/td&gt;&lt;td&gt;Supports the rolling-update command.&lt;/td&gt;&lt;td&gt;Does not support the rolling-update command directly.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Replacement Status&lt;/td&gt;&lt;td&gt;Deprecated in favor of ReplicaSets.&lt;/td&gt;&lt;td&gt;Recommended for use, often managed via Deployments.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Usage Recommendation&lt;/td&gt;&lt;td&gt;Legacy workloads only.&lt;/td&gt;&lt;td&gt;Use Deployments (which internally manage ReplicaSets).&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Summary&lt;br&gt;
While both Replication Controller and ReplicaSet ensure Pod availability, ReplicaSets provide more flexibility, richer selector options, and better integration with modern Kubernetes workflows.&lt;/p&gt;

&lt;p&gt;In current Kubernetes best practices:&lt;/p&gt;

&lt;p&gt;Avoid using Replication Controllers.&lt;/p&gt;

&lt;p&gt;Use Deployments, which manage ReplicaSets automatically and add capabilities like rolling updates and rollbacks.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>azure</category>
      <category>aws</category>
    </item>
    <item>
      <title>Kubernetes – Replication Controller</title>
      <dc:creator>Naveen Jayachandran</dc:creator>
      <pubDate>Mon, 03 Nov 2025 17:26:05 +0000</pubDate>
      <link>https://forem.com/naveen_jayachandran/kubernetes-replication-controller-3c00</link>
      <guid>https://forem.com/naveen_jayachandran/kubernetes-replication-controller-3c00</guid>
      <description>&lt;p&gt;A Replication Controller (RC) in Kubernetes is a core component responsible for ensuring that a specified number of Pod replicas are running at all times. Similar to a ReplicaSet, its primary function is to maintain the desired number of identical Pods — automatically creating or terminating them as necessary. This ensures high availability, fault tolerance, and scalability within a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Core Responsibilities of a Replication Controller&lt;br&gt;
Ensuring Availability If a Pod managed by an RC fails, is deleted, or the node hosting it crashes, the RC will automatically create a new Pod to replace it, maintaining the desired replica count.&lt;/p&gt;

&lt;p&gt;Scaling The number of running Pods can be increased or decreased simply by updating the replicas field in the RC’s configuration.&lt;/p&gt;

&lt;p&gt;Load Balancing When used with a Kubernetes Service, the RC ensures that traffic is evenly distributed across all active Pods, promoting efficient resource utilization.&lt;/p&gt;

&lt;p&gt;Example: Running a Replication Controller&lt;br&gt;
Step 1: Create the Definition File&lt;br&gt;
Create a file named rc-definition.yaml with the following content:&lt;/p&gt;

&lt;p&gt;apiVersion: v1&lt;br&gt;
kind: ReplicationController&lt;br&gt;
metadata:&lt;br&gt;
  name: myapp-rc&lt;br&gt;
spec:&lt;br&gt;
  replicas: 3&lt;br&gt;
  selector:&lt;br&gt;
    app: myapp&lt;br&gt;
  template:&lt;br&gt;
    metadata:&lt;br&gt;
      labels:&lt;br&gt;
        app: myapp&lt;br&gt;
    spec:&lt;br&gt;
      containers:&lt;br&gt;
      - name: nginx&lt;br&gt;
        image: nginx&lt;/p&gt;

&lt;p&gt;Explanation of the Manifest&lt;br&gt;
apiVersion: v1 – Indicates that the Replication Controller belongs to the core Kubernetes API group.&lt;/p&gt;

&lt;p&gt;kind: ReplicationController – Specifies the type of Kubernetes resource.&lt;/p&gt;

&lt;p&gt;metadata.name – Defines the name of the Replication Controller (myapp-rc).&lt;/p&gt;

&lt;p&gt;spec.replicas: 3 – Sets the desired number of running Pods.&lt;/p&gt;

&lt;p&gt;spec.selector – Uses equality-based selectors to manage Pods labeled app: myapp.&lt;/p&gt;

&lt;p&gt;spec.template – Describes the Pod template used to create new replicas; the labels here must match the selector.&lt;/p&gt;

&lt;p&gt;Step 2: Create the Replication Controller&lt;br&gt;
kubectl apply -f rc-definition.yaml&lt;br&gt;
Step 3: Verify Creation&lt;br&gt;
kubectl get rc myapp-rc&lt;br&gt;
Example Output:&lt;/p&gt;

&lt;p&gt;NAME       DESIRED   CURRENT   READY   AGE&lt;br&gt;
myapp-rc   3         3         3       15s&lt;/p&gt;

&lt;p&gt;kubectl get pods&lt;br&gt;
Example Output:&lt;/p&gt;

&lt;p&gt;NAME             READY   STATUS    RESTARTS   AGE&lt;br&gt;
myapp-rc-5j2x7   1/1     Running   0          25s&lt;br&gt;
myapp-rc-8b9vj   1/1     Running   0          25s&lt;br&gt;
myapp-rc-h4wz8   1/1     Running   0          25s&lt;br&gt;
Step 4: Test Self-Healing&lt;br&gt;
Delete one of the Pods and watch Kubernetes recreate it automatically:&lt;/p&gt;

&lt;p&gt;kubectl delete pod &amp;lt;pod-name&amp;gt;&lt;/p&gt;

&lt;p&gt;Step 5: Scale the Replicas&lt;br&gt;
To increase the number of replicas:&lt;/p&gt;

&lt;p&gt;kubectl scale rc myapp-rc --replicas=5&lt;br&gt;
Step 6: Clean Up&lt;br&gt;
When done, delete the Replication Controller and its Pods:&lt;/p&gt;

&lt;p&gt;kubectl delete rc myapp-rc&lt;br&gt;
Labels in a Replication Controller&lt;br&gt;
Labels are key-value pairs attached to Kubernetes resources. In Replication Controllers, labels play an essential role in identifying, organizing, and grouping Pods. They allow administrators to control Pod scheduling and management efficiently.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;selector:&lt;br&gt;
  app: nginx&lt;br&gt;
You can also use multiple labels in a selector, separated by commas:&lt;/p&gt;

&lt;p&gt;selector:&lt;br&gt;
  app: webapp&lt;br&gt;
  tier: frontend&lt;br&gt;
Pod Selector&lt;br&gt;
A Pod selector matches Pods based on their labels. It works on equality-based or set-based expressions and is used across various Kubernetes objects such as:&lt;/p&gt;

&lt;p&gt;ReplicationControllers&lt;/p&gt;

&lt;p&gt;ReplicaSets&lt;/p&gt;

&lt;p&gt;Deployments&lt;/p&gt;

&lt;p&gt;DaemonSets&lt;/p&gt;

&lt;p&gt;This mechanism allows controllers to monitor and manage specific groups of Pods dynamically.&lt;/p&gt;

&lt;p&gt;Responsibilities of a Replication Controller&lt;br&gt;
Ensures that the number of running Pods always matches the desired count.&lt;/p&gt;

&lt;p&gt;Creates new Pods if the running count falls short of the desired replicas.&lt;/p&gt;

&lt;p&gt;Deletes excess Pods if more than the desired count are running.&lt;/p&gt;

&lt;p&gt;Monitors Pod health continuously and replaces any failed Pods automatically.&lt;/p&gt;

&lt;p&gt;Guarantees that the application remains available and resilient against failures.&lt;/p&gt;

&lt;p&gt;Replication Controller vs ReplicaSet&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Feature&lt;/th&gt;&lt;th&gt;Replication Controller&lt;/th&gt;&lt;th&gt;ReplicaSet&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Purpose&lt;/td&gt;&lt;td&gt;Ensures the desired number of Pods are running.&lt;/td&gt;&lt;td&gt;Same purpose, but a more advanced and flexible controller.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Selector Type&lt;/td&gt;&lt;td&gt;Supports equality-based selectors only.&lt;/td&gt;&lt;td&gt;Supports set-based selectors (more expressive).&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Pod Association&lt;/td&gt;&lt;td&gt;Uses labels to associate Pods.&lt;/td&gt;&lt;td&gt;Uses label selectors to associate Pods.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Generation&lt;/td&gt;&lt;td&gt;Original replication mechanism in Kubernetes.&lt;/td&gt;&lt;td&gt;Next-generation replacement for ReplicationController.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Usage Recommendation&lt;/td&gt;&lt;td&gt;Deprecated in favor of ReplicaSets and Deployments.&lt;/td&gt;&lt;td&gt;Preferred for modern Kubernetes deployments.&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Summary&lt;br&gt;
The Replication Controller is one of the earliest mechanisms in Kubernetes for ensuring application availability and scaling. It continuously monitors and maintains the desired number of Pods, providing self-healing, scalability, and load distribution.&lt;br&gt;
However, in modern Kubernetes deployments, ReplicaSets (often managed by Deployments) have largely replaced Replication Controllers, offering greater flexibility and advanced selector capabilities.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>azure</category>
      <category>aws</category>
    </item>
    <item>
      <title>Mastering Kubernetes Deployment Strategies: The Real-World Guide for DevOps, Cloud, and SRE Engineers</title>
      <dc:creator>Naveen Jayachandran</dc:creator>
      <pubDate>Mon, 03 Nov 2025 17:25:19 +0000</pubDate>
      <link>https://forem.com/naveen_jayachandran/mastering-kubernetes-deployment-strategies-the-real-world-guide-for-devops-cloud-and-sre-2266</link>
      <guid>https://forem.com/naveen_jayachandran/mastering-kubernetes-deployment-strategies-the-real-world-guide-for-devops-cloud-and-sre-2266</guid>
      <description>&lt;p&gt;In today’s rapidly evolving DevOps landscape, Kubernetes has become the engine powering modern, scalable infrastructure. Whether you’re preparing for a DevOps, Cloud Engineer, or SRE interview, or managing large-scale systems in production, understanding Kubernetes deployment strategies is a must-have skill.&lt;/p&gt;

&lt;p&gt;Because here’s the truth:&lt;br&gt;
In production environments, simply replacing containers is a recipe for disaster. It can trigger service downtime, bug exposure, or even complete outages — all of which can damage customer trust and brand reputation.&lt;/p&gt;

&lt;p&gt;That’s why seasoned engineers rely on well-defined deployment strategies — controlled, testable, and reversible methods to roll out new versions safely.&lt;/p&gt;

&lt;p&gt;Why Deployment Strategies Matter&lt;br&gt;
A deployment strategy defines how new application versions are released and how they interact with existing versions during the rollout. In DevOps and Kubernetes contexts, the right deployment approach ensures:&lt;/p&gt;

&lt;p&gt;🔹 Minimal downtime and consistent user experience&lt;/p&gt;

&lt;p&gt;🔹 Safe feature validation before full rollout&lt;/p&gt;

&lt;p&gt;🔹 Quick rollback mechanisms in case of production failures&lt;/p&gt;

&lt;p&gt;🔹 Controlled experimentation using real-world traffic&lt;/p&gt;

&lt;p&gt;🔹 Confidence in automated delivery pipelines&lt;/p&gt;

&lt;p&gt;Essentially, these strategies form the safety net between innovation and reliability — enabling continuous delivery without compromising stability.&lt;/p&gt;

&lt;p&gt;The Six Key Kubernetes Deployment Strategies&lt;br&gt;
In this detailed guide, we’ll dive into six production-grade deployment strategies every DevOps engineer must know, along with their real-world trade-offs, use cases, and scenario-based interview examples that will help you stand out.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Canary Deployment – The "Gradual Rollout"&lt;br&gt;
What It Is&lt;br&gt;
The Canary deployment introduces a new version (V2) to a small subset of users while the majority continue using the stable version (V1). If metrics, logs, and monitoring results show healthy behavior, traffic to the new version is gradually increased until full rollout.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When to Use It&lt;br&gt;
Introducing new or risky features&lt;/p&gt;

&lt;p&gt;Deploying critical infrastructure changes&lt;/p&gt;

&lt;p&gt;Wanting to validate performance in live production&lt;/p&gt;

&lt;p&gt;Pros&lt;br&gt;
Limits user impact if something fails&lt;/p&gt;

&lt;p&gt;Enables real-world A/B validation&lt;/p&gt;

&lt;p&gt;Integrates well with metrics-driven automation&lt;/p&gt;

&lt;p&gt;Cons&lt;br&gt;
Requires traffic routing control (e.g., Istio, NGINX, or service mesh)&lt;/p&gt;

&lt;p&gt;Complex configuration for progressive rollout&lt;/p&gt;

&lt;p&gt;Real-World Example&lt;br&gt;
Imagine an e-commerce platform releasing a new ML-based recommendation engine. Instead of exposing it to all users, the company deploys it to 5% of traffic. Observability tools (Prometheus, Grafana) monitor accuracy, response time, and user conversions before a full rollout.&lt;/p&gt;

&lt;p&gt;Interview Scenario&lt;br&gt;
Answer:&lt;br&gt;
I’d implement a Canary deployment, routing a small percentage of live traffic to the new model (V2) while most users continue with V1. Using metrics and logging (via Prometheus and Grafana), I’d assess performance. If stable, I’d gradually increase traffic until full adoption. This approach ensures minimal risk and easy rollback.&lt;/p&gt;
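The canary split described above can be approximated in plain Kubernetes, without a service mesh, by running two Deployments behind one Service and using replica counts as a rough traffic ratio. This is a minimal sketch with hypothetical names and images (web, example/web:v1 and :v2), not a production recipe:

```yaml
# One Service selects both Deployments by the shared "app: web" label;
# replica counts (9 vs 1) give an approximate 90/10 traffic split.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-stable
spec:
  replicas: 9            # ~90% of traffic
  selector:
    matchLabels:
      app: web
      track: stable
  template:
    metadata:
      labels:
        app: web
        track: stable
    spec:
      containers:
        - name: web
          image: example/web:v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1            # ~10% of traffic
  selector:
    matchLabels:
      app: web
      track: canary
  template:
    metadata:
      labels:
        app: web
        track: canary
    spec:
      containers:
        - name: web
          image: example/web:v2
```

For precise, percentage-based shifting you would layer a traffic-routing tool (Istio, NGINX ingress weights, Argo Rollouts) on top; the replica-ratio trick only gives coarse control.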

&lt;ol start="2"&gt;
&lt;li&gt;Blue-Green Deployment – The "Big Switch"&lt;br&gt;
What It Is&lt;br&gt;
In Blue-Green deployments, two environments exist simultaneously:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Blue: The live (current) production environment.&lt;/p&gt;

&lt;p&gt;Green: The new version waiting to go live.&lt;/p&gt;

&lt;p&gt;Once testing confirms the new version’s stability, traffic is switched entirely from Blue to Green.&lt;/p&gt;

&lt;p&gt;When to Use It&lt;br&gt;
When zero downtime is mandatory&lt;/p&gt;

&lt;p&gt;For major version upgrades or high-visibility releases&lt;/p&gt;

&lt;p&gt;In environments that support dual infrastructure&lt;/p&gt;

&lt;p&gt;Pros&lt;br&gt;
Instant rollback by reverting traffic to Blue&lt;/p&gt;

&lt;p&gt;Clear separation between environments&lt;/p&gt;

&lt;p&gt;Simple release management&lt;/p&gt;

&lt;p&gt;Cons&lt;br&gt;
Doubles resource requirements temporarily&lt;/p&gt;

&lt;p&gt;Needs traffic management control (e.g., load balancers)&lt;/p&gt;

&lt;p&gt;Real-World Example&lt;br&gt;
A fintech platform scheduled a midnight rollout for a regulatory compliance update. By deploying the new version in the Green environment ahead of time and switching the load balancer during the maintenance window, they ensured a zero-downtime launch.&lt;/p&gt;

&lt;p&gt;Interview Scenario&lt;br&gt;
Answer:&lt;br&gt;
I’d use a Blue-Green deployment. I’d deploy the new version in a parallel Green environment, perform pre-release testing, and switch traffic via the load balancer at launch time. If issues appear, I’d revert to the Blue version immediately, ensuring uninterrupted service.&lt;/p&gt;
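One common plain-Kubernetes way to implement the switch is a Service whose label selector picks the active color. This is a sketch with hypothetical names; the actual cut-over mechanism (load balancer, ingress, or mesh) varies by platform:

```yaml
# The Service initially routes to the Blue Deployment's pods; changing the
# "version" selector to green performs the switch, and flipping it back
# is the instant rollback.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
    version: blue     # change to "green" to cut traffic over
  ports:
    - port: 80
      targetPort: 8080
```

The cut-over is then a single selector update, for example `kubectl patch service web -p '{"spec":{"selector":{"app":"web","version":"green"}}}'`, and rollback is the reverse patch.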

&lt;ol start="3"&gt;
&lt;li&gt;A/B Testing Deployment – The "Data-Driven Experiment"&lt;br&gt;
What It Is&lt;br&gt;
Unlike Canary deployments (which focus on performance validation), A/B testing routes user segments to different application versions based on user attributes (e.g., location, device, or random assignment). It’s primarily a product and UX strategy rather than purely operational.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When to Use It&lt;br&gt;
For UI/UX experiments&lt;/p&gt;

&lt;p&gt;To validate feature effectiveness&lt;/p&gt;

&lt;p&gt;When data-driven decision-making is required&lt;/p&gt;

&lt;p&gt;Pros&lt;br&gt;
Enables measurable user behavior comparisons&lt;/p&gt;

&lt;p&gt;Supports data-backed feature promotion&lt;/p&gt;

&lt;p&gt;Cons&lt;br&gt;
Requires analytics and telemetry setup&lt;/p&gt;

&lt;p&gt;More complex traffic segmentation&lt;/p&gt;

&lt;p&gt;Not ideal for backend-only updates&lt;/p&gt;

&lt;p&gt;Real-World Example&lt;br&gt;
A streaming platform tests two versions of its recommendation UI: one showing horizontal carousels, another using vertical lists. Traffic is split 50/50, and metrics like user engagement and watch time determine which design performs better.&lt;/p&gt;

&lt;p&gt;Interview Scenario&lt;br&gt;
Answer:&lt;br&gt;
I’d go with A/B Testing. It lets me expose two different UI versions to subsets of users and collect real-time metrics like completion and retention rates. Based on results, I’d promote the best-performing version to production.&lt;/p&gt;
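Attribute-based segmentation usually requires a routing layer. As one illustration, assuming Istio is installed and DestinationRule subsets v1/v2 are already defined, a VirtualService can route a cohort identified by a custom header (the header name x-variant here is hypothetical, e.g. set by the frontend for assigned users):

```yaml
# Requests carrying "x-variant: b" reach version v2; all other traffic
# falls through to the default route and gets v1.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web
spec:
  hosts:
    - web
  http:
    - match:
        - headers:
            x-variant:
              exact: "b"
      route:
        - destination:
            host: web
            subset: v2
    - route:
        - destination:
            host: web
            subset: v1
```

Engagement metrics for each cohort are then collected by your analytics stack; the mesh only handles the split.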

&lt;ol start="4"&gt;
&lt;li&gt;Rolling Update – The "Smooth Transition"&lt;br&gt;
What It Is&lt;br&gt;
Rolling updates are Kubernetes’ default deployment method. Pods running the old version (V1) are replaced incrementally by new pods (V2), ensuring that some old pods always remain available during the transition.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When to Use It&lt;br&gt;
For routine updates requiring continuous availability&lt;/p&gt;

&lt;p&gt;When backward compatibility between versions exists&lt;/p&gt;

&lt;p&gt;Pros&lt;br&gt;
No downtime&lt;/p&gt;

&lt;p&gt;Fully automated in Kubernetes&lt;/p&gt;

&lt;p&gt;Simple rollback with deployment history&lt;/p&gt;

&lt;p&gt;Cons&lt;br&gt;
Slightly slower rollout&lt;/p&gt;

&lt;p&gt;Risky if database schema changes are not compatible&lt;/p&gt;

&lt;p&gt;Real-World Example&lt;br&gt;
A SaaS company updates its payment service microservice with enhanced retry logic. A Rolling Update ensures that only one pod is replaced at a time, maintaining seamless service continuity across the cluster.&lt;/p&gt;

&lt;p&gt;Interview Scenario&lt;br&gt;
Answer:&lt;br&gt;
A Rolling Update suits this best. Kubernetes ensures new pods are created and healthy before terminating old ones. This keeps service disruption minimal and allows for a safe rollback via deployment history if issues occur.&lt;/p&gt;
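Rolling behavior is tuned on the Deployment itself via the standard apps/v1 fields maxSurge and maxUnavailable; the names and image below are illustrative:

```yaml
# At most one extra pod is created (maxSurge: 1) and at most one pod is
# unavailable (maxUnavailable: 1) at any point during the rollout.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: example/payments:v2
```

If the rollout misbehaves, `kubectl rollout undo deployment/payments` reverts to the previous revision from the deployment history.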

&lt;ol start="5"&gt;
&lt;li&gt;Recreate Deployment – The "Wipe and Replace"&lt;br&gt;
What It Is&lt;br&gt;
In the Recreate strategy, all old pods are terminated before deploying new ones. It’s straightforward but causes temporary downtime.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When to Use It&lt;br&gt;
For non-critical services&lt;/p&gt;

&lt;p&gt;In development or staging environments&lt;/p&gt;

&lt;p&gt;When downtime is acceptable&lt;/p&gt;

&lt;p&gt;Pros&lt;br&gt;
Simplest to configure and manage&lt;/p&gt;

&lt;p&gt;Minimal infrastructure cost&lt;/p&gt;

&lt;p&gt;Cons&lt;br&gt;
Causes downtime&lt;/p&gt;

&lt;p&gt;Not suitable for user-facing or mission-critical systems&lt;/p&gt;

&lt;p&gt;Real-World Example&lt;br&gt;
An internal DevOps monitoring dashboard is updated during off-hours. Using a Recreate deployment, engineers shut down the old version, deploy the new one, and verify functionality — simple and efficient.&lt;/p&gt;

&lt;p&gt;Interview Scenario&lt;br&gt;
Answer:&lt;br&gt;
I’d choose Recreate. It’s straightforward and resource-efficient, ideal for internal or non-critical apps. Since downtime is acceptable, we can afford the brief outage while deploying a new version cleanly.&lt;/p&gt;
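Recreate is selected with a one-line strategy change on the Deployment (names and image illustrative):

```yaml
# strategy.type: Recreate tells Kubernetes to terminate all old pods first,
# then start the new ones - expect a brief outage between versions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: internal-dashboard
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: internal-dashboard
  template:
    metadata:
      labels:
        app: internal-dashboard
    spec:
      containers:
        - name: dashboard
          image: example/dashboard:v2
```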

&lt;ol start="6"&gt;
&lt;li&gt;Shadow Deployment – The "Silent Test"&lt;br&gt;
What It Is&lt;br&gt;
In Shadow deployments, live production traffic is mirrored to a new version (V2) while users continue interacting only with the stable version (V1). The new version processes the requests but doesn’t return responses to end users.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When to Use It&lt;br&gt;
For load testing under real production traffic&lt;/p&gt;

&lt;p&gt;During architecture rewrites or migrations&lt;/p&gt;

&lt;p&gt;When you want zero user impact validation&lt;/p&gt;

&lt;p&gt;Pros&lt;br&gt;
Safely tests under real-world load&lt;/p&gt;

&lt;p&gt;Identifies performance bottlenecks early&lt;/p&gt;

&lt;p&gt;No risk to end users&lt;/p&gt;

&lt;p&gt;Cons&lt;br&gt;
High resource utilization (traffic duplication)&lt;/p&gt;

&lt;p&gt;Complex setup and routing configuration&lt;/p&gt;

&lt;p&gt;Real-World Example&lt;br&gt;
A company refactors its monolithic application into microservices. Before the full switch, it mirrors production traffic to the new microservices (V2) using Istio. Engineers observe latency, throughput, and failure rates — ensuring confidence before the live transition.&lt;/p&gt;

&lt;p&gt;Interview Scenario&lt;br&gt;
Answer:&lt;br&gt;
I’d implement a Shadow Deployment. It mirrors live traffic to the new architecture while users still receive responses from the old system. This enables realistic load testing and performance observation without impacting user experience.&lt;/p&gt;
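With Istio, mirroring is configured on the VirtualService. This sketch assumes subsets v1/v2 are already defined in a DestinationRule; it mirrors a copy of 5% of requests to v2 while every real response still comes from v1:

```yaml
# All user traffic is routed to v1; a copy of 5% of requests is sent to v2
# as fire-and-forget - mirrored responses are discarded, never seen by users.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web
spec:
  hosts:
    - web
  http:
    - route:
        - destination:
            host: web
            subset: v1
          weight: 100
      mirror:
        host: web
        subset: v2
      mirrorPercentage:
        value: 5.0
```

Raising mirrorPercentage toward 100 lets the shadow version absorb full production load before the real cut-over.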

&lt;p&gt;Comparative Summary&lt;/p&gt;

&lt;p&gt;Strategy | Downtime | Rollback Ease | Resource Usage | Use Case&lt;br&gt;
Canary | None | Moderate | Medium | Gradual feature rollout&lt;br&gt;
Blue-Green | None | Easy | High | Major, zero-downtime release&lt;br&gt;
A/B Testing | None | Manual | High | UX experiments, data-driven validation&lt;br&gt;
Rolling Update | None | Easy | Low | Routine production updates&lt;br&gt;
Recreate | Yes | N/A | Low | Non-critical environments&lt;br&gt;
Shadow | None | Complex | Very High | Performance testing and architecture validation&lt;/p&gt;

&lt;p&gt;Best Practices for Kubernetes Deployments&lt;br&gt;
Automate Rollouts and Rollbacks: Use tools like Argo Rollouts or Flagger for progressive delivery automation.&lt;/p&gt;

&lt;p&gt;Integrate Observability: Always monitor key metrics (latency, error rates, CPU usage) using Prometheus, Grafana, and the ELK stack.&lt;/p&gt;

&lt;p&gt;Leverage Feature Flags: Tools like LaunchDarkly or Unleash decouple deployment from feature release, adding flexibility.&lt;/p&gt;

&lt;p&gt;Test in Production Carefully: Adopt Shadow or Canary strategies for high-risk deployments and validate using real traffic.&lt;/p&gt;

&lt;p&gt;Version Your Configurations: Use Helm or Kustomize to maintain multiple deployment configurations safely.&lt;/p&gt;

&lt;p&gt;Secure Your Pipelines: Integrate RBAC, image scanning, and admission controllers to ensure compliance and security.&lt;/p&gt;

&lt;p&gt;Plan for Rollback: Always design deployments with rollback capability in mind; never deploy blind.&lt;/p&gt;

&lt;p&gt;How to Talk About This in Interviews&lt;br&gt;
Interviewers love when candidates:&lt;/p&gt;

&lt;p&gt;Explain why they’d choose a strategy&lt;/p&gt;

&lt;p&gt;Mention trade-offs and real metrics&lt;/p&gt;

&lt;p&gt;Reference Kubernetes primitives like Deployments, ReplicaSets, and Services&lt;/p&gt;

&lt;p&gt;Mention real tools (e.g., Istio, ArgoCD, Helm, Prometheus)&lt;/p&gt;

&lt;p&gt;Conclusion&lt;br&gt;
Kubernetes deployment strategies aren’t just technical patterns — they’re risk management tools that define how safely, confidently, and efficiently teams deliver innovation.&lt;/p&gt;

&lt;p&gt;Whether you’re deploying a new ML model, refactoring a legacy monolith, or running high-availability APIs, mastering these strategies will make you a stronger engineer and a standout interview candidate.&lt;/p&gt;

&lt;p&gt;Each method — from Canary to Shadow — brings its own balance of speed, safety, and simplicity. The real skill lies in choosing the right one for the right scenario.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>azure</category>
      <category>aws</category>
    </item>
    <item>
      <title>Kubernetes – Creating Multiple Containers in a Pod</title>
      <dc:creator>Naveen Jayachandran</dc:creator>
      <pubDate>Mon, 03 Nov 2025 17:24:18 +0000</pubDate>
      <link>https://forem.com/naveen_jayachandran/kubernetes-creating-multiple-containers-in-a-pod-1l3k</link>
      <guid>https://forem.com/naveen_jayachandran/kubernetes-creating-multiple-containers-in-a-pod-1l3k</guid>
      <description>&lt;p&gt;Introduction&lt;br&gt;
Kubernetes is an open-source container orchestration platform, open-sourced by Google in 2014 and written in Go (Golang). It automates container deployment, load balancing, scaling, and management across multiple environments, including physical, virtual, and cloud-based infrastructures.&lt;/p&gt;

&lt;p&gt;All major cloud providers (like AWS, Azure, and GCP) support Kubernetes as a managed service. Kubernetes ensures your containers run efficiently and reliably through its powerful automation and scheduling capabilities.&lt;/p&gt;

&lt;p&gt;Kubernetes Architecture Overview&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Kube-API Server&lt;br&gt;
The API Server is the main entry point to the Kubernetes control plane.&lt;br&gt;
It directly interacts with users through YAML or JSON configuration files and processes requests to manage cluster resources. It acts as the frontend of the control plane.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ETCD&lt;br&gt;
etcd is a consistent, distributed key-value store that maintains the cluster state and metadata.&lt;br&gt;
It ensures high availability and reliability of data across nodes.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Key Features of etcd:&lt;/p&gt;

&lt;p&gt;Secure and fault-tolerant&lt;/p&gt;

&lt;p&gt;Fully replicated&lt;/p&gt;

&lt;p&gt;High performance&lt;/p&gt;

&lt;p&gt;Highly available&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;p&gt;Controller Manager&lt;br&gt;
The Controller Manager continuously ensures that the actual state of the cluster matches the desired state defined in your configuration files. It automates the lifecycle of pods, nodes, and other Kubernetes objects.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kube-Scheduler&lt;br&gt;
The Kube-Scheduler assigns newly created pods to suitable nodes based on resource availability and constraints. It decides where each pod should run within the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pod&lt;br&gt;
A Pod is the smallest deployable unit in Kubernetes. It can contain one or more containers that share the same network namespace (IP address and ports).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Although the best practice is to run one container per pod, sometimes multiple containers are necessary — for example, when they must share storage or communicate directly within the same environment.&lt;/p&gt;

&lt;ol start="6"&gt;
&lt;li&gt;&lt;p&gt;Container&lt;br&gt;
A container is the runnable instance of an image with all dependencies bundled together.&lt;br&gt;
It is lightweight, fast to start, and does not require pre-allocated memory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kube-Proxy&lt;br&gt;
Kube-Proxy manages network communication inside and outside the cluster.&lt;br&gt;
It maintains network rules on each node and load-balances Service traffic to the correct Pods. (Pod IP addresses themselves are assigned by the cluster's network plugin, not by Kube-Proxy.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubelet&lt;br&gt;
The Kubelet is an agent that runs on every node in the cluster.&lt;br&gt;
It receives instructions from the API Server and ensures that containers described in Pod specifications are running and healthy.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Creating Multiple Containers in a Single Pod&lt;br&gt;
Let’s go step-by-step through how to create a pod with multiple containers.&lt;/p&gt;

&lt;p&gt;Step 1: Open Your Kubernetes Environment&lt;br&gt;
Ensure that Kubernetes is properly installed and configured on your machine or cluster.&lt;br&gt;
You should be able to run:&lt;/p&gt;

&lt;p&gt;kubectl version&lt;br&gt;
Step 2: Create a Manifest File&lt;br&gt;
All Kubernetes resources are created using manifest files written in YAML format.&lt;br&gt;
We’ll create one for our multi-container pod.&lt;/p&gt;

&lt;p&gt;Run the following command to create a file:&lt;/p&gt;

&lt;p&gt;vi multicontainer.yml&lt;br&gt;
Press i to enter insert mode and then type the YAML code below.&lt;/p&gt;

&lt;p&gt;YAML Code for Creating Multiple Containers in a Pod&lt;br&gt;
apiVersion: v1&lt;br&gt;
kind: Pod&lt;br&gt;
metadata:&lt;br&gt;
  name: testpod1&lt;br&gt;
spec:&lt;br&gt;
  containers:&lt;br&gt;
    - name: c00&lt;br&gt;
      image: ubuntu&lt;br&gt;
      command: ["/bin/bash", "-c", "while true; do echo Hello-Coder; sleep 5; done"]&lt;br&gt;
    - name: c01&lt;br&gt;
      image: ubuntu&lt;br&gt;
      command: ["/bin/bash", "-c", "while true; do echo Hello-Programmer; sleep 5; done"]&lt;br&gt;
After typing the code:&lt;/p&gt;

&lt;p&gt;Press ESC&lt;/p&gt;

&lt;p&gt;Type :wq and press Enter to save and exit.&lt;/p&gt;

&lt;p&gt;Step 3: Apply the Manifest File&lt;br&gt;
Run the following command to create the pod:&lt;/p&gt;

&lt;p&gt;kubectl apply -f multicontainer.yml&lt;br&gt;
You should see output like:&lt;/p&gt;

&lt;p&gt;pod/testpod1 created&lt;br&gt;
Step 4: Verify Pod Status&lt;br&gt;
To check if the pod is running, execute:&lt;/p&gt;

&lt;p&gt;kubectl get pods&lt;br&gt;
Sample Output:&lt;/p&gt;

&lt;p&gt;NAME        READY   STATUS    RESTARTS   AGE&lt;br&gt;
testpod1    2/2     Running   0          30s&lt;br&gt;
The READY column showing 2/2 confirms that both containers inside the pod are running successfully.&lt;/p&gt;

&lt;p&gt;Step 5: View Container Logs&lt;br&gt;
You can view the logs of each container individually using:&lt;/p&gt;

&lt;p&gt;Logs from Container c00:&lt;br&gt;
kubectl logs -f testpod1 -c c00&lt;br&gt;
Output:&lt;/p&gt;

&lt;p&gt;Hello-Coder&lt;br&gt;
Hello-Coder&lt;br&gt;
...&lt;br&gt;
Logs from Container c01:&lt;br&gt;
kubectl logs -f testpod1 -c c01&lt;br&gt;
Output:&lt;/p&gt;

&lt;p&gt;Hello-Programmer&lt;br&gt;
Hello-Programmer&lt;br&gt;
...&lt;br&gt;
Summary&lt;br&gt;
Pod Name: testpod1&lt;/p&gt;

&lt;p&gt;Container 1: c00 → prints Hello-Coder&lt;/p&gt;

&lt;p&gt;Container 2: c01 → prints Hello-Programmer&lt;/p&gt;

&lt;p&gt;You have successfully created a multi-container pod in Kubernetes.&lt;br&gt;
Although each pod typically contains a single container, multiple containers can coexist in the same pod when they share resources or need close coordination.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>azure</category>
      <category>aws</category>
    </item>
    <item>
      <title>How to Run Shell Commands in Kubernetes Pods or Containers</title>
      <dc:creator>Naveen Jayachandran</dc:creator>
      <pubDate>Mon, 03 Nov 2025 17:23:31 +0000</pubDate>
      <link>https://forem.com/naveen_jayachandran/how-to-run-shell-commands-in-kubernetes-pods-or-containers-25p9</link>
      <guid>https://forem.com/naveen_jayachandran/how-to-run-shell-commands-in-kubernetes-pods-or-containers-25p9</guid>
      <description>&lt;p&gt;Kubernetes—commonly known as K8s—is an open-source container orchestration platform originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF). It is designed to automate the deployment, scaling, and management of containerized applications, helping organizations achieve agility and consistency across environments.&lt;/p&gt;

&lt;p&gt;Running shell commands inside Kubernetes Pods or containers is a fundamental skill for developers and DevOps engineers managing applications in Kubernetes clusters. This guide will walk you through how to do it step-by-step, using Minikube for demonstration.&lt;/p&gt;

&lt;p&gt;Setting Up a Kubernetes Cluster&lt;br&gt;
Before executing shell commands, ensure that your Kubernetes cluster is properly set up. You can do this in one of two ways:&lt;/p&gt;

&lt;p&gt;Full Cluster Setup – A production-grade environment with multiple nodes.&lt;/p&gt;

&lt;p&gt;Single-Node Setup with Minikube – A lightweight local Kubernetes environment ideal for testing and development.&lt;/p&gt;

&lt;p&gt;The main difference is that Minikube runs on a single node, making it simpler to configure and manage. You can install Minikube based on your system’s OS and architecture. (For more details, refer to: Setting Up Minikube in Your Local System.)&lt;/p&gt;

&lt;p&gt;Understanding Kubernetes Pods and Containers&lt;br&gt;
In Kubernetes, Pods are the smallest deployable units. A pod encapsulates one or more tightly coupled containers that share the same network namespace and storage volumes.&lt;/p&gt;

&lt;p&gt;Think of a Pod as an abstraction layer on top of containers that adds metadata and orchestration capabilities. It contains the application’s code, libraries, runtime, and dependencies—ensuring consistent behavior across environments.&lt;/p&gt;

&lt;p&gt;Now, let’s learn how to execute shell commands inside a Kubernetes Pod.&lt;/p&gt;

&lt;p&gt;Executing Commands Inside a Kubernetes Pod&lt;br&gt;
In this example, we’ll use Minikube to demonstrate the process.&lt;/p&gt;

&lt;p&gt;Step 1: Start Minikube&lt;br&gt;
Run the following command to start your local Kubernetes cluster:&lt;/p&gt;

&lt;p&gt;minikube start&lt;br&gt;
This initializes and configures a single-node Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Step 2: Check Minikube Status&lt;br&gt;
Verify that Minikube is running:&lt;/p&gt;

&lt;p&gt;minikube status&lt;br&gt;
If everything is working correctly, you should see the cluster components listed as Running.&lt;/p&gt;

&lt;p&gt;Step 3: Create a Pod Using an NGINX Image&lt;br&gt;
Run the following command to create a Pod:&lt;/p&gt;

&lt;p&gt;kubectl run nginx-pod --image=nginx&lt;br&gt;
This command pulls the NGINX image from Docker Hub and creates a pod named nginx-pod.&lt;/p&gt;

&lt;p&gt;Step 4: List All Pods&lt;br&gt;
Confirm that your pod is created and running:&lt;/p&gt;

&lt;p&gt;kubectl get pods&lt;br&gt;
You should see nginx-pod listed with a Running status.&lt;/p&gt;

&lt;p&gt;Step 5: Access the Minikube Environment&lt;br&gt;
Connect to your Minikube instance via SSH:&lt;/p&gt;

&lt;p&gt;minikube ssh&lt;br&gt;
This allows you to interact directly with the underlying virtual machine hosting your Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Step 6: List Running Containers&lt;br&gt;
Inside the Minikube environment, list all containers:&lt;/p&gt;

&lt;p&gt;docker ps -a&lt;br&gt;
You’ll see containers corresponding to your Kubernetes pods, including the NGINX container.&lt;/p&gt;

&lt;p&gt;Step 7: Access the Container Shell&lt;br&gt;
To enter the NGINX container, use:&lt;/p&gt;

&lt;p&gt;docker exec -it &lt;container_id&gt; /bin/bash&lt;br&gt;
Once inside, you can run Linux shell commands such as:&lt;/p&gt;

&lt;p&gt;ls&lt;br&gt;
pwd&lt;br&gt;
cd /usr/share/nginx/html&lt;br&gt;
You are now working directly within the containerized environment of your Kubernetes pod.&lt;/p&gt;

&lt;p&gt;Alternative: Using kubectl exec (Recommended)&lt;br&gt;
Instead of SSHing into Minikube and using Docker, you can directly execute commands in your pod from your local terminal:&lt;/p&gt;

&lt;p&gt;kubectl exec -it nginx-pod -- /bin/bash&lt;br&gt;
This command opens an interactive shell inside the nginx-pod container without requiring Docker commands.&lt;/p&gt;

&lt;p&gt;Best Practices for Running Shell Commands in Kubernetes&lt;br&gt;
Use kubectl exec Carefully: Always specify the exact pod and container when executing commands to avoid unintentional changes.&lt;/p&gt;

&lt;p&gt;Use the -c Flag for Multi-Container Pods: When a pod has multiple containers, access a specific one by adding the -c flag:&lt;br&gt;
kubectl exec -it &lt;pod_name&gt; -c &lt;container_name&gt; -- /bin/bash&lt;/p&gt;

&lt;p&gt;Execute Sequential Commands with &amp;amp;&amp;amp;: For running multiple commands in sequence:&lt;br&gt;
kubectl exec -it nginx-pod -- sh -c "cd /usr/share/nginx/html &amp;amp;&amp;amp; ls &amp;amp;&amp;amp; cat index.html"&lt;/p&gt;

&lt;p&gt;Debug with kubectl debug: Launch a debugging session in a running pod:&lt;br&gt;
kubectl debug -it nginx-pod --image=busybox&lt;br&gt;
This is useful for troubleshooting live workloads.&lt;/p&gt;

&lt;p&gt;Use Non-Interactive Execution for Automation: For scripts or CI/CD pipelines, run commands without an interactive shell:&lt;br&gt;
kubectl exec nginx-pod -- ls /var/log/nginx&lt;/p&gt;

&lt;p&gt;Conclusion&lt;br&gt;
Understanding how to run shell commands inside Kubernetes Pods is essential for managing, debugging, and maintaining containerized applications.&lt;/p&gt;

&lt;p&gt;Using tools like Minikube simplifies learning and experimentation, while kubectl exec provides a direct and efficient way to interact with your containers.&lt;/p&gt;

&lt;p&gt;Mastering these techniques will help developers and DevOps engineers manage Kubernetes environments with greater confidence, agility, and control—boosting both development speed and operational reliability.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>azure</category>
      <category>aws</category>
    </item>
    <item>
      <title>Kubernetes – Node</title>
      <dc:creator>Naveen Jayachandran</dc:creator>
      <pubDate>Mon, 03 Nov 2025 17:22:34 +0000</pubDate>
      <link>https://forem.com/naveen_jayachandran/kubernetes-node-3764</link>
      <guid>https://forem.com/naveen_jayachandran/kubernetes-node-3764</guid>
      <description>&lt;p&gt;Kubernetes Nodes are the actual worker or master machines where the real execution of workloads takes place. Each node runs essential services to operate Pods and is managed by the Kubernetes Control Plane. A single Kubernetes node can host multiple Pods, and each Pod can run one or more containers.&lt;/p&gt;

&lt;p&gt;There are three key processes that operate on every node to schedule and manage Pods:&lt;/p&gt;

&lt;p&gt;Container Runtime: The engine that runs containers inside Pods. Example: Docker, containerd, CRI-O.&lt;/p&gt;

&lt;p&gt;Kubelet: The agent responsible for communicating with the Control Plane and the container runtime. It ensures Pods are running as defined in the PodSpec.&lt;/p&gt;

&lt;p&gt;Kube-Proxy: Handles networking for Pods by forwarding requests from Kubernetes Services to the correct Pods on the node.&lt;/p&gt;

&lt;p&gt;What is Kubernetes?&lt;br&gt;
Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF), Kubernetes is written in Go (Golang) and supports deployment on public, private, and hybrid cloud environments.&lt;/p&gt;

&lt;p&gt;Kubernetes enables organizations to manage a large number of containers as a single logical unit, simplifying scaling and fault tolerance across infrastructure.&lt;/p&gt;

&lt;p&gt;What is a Kubernetes Node?&lt;br&gt;
A Kubernetes Node is a physical or virtual machine that runs containerized workloads. Each node contains:&lt;/p&gt;

&lt;p&gt;Kubelet – communicates with the Control Plane and ensures containers are running correctly.&lt;/p&gt;

&lt;p&gt;Container Runtime – executes the containers (e.g., Docker, containerd).&lt;/p&gt;

&lt;p&gt;Kube-Proxy – manages network rules and service discovery to route traffic efficiently.&lt;/p&gt;

&lt;p&gt;Nodes collectively form the computing capacity of a Kubernetes cluster, where actual application workloads run.&lt;/p&gt;

&lt;p&gt;How Does a Kubernetes Pod Work?&lt;br&gt;
A Pod is the smallest deployable unit in Kubernetes — often compared to a process in a traditional OS.&lt;/p&gt;

&lt;p&gt;Each Pod encapsulates one or more tightly coupled containers that share:&lt;/p&gt;

&lt;p&gt;Storage (volumes)&lt;/p&gt;

&lt;p&gt;Networking (same IP and port space)&lt;/p&gt;

&lt;p&gt;Configuration data&lt;/p&gt;

&lt;p&gt;Pods are ephemeral by design. If a Pod fails, Kubernetes automatically replaces it with a new replica to maintain desired application state and availability.&lt;/p&gt;

&lt;p&gt;How Does a Kubernetes Node Work?&lt;br&gt;
Nodes provide the execution environment for Pods. Kubernetes distinguishes between two types of nodes:&lt;/p&gt;

&lt;p&gt;Master Node (Control Plane): Manages cluster operations such as scheduling, scaling, and maintaining cluster state.&lt;/p&gt;

&lt;p&gt;Worker Node: Executes application Pods and reports status to the Control Plane.&lt;/p&gt;

&lt;p&gt;A cluster can have any number of worker nodes. For high availability and failover, it is best practice to run multiple control plane (master) nodes, typically three, so that etcd can maintain quorum if one fails.&lt;/p&gt;

&lt;p&gt;Kubernetes Node Name Uniqueness&lt;br&gt;
Each node in a Kubernetes cluster must have a unique name. Duplicate node names lead to inconsistencies and conflicts in object references, metadata, and state tracking.&lt;/p&gt;

&lt;p&gt;If two nodes share the same name, the Kubernetes scheduler cannot reliably distinguish between them, which may result in misdirected workloads or volume attachments.&lt;/p&gt;

&lt;p&gt;Kubernetes Nodes Not Ready&lt;br&gt;
To view nodes and their status:&lt;/p&gt;

&lt;p&gt;kubectl get nodes&lt;br&gt;
Node statuses include:&lt;/p&gt;

&lt;p&gt;Ready: Node is healthy and available for scheduling Pods.&lt;/p&gt;

&lt;p&gt;NotReady: Node is currently unhealthy (could be due to network failure, kubelet crash, or pod issues).&lt;/p&gt;

&lt;p&gt;Unknown: Node is not responding to the Control Plane (communication timeout or network issue).&lt;/p&gt;

&lt;p&gt;Self-registration of Kubernetes Nodes&lt;br&gt;
Nodes must be registered with the Kubernetes API server before the Control Plane can schedule Pods onto them.&lt;/p&gt;

&lt;p&gt;By default, nodes self-register using the kubelet process. Kubelet communicates with the API server and automatically creates a Node object in the cluster.&lt;/p&gt;

&lt;p&gt;Options for Self-registration&lt;br&gt;
Kubeconfig Access: Provide kubelet with a kubeconfig file path for API server authentication.&lt;/p&gt;

&lt;p&gt;Register Node Flag: The kubelet flag --register-node=true (default) allows automatic registration with the API server, which then creates the Node object used by the scheduler.&lt;/p&gt;

&lt;p&gt;Manual Kubernetes Node Administration&lt;br&gt;
If self-registration is disabled, nodes can be manually registered or removed:&lt;/p&gt;

&lt;p&gt;kubectl apply -f node.yaml&lt;br&gt;
kubectl delete node &lt;node_name&gt;&lt;br&gt;
When creating a Node object manually, include name, labels, and taints in the YAML file.&lt;br&gt;
Labels and taints help control Pod scheduling to ensure workloads are deployed only on compatible nodes.&lt;/p&gt;
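A manually created Node object with labels and taints might look like the following sketch; all names and values here are placeholders, not conventions:

```yaml
# Hypothetical Node object for manual registration. The "disktype" label lets
# Pods target this node via nodeSelector; the taint repels Pods that do not
# declare a matching toleration.
apiVersion: v1
kind: Node
metadata:
  name: worker-2
  labels:
    disktype: ssd
spec:
  taints:
    - key: dedicated
      value: gpu
      effect: NoSchedule
```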

&lt;p&gt;Kubernetes Node Status&lt;br&gt;
To describe a node in detail:&lt;/p&gt;

&lt;p&gt;kubectl describe node &lt;node_name&gt;&lt;br&gt;
A healthy node includes conditions similar to:&lt;/p&gt;

&lt;p&gt;"conditions": [&lt;br&gt;
  {&lt;br&gt;
    "type": "Ready",&lt;br&gt;
    "status": "True",&lt;br&gt;
    "reason": "KubeletReady",&lt;br&gt;
    "message": "kubelet is posting ready status"&lt;br&gt;
  }&lt;br&gt;
]&lt;br&gt;
Kubernetes Node Controller&lt;br&gt;
The Node Controller monitors node health and manages node lifecycle.&lt;br&gt;
If --register-node=true, nodes are automatically registered.&lt;br&gt;
For manual setups, use --register-node=false to disable auto-registration.&lt;/p&gt;

&lt;p&gt;Resource Capacity Tracking&lt;br&gt;
When a node registers, it reports its resource capacity to the API server, including:&lt;/p&gt;

&lt;p&gt;CPU cores&lt;/p&gt;

&lt;p&gt;Memory&lt;/p&gt;

&lt;p&gt;Ephemeral storage&lt;/p&gt;

&lt;p&gt;Persistent volumes&lt;/p&gt;

&lt;p&gt;The scheduler uses this data to ensure that Pods are placed only on nodes with sufficient available resources.&lt;/p&gt;

&lt;p&gt;Kubernetes Node Topology&lt;br&gt;
Node topology allows controlling how Pods are distributed across nodes and zones for performance, resilience, and affinity requirements.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;apiVersion: v1&lt;br&gt;
kind: Pod&lt;br&gt;
metadata:&lt;br&gt;
  name: my-pod&lt;br&gt;
spec:&lt;br&gt;
  containers:&lt;br&gt;
  - name: my-container&lt;br&gt;
    image: nginx&lt;br&gt;
  nodeSelector:&lt;br&gt;
    topology.kubernetes.io/zone: us-east-1a&lt;/p&gt;

&lt;p&gt;Here, the Pod is scheduled only onto nodes in the us-east-1a availability zone.&lt;/p&gt;

&lt;p&gt;Graceful Node Shutdown&lt;br&gt;
A graceful shutdown allows running Pods to complete their tasks and save their state before termination.&lt;/p&gt;

&lt;p&gt;Benefits:&lt;/p&gt;

&lt;p&gt;Prevents data loss or corruption&lt;/p&gt;

&lt;p&gt;Allows stateful applications to persist important data&lt;/p&gt;

&lt;p&gt;Maintains high availability and minimizes downtime&lt;/p&gt;

&lt;p&gt;Ensures predictable, stable system behavior&lt;/p&gt;

&lt;p&gt;If a Pod does not terminate within the grace period, it is forcefully stopped.&lt;/p&gt;
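
&lt;p&gt;Graceful shutdown behavior is configured on the kubelet via the GracefulNodeShutdown feature; a minimal KubeletConfiguration sketch (the durations are illustrative):&lt;/p&gt;

&lt;p&gt;apiVersion: kubelet.config.k8s.io/v1beta1&lt;br&gt;
kind: KubeletConfiguration&lt;br&gt;
shutdownGracePeriod: 30s&lt;br&gt;
shutdownGracePeriodCriticalPods: 10s&lt;/p&gt;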

&lt;p&gt;Non-Graceful Node Shutdown Handling&lt;br&gt;
A non-graceful shutdown (e.g., power failure) terminates Pods abruptly, without kubelet notification.&lt;/p&gt;

&lt;p&gt;Consequences include:&lt;/p&gt;

&lt;p&gt;Loss of in-memory data or unsaved state&lt;/p&gt;

&lt;p&gt;Stateful applications entering “Terminating” status indefinitely&lt;/p&gt;

&lt;p&gt;Scheduler unable to recreate replacements until node recovery&lt;/p&gt;

&lt;p&gt;This is why ensuring graceful shutdowns is critical in production clusters.&lt;/p&gt;

&lt;p&gt;Kubernetes Nodes vs Kubernetes Pods&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Aspect&lt;/th&gt;&lt;th&gt;Kubernetes Nodes&lt;/th&gt;&lt;th&gt;Kubernetes Pods&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Definition&lt;/td&gt;&lt;td&gt;Physical or virtual machines that run workloads&lt;/td&gt;&lt;td&gt;Smallest deployable unit that runs containers&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Function&lt;/td&gt;&lt;td&gt;Host one or more Pods&lt;/td&gt;&lt;td&gt;Host one or more containers&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Resources&lt;/td&gt;&lt;td&gt;Provide CPU, memory, and storage&lt;/td&gt;&lt;td&gt;Consume resources from the Node&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Responsibility&lt;/td&gt;&lt;td&gt;Managed by the Control Plane&lt;/td&gt;&lt;td&gt;Managed by the Scheduler and Controller Manager&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Role&lt;/td&gt;&lt;td&gt;Run Kubernetes workloads&lt;/td&gt;&lt;td&gt;Execute application containers&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Managing Kubernetes Nodes&lt;br&gt;
Node management includes:&lt;/p&gt;

&lt;p&gt;Provisioning &amp;amp; Deployment – Adding new nodes to increase capacity&lt;/p&gt;

&lt;p&gt;Maintenance &amp;amp; Upgrades – Applying patches, updates, or reconfigurations&lt;/p&gt;

&lt;p&gt;Scaling – Automatically or manually adding/removing nodes for optimal performance and availability&lt;/p&gt;

&lt;p&gt;Optimizing Kubernetes Node Performance&lt;br&gt;
Performance optimization ensures efficient use of compute resources and maintains cluster stability.&lt;/p&gt;

&lt;p&gt;Key focus areas:&lt;/p&gt;

&lt;p&gt;Scheduling strategy&lt;/p&gt;

&lt;p&gt;Resource allocation&lt;/p&gt;

&lt;p&gt;Container runtime tuning&lt;/p&gt;

&lt;p&gt;Resource Utilization Optimization&lt;br&gt;
Container Packing: Maximize utilization by co-locating compatible workloads.&lt;/p&gt;

&lt;p&gt;Resource Requests &amp;amp; Limits: Define fair resource allocations to prevent contention.&lt;/p&gt;
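
&lt;p&gt;For example, requests and limits are declared per container; a minimal sketch (the values are illustrative):&lt;/p&gt;

&lt;p&gt;apiVersion: v1&lt;br&gt;
kind: Pod&lt;br&gt;
metadata:&lt;br&gt;
  name: limited-pod&lt;br&gt;
spec:&lt;br&gt;
  containers:&lt;br&gt;
  - name: app&lt;br&gt;
    image: nginx&lt;br&gt;
    resources:&lt;br&gt;
      requests:&lt;br&gt;
        cpu: "250m"&lt;br&gt;
        memory: "128Mi"&lt;br&gt;
      limits:&lt;br&gt;
        cpu: "500m"&lt;br&gt;
        memory: "256Mi"&lt;/p&gt;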

&lt;p&gt;Eviction Policies: Gracefully evict Pods when nodes run low on resources.&lt;/p&gt;

&lt;p&gt;Resource Monitoring: Continuously track usage to detect inefficiencies and bottlenecks.&lt;/p&gt;

&lt;p&gt;Scheduling Strategies&lt;br&gt;
Node Affinity / Anti-Affinity: Schedule Pods on or away from specific nodes based on labels.&lt;/p&gt;

&lt;p&gt;Workload-Aware Scheduling: Place Pods based on performance characteristics and resource needs.&lt;/p&gt;

&lt;p&gt;Dynamic Scheduling: Continuously rebalance workloads as cluster conditions change.&lt;/p&gt;

&lt;p&gt;Container Runtime Tuning&lt;br&gt;
Configuration Optimization: Adjust runtime settings (e.g., Docker, containerd) for better efficiency.&lt;/p&gt;

&lt;p&gt;Image Optimization: Use minimal, lightweight images to reduce start times.&lt;/p&gt;

&lt;p&gt;Regular Updates: Keep runtime versions current for performance and security improvements.&lt;/p&gt;

&lt;p&gt;Memory Management: Fine-tune container memory allocation for consistent performance.&lt;/p&gt;

&lt;p&gt;Securing Kubernetes Nodes&lt;br&gt;
Node security is essential to protect workloads from unauthorized access and vulnerabilities. Key practices include:&lt;/p&gt;

&lt;p&gt;Node Hardening &amp;amp; Patch Management&lt;/p&gt;

&lt;p&gt;Network Policies and Access Controls&lt;/p&gt;

&lt;p&gt;Container Runtime Security (sandboxing, scanning, non-root execution)&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>azure</category>
      <category>aws</category>
    </item>
    <item>
      <title>Kubernetes Namespaces</title>
      <dc:creator>Naveen Jayachandran</dc:creator>
      <pubDate>Mon, 03 Nov 2025 17:21:20 +0000</pubDate>
      <link>https://forem.com/naveen_jayachandran/kubernetes-namespaces-1jpa</link>
      <guid>https://forem.com/naveen_jayachandran/kubernetes-namespaces-1jpa</guid>
      <description>&lt;p&gt;In Kubernetes, Namespaces provide a logical way to isolate and organize groups of resources within a single cluster. They are especially useful in environments where multiple teams or projects share the same cluster, allowing separation and management of resources independently.&lt;/p&gt;

&lt;p&gt;Each resource in a Namespace must have a unique name, but resources across different Namespaces can share the same name without conflict.&lt;/p&gt;

&lt;p&gt;Default Kubernetes Namespaces&lt;br&gt;
When you create a Kubernetes cluster, it comes with four built-in Namespaces:&lt;/p&gt;

&lt;p&gt;default&lt;/p&gt;

&lt;p&gt;kube-node-lease&lt;/p&gt;

&lt;p&gt;kube-public&lt;/p&gt;

&lt;p&gt;kube-system&lt;/p&gt;

&lt;p&gt;To view all available Namespaces, use:&lt;/p&gt;

&lt;p&gt;kubectl get namespaces&lt;br&gt;
You’ll see the four default Namespaces listed.&lt;/p&gt;

&lt;p&gt;Bonus: Kubernetes Dashboard Namespace (Minikube Specific)&lt;br&gt;
When you install Minikube, an additional Namespace called kubernetes-dashboard is automatically created.&lt;br&gt;
This Namespace is specific to Minikube and not present in standard Kubernetes clusters.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;kube-system
The kube-system Namespace contains core system components and processes managed by Kubernetes itself.
It includes system-level Pods like:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;kube-dns&lt;/p&gt;

&lt;p&gt;kube-proxy&lt;/p&gt;

&lt;p&gt;etcd&lt;/p&gt;

&lt;p&gt;kube-apiserver&lt;/p&gt;

&lt;p&gt;This Namespace is not meant for user-created workloads. Developers should avoid creating or modifying resources in kube-system.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;kube-public
The kube-public Namespace holds publicly accessible cluster information.
It contains a ConfigMap that stores details about the cluster, which can be accessed even without authentication.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To view this information:&lt;/p&gt;

&lt;p&gt;kubectl get configmap cluster-info -n kube-public -o yaml&lt;br&gt;
This command retrieves the cluster-info ConfigMap stored in the kube-public Namespace.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;kube-node-lease
The kube-node-lease Namespace was introduced to improve cluster performance and scalability.
It maintains Node lease objects, which record the heartbeat and availability status of each Node.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each Node has its own lease object, helping the Kubernetes control plane efficiently detect Node failures.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;default
The default Namespace is where Kubernetes places all resources if no Namespace is explicitly specified.
When you create a Pod, Service, or ConfigMap without assigning a Namespace, it’s automatically placed in default.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Creating Namespaces&lt;br&gt;
You can create new Namespaces in two ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Using CLI Commands
Run the following command to create a Namespace named my-ns:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;kubectl create namespace my-ns&lt;br&gt;
To verify:&lt;/p&gt;

&lt;p&gt;kubectl get namespaces&lt;br&gt;
You’ll see my-ns listed among the available Namespaces.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Using a Configuration File
It’s often better to define Namespaces using a YAML configuration file — this provides version control and traceability within your infrastructure code.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;apiVersion: v1&lt;br&gt;
kind: Namespace&lt;br&gt;
metadata:&lt;br&gt;
  name: development&lt;br&gt;
  labels:&lt;br&gt;
    name: development&lt;br&gt;
Apply the file:&lt;/p&gt;

&lt;p&gt;kubectl create -f namespace.yaml&lt;br&gt;
Creating Components in the Default Namespace&lt;br&gt;
When no Namespace is specified, Kubernetes automatically creates resources in the default Namespace.&lt;/p&gt;

&lt;p&gt;Example ConfigMap:&lt;/p&gt;

&lt;p&gt;apiVersion: v1&lt;br&gt;
kind: ConfigMap&lt;br&gt;
metadata:&lt;br&gt;
  name: my-configmap&lt;br&gt;
data:&lt;br&gt;
  db_url: my-service.database&lt;br&gt;
Apply the ConfigMap:&lt;/p&gt;

&lt;p&gt;kubectl apply -f my-config-map.yaml&lt;br&gt;
To verify which Namespace it belongs to:&lt;/p&gt;

&lt;p&gt;kubectl get configmap -n default&lt;br&gt;
Or to view detailed YAML output:&lt;/p&gt;

&lt;p&gt;kubectl get configmap my-configmap -n default -o yaml&lt;br&gt;
You’ll see that the Namespace field is set to default.&lt;/p&gt;

&lt;p&gt;Creating Components in a Custom Namespace&lt;br&gt;
You can assign resources to a specific Namespace in two ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Specifying Namespace via CLI
Add the --namespace flag when applying the resource file:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;kubectl apply -f my-config-map.yaml --namespace=my-ns&lt;br&gt;
Make sure the Namespace (my-ns) already exists, or you’ll get an error.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Specifying Namespace in the YAML File
Include the Namespace directly in the metadata section:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;apiVersion: v1&lt;br&gt;
kind: ConfigMap&lt;br&gt;
metadata:&lt;br&gt;
  name: my-configmap&lt;br&gt;
  namespace: my-ns&lt;br&gt;
data:&lt;br&gt;
  db_url: my-service.database&lt;br&gt;
Apply the file:&lt;/p&gt;

&lt;p&gt;kubectl apply -f my-config-map.yaml&lt;br&gt;
Then verify:&lt;/p&gt;

&lt;p&gt;kubectl get configmap -n my-ns -o yaml&lt;br&gt;
The output will confirm that the ConfigMap now belongs to the my-ns Namespace.&lt;/p&gt;

&lt;p&gt;Changing the Active Namespace&lt;br&gt;
By default, your active Namespace is default.&lt;br&gt;
To easily switch between Namespaces, use the kubens tool from kubectx.&lt;/p&gt;

&lt;p&gt;Install kubectx and kubens&lt;br&gt;
sudo snap install kubectx --classic&lt;br&gt;
View All Namespaces&lt;br&gt;
kubens&lt;br&gt;
The active Namespace will appear highlighted in green.&lt;/p&gt;

&lt;p&gt;Switch to Another Namespace&lt;br&gt;
kubens my-ns&lt;br&gt;
Now, your active Namespace is set to my-ns, meaning any subsequent resource creation (without specifying a Namespace) will default to this Namespace.&lt;/p&gt;

&lt;p&gt;Conclusion&lt;br&gt;
Kubernetes Namespaces are essential for managing complex, multi-team, or multi-project environments within a single cluster.&lt;br&gt;
They offer logical separation, resource isolation, and simplified access control — making them a fundamental part of any production-grade Kubernetes architecture.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>aws</category>
      <category>azure</category>
    </item>
    <item>
      <title>Kubernetes Pods: How to Create and Manage Them</title>
      <dc:creator>Naveen Jayachandran</dc:creator>
      <pubDate>Mon, 03 Nov 2025 17:20:04 +0000</pubDate>
      <link>https://forem.com/naveen_jayachandran/kubernetes-pods-how-to-create-and-manage-them-28hd</link>
      <guid>https://forem.com/naveen_jayachandran/kubernetes-pods-how-to-create-and-manage-them-28hd</guid>
      <description>&lt;p&gt;A Pod represents a single instance of a running process in your Kubernetes cluster and can contain one or more containers. Think of a Pod as a lightweight, application-specific logical host. All containers within a Pod are co-located on the same worker node and share the same execution environment.&lt;/p&gt;

&lt;p&gt;This shared context is what makes Pods special, and it includes:&lt;/p&gt;

&lt;p&gt;Shared Networking&lt;br&gt;
Each Pod gets a unique IP address. All containers within that Pod share this IP and port space, allowing them to communicate with each other over localhost.&lt;/p&gt;

&lt;p&gt;Shared Storage&lt;br&gt;
Containers in a Pod can share storage volumes, providing a common filesystem for data exchange and persistence.&lt;/p&gt;

&lt;p&gt;The Pod Lifecycle&lt;br&gt;
A Pod progresses through several lifecycle phases:&lt;/p&gt;

&lt;p&gt;Pending: The Pod has been accepted by the Kubernetes system, but one or more of its containers have not yet been created. This could be due to image download delays or the scheduler still searching for a suitable node.&lt;/p&gt;

&lt;p&gt;Running: The Pod is bound to a node, and all containers are created. At least one container is running or is in the process of starting or restarting.&lt;/p&gt;

&lt;p&gt;Succeeded: All containers have terminated successfully (exit status 0) and will not restart. This phase is typical for batch jobs.&lt;/p&gt;

&lt;p&gt;Failed: All containers have terminated, and at least one container failed (non-zero exit code).&lt;/p&gt;

&lt;p&gt;Unknown: The Pod state cannot be determined—often due to a network issue with the node.&lt;/p&gt;

&lt;p&gt;Kubernetes Pods Overview&lt;br&gt;
In Kubernetes, Pods are represented as circles in diagrams, with cube-like structures as containers and cylinder-like structures as shared volumes.&lt;/p&gt;

&lt;p&gt;A Pod can contain one or more containers, all sharing:&lt;/p&gt;

&lt;p&gt;The same IP address&lt;/p&gt;

&lt;p&gt;Storage volumes&lt;/p&gt;

&lt;p&gt;Network resources&lt;/p&gt;

&lt;p&gt;Other required configurations&lt;/p&gt;

&lt;p&gt;Pods make it easier to move containers around the cluster. They are managed by controllers, which handle:&lt;/p&gt;

&lt;p&gt;Rollouts: Deploying new versions of Pods&lt;/p&gt;

&lt;p&gt;Replication: Maintaining the desired number of Pods&lt;/p&gt;

&lt;p&gt;Health monitoring: Restarting or replacing failed Pods&lt;/p&gt;

&lt;p&gt;If a node fails, Kubernetes controllers automatically recreate the affected Pods on another node to maintain availability.&lt;/p&gt;

&lt;p&gt;Common Controllers&lt;br&gt;
Jobs → For batch tasks that run once and complete (ephemeral workloads).&lt;/p&gt;

&lt;p&gt;Deployments → For stateless or persistent apps (like web services).&lt;/p&gt;

&lt;p&gt;StatefulSets → For stateful, persistent apps (like databases).&lt;/p&gt;

&lt;p&gt;Pod Operating System&lt;br&gt;
The operating system inside a Pod depends on the container image it uses. For example:&lt;/p&gt;

&lt;p&gt;An Ubuntu-based image → Ubuntu OS inside the Pod&lt;/p&gt;

&lt;p&gt;An Alpine Linux image → Alpine Linux OS inside the Pod&lt;/p&gt;

&lt;p&gt;The choice of base image depends on the application’s requirements and the developer’s preference.&lt;/p&gt;

&lt;p&gt;Pods and Controllers&lt;br&gt;
Pods are ephemeral: once a Pod dies, it is not rescheduled on its own. Therefore, we generally don't create Pods directly. Instead, we use higher-level objects that manage Pods automatically, such as:&lt;/p&gt;

&lt;p&gt;Deployments&lt;/p&gt;

&lt;p&gt;Replication Controllers&lt;/p&gt;

&lt;p&gt;ReplicaSets&lt;/p&gt;

&lt;p&gt;These controllers maintain Pod replicas and ensure high availability.&lt;/p&gt;

&lt;p&gt;Getting Started with Kubernetes Pods&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Pod Imperatively
A quick way to test your cluster setup is by creating a simple Nginx Pod:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;kubectl run nginx --image=nginx&lt;br&gt;
This command creates a Pod named nginx using the official Nginx image.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Generate a Declarative Manifest
Instead of writing YAML manually, you can generate it using kubectl:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;kubectl run nginx --image=nginx --dry-run=client -o yaml &amp;gt; pod.yaml&lt;br&gt;
The --dry-run=client -o yaml flags generate YAML output without actually creating the Pod.&lt;/p&gt;

&lt;p&gt;Your pod.yaml will look like this:&lt;/p&gt;

&lt;p&gt;apiVersion: v1&lt;br&gt;
kind: Pod&lt;br&gt;
metadata:&lt;br&gt;
  creationTimestamp: null&lt;br&gt;
  labels:&lt;br&gt;
    run: nginx&lt;br&gt;
  name: nginx&lt;br&gt;
spec:&lt;br&gt;
  containers:&lt;br&gt;
  - image: nginx&lt;br&gt;
    name: nginx&lt;br&gt;
    resources: {}&lt;br&gt;
  dnsPolicy: ClusterFirst&lt;br&gt;
  restartPolicy: Always&lt;br&gt;
status: {}&lt;/p&gt;

&lt;p&gt;Key Fields:&lt;/p&gt;

&lt;p&gt;apiVersion: API version (e.g., v1)&lt;/p&gt;

&lt;p&gt;kind: Type of object (Pod)&lt;/p&gt;

&lt;p&gt;metadata: Object identifiers such as name and labels&lt;/p&gt;

&lt;p&gt;spec: Desired configuration — containers, images, ports, etc.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Create a Pod Declaratively
First, delete the Pod created imperatively:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;kubectl delete pod nginx&lt;br&gt;
Now, create it from the YAML manifest:&lt;/p&gt;

&lt;p&gt;kubectl apply -f pod.yaml&lt;br&gt;
Running kubectl apply again after modifying the YAML intelligently updates the Pod to match the new desired state.&lt;/p&gt;

&lt;p&gt;Inspecting and Interacting with Your Pod&lt;br&gt;
Get Pod Status&lt;br&gt;
kubectl get pods&lt;br&gt;
Example output:&lt;/p&gt;

&lt;p&gt;NAME    READY   STATUS    RESTARTS   AGE&lt;br&gt;
nginx   1/1     Running   0          60s&lt;br&gt;
Describe the Pod&lt;br&gt;
For detailed status, events, and configuration:&lt;/p&gt;

&lt;p&gt;kubectl describe pod nginx&lt;br&gt;
View Container Logs&lt;br&gt;
kubectl logs nginx&lt;br&gt;
Execute Commands Inside a Container&lt;br&gt;
To open an interactive shell inside the container:&lt;/p&gt;

&lt;p&gt;kubectl exec -it nginx -- bash&lt;br&gt;
Once inside, use standard Linux commands (ls, cat, ps, etc.) to inspect the container.&lt;/p&gt;

&lt;p&gt;Advanced Patterns: Multi-Container Pods&lt;br&gt;
Pods can include multiple containers that work closely together. Two common design patterns are:&lt;/p&gt;

&lt;p&gt;Init Containers&lt;br&gt;
Init Containers run before the main application containers and must complete successfully before the main app starts.&lt;/p&gt;

&lt;p&gt;Use Cases:&lt;/p&gt;

&lt;p&gt;Waiting for a dependent service (e.g., database)&lt;/p&gt;

&lt;p&gt;Running setup or migration scripts&lt;/p&gt;

&lt;p&gt;Cloning a Git repository into a shared volume&lt;/p&gt;

&lt;p&gt;Registering the Pod with a central service&lt;/p&gt;
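
&lt;p&gt;A minimal sketch of the first use case: an Init Container that waits for a (hypothetical) database Service named db to become resolvable before the main app starts:&lt;/p&gt;

&lt;p&gt;apiVersion: v1&lt;br&gt;
kind: Pod&lt;br&gt;
metadata:&lt;br&gt;
  name: app-with-init&lt;br&gt;
spec:&lt;br&gt;
  initContainers:&lt;br&gt;
  - name: wait-for-db&lt;br&gt;
    image: busybox&lt;br&gt;
    command: ['sh', '-c', 'until nslookup db; do sleep 2; done']&lt;br&gt;
  containers:&lt;br&gt;
  - name: app&lt;br&gt;
    image: nginx&lt;/p&gt;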

&lt;p&gt;Sidecar Containers&lt;br&gt;
Sidecars run alongside the main application containers throughout the Pod’s lifecycle. They extend or enhance the main app’s functionality.&lt;/p&gt;

&lt;p&gt;Use Cases:&lt;/p&gt;

&lt;p&gt;Logging: Collect and forward logs&lt;/p&gt;

&lt;p&gt;Monitoring: Gather metrics for Prometheus&lt;/p&gt;

&lt;p&gt;Service Mesh Proxy: Handle traffic via Istio or Linkerd&lt;/p&gt;

&lt;p&gt;Data Sync: Sync files from S3 or Git&lt;/p&gt;
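
&lt;p&gt;A hedged sketch of the logging pattern: a sidecar that tails log files from a volume shared with the main container (the container and path names are illustrative):&lt;/p&gt;

&lt;p&gt;apiVersion: v1&lt;br&gt;
kind: Pod&lt;br&gt;
metadata:&lt;br&gt;
  name: app-with-sidecar&lt;br&gt;
spec:&lt;br&gt;
  volumes:&lt;br&gt;
  - name: logs&lt;br&gt;
    emptyDir: {}&lt;br&gt;
  containers:&lt;br&gt;
  - name: app&lt;br&gt;
    image: nginx&lt;br&gt;
    volumeMounts:&lt;br&gt;
    - name: logs&lt;br&gt;
      mountPath: /var/log/app&lt;br&gt;
  - name: log-forwarder&lt;br&gt;
    image: busybox&lt;br&gt;
    command: ['sh', '-c', 'tail -F /var/log/app/app.log']&lt;br&gt;
    volumeMounts:&lt;br&gt;
    - name: logs&lt;br&gt;
      mountPath: /var/log/app&lt;/p&gt;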

&lt;p&gt;Pod Communication&lt;br&gt;
Internal Communication&lt;br&gt;
Containers in the same Pod communicate via localhost.&lt;/p&gt;

&lt;p&gt;Inter-Pod Communication&lt;br&gt;
Pods within the same cluster communicate using cluster-private IPs assigned by Kubernetes networking.&lt;/p&gt;

&lt;p&gt;If external access is required, you can expose a Pod using a Service.&lt;/p&gt;

&lt;p&gt;Updating and Replacing Pods&lt;br&gt;
Kubernetes manages Pod updates gracefully:&lt;/p&gt;

&lt;p&gt;Updating Pods: Makes configuration or image changes while keeping services running (rolling updates).&lt;/p&gt;

&lt;p&gt;Pod Replacement: When a Pod crashes or is terminated, Kubernetes automatically recreates a new Pod instance — ensuring continuous availability.&lt;/p&gt;

&lt;p&gt;Static Pods&lt;br&gt;
Static Pods are created directly by the kubelet on a node — not by the Kubernetes control plane.&lt;/p&gt;

&lt;p&gt;You define them by placing a Pod manifest file in /etc/kubernetes/manifests/. The kubelet automatically starts and monitors the Pod on that node.&lt;/p&gt;

&lt;p&gt;Static Pods are commonly used for control plane components like kube-apiserver, kube-scheduler, and etcd.&lt;/p&gt;

&lt;p&gt;Basic Kubectl Commands for Kubernetes Pods&lt;br&gt;
Create a Pod&lt;br&gt;
kubectl create -f &amp;lt;pod-definition.yaml&amp;gt;&lt;br&gt;
For example, to create a Pod named AskTech:&lt;/p&gt;

&lt;p&gt;kubectl create -f asktech-pod.yaml&lt;br&gt;
Delete a Pod&lt;br&gt;
kubectl delete -f &amp;lt;pod-definition.yaml&amp;gt;&lt;br&gt;
This deletes the Pod defined in the file.&lt;/p&gt;

&lt;p&gt;Get Pods&lt;br&gt;
kubectl get pod &amp;lt;pod-name&amp;gt; --namespace &amp;lt;namespace&amp;gt;&lt;br&gt;
Lists Pods within a specified namespace.&lt;/p&gt;

&lt;p&gt;Troubleshooting with kubectl&lt;br&gt;
kubectl get pods → List all Pods in the current namespace.&lt;/p&gt;

&lt;p&gt;kubectl describe pod &amp;lt;pod-name&amp;gt; → Get detailed Pod info.&lt;/p&gt;

&lt;p&gt;kubectl logs &amp;lt;pod-name&amp;gt; → Retrieve logs from a specific Pod.&lt;/p&gt;

&lt;p&gt;✅ In Summary:&lt;br&gt;
Kubernetes Pods are the fundamental execution unit in your cluster, enabling efficient container orchestration, networking, and storage sharing. While you can create Pods manually, managing them via higher-level controllers like Deployments or StatefulSets ensures reliability, scalability, and resilience for your workloads.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>azure</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>Kubernetes - Labels &amp; Selectors</title>
      <dc:creator>Naveen Jayachandran</dc:creator>
      <pubDate>Mon, 03 Nov 2025 17:19:00 +0000</pubDate>
      <link>https://forem.com/naveen_jayachandran/kubernetes-labels-selectors-43i6</link>
      <guid>https://forem.com/naveen_jayachandran/kubernetes-labels-selectors-43i6</guid>
      <description>&lt;p&gt;Introduction&lt;br&gt;
In Kubernetes, labels are simple key-value pairs attached to objects such as Pods, Deployments, and Services. They are the foundation of resource organization and selection, allowing you to efficiently group, filter, and manage related objects within your cluster.&lt;/p&gt;

&lt;p&gt;Example label sets:&lt;/p&gt;

&lt;p&gt;app: web-server&lt;br&gt;
tier: frontend&lt;br&gt;
environment: development&lt;br&gt;
release: stable&lt;br&gt;
Labels help answer questions like:&lt;/p&gt;

&lt;p&gt;“Which Pods are part of my frontend tier?”&lt;/p&gt;

&lt;p&gt;“Which resources belong to the staging environment?”&lt;/p&gt;

&lt;p&gt;“Which release version is currently running?”&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Working with Labels
a. Creating a Pod with Labels
Create a Pod manifest with labels.
Save the following as labeled-pod.yaml:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;apiVersion: v1&lt;br&gt;
kind: Pod&lt;br&gt;
metadata:&lt;br&gt;
  name: web-server-pod&lt;br&gt;
  labels:&lt;br&gt;
    app: nginx&lt;br&gt;
    tier: frontend&lt;br&gt;
    environment: development&lt;br&gt;
spec:&lt;br&gt;
  containers:&lt;br&gt;
  - name: nginx-container&lt;br&gt;
    image: nginx&lt;/p&gt;

&lt;p&gt;Apply the manifest:&lt;/p&gt;

&lt;p&gt;kubectl apply -f labeled-pod.yaml&lt;br&gt;
b. Viewing Labels&lt;br&gt;
To list Pods with their labels:&lt;/p&gt;

&lt;p&gt;kubectl get pods --show-labels&lt;br&gt;
Output:&lt;/p&gt;

&lt;p&gt;NAME             READY   STATUS    RESTARTS   AGE   LABELS&lt;br&gt;
web-server-pod   1/1     Running   0          30s   app=nginx,environment=development,tier=frontend&lt;br&gt;
To display specific labels as columns:&lt;/p&gt;

&lt;p&gt;kubectl get pods -L app,environment&lt;br&gt;
c. Adding and Modifying Labels&lt;br&gt;
Add a new label to an existing Pod:&lt;/p&gt;

&lt;p&gt;kubectl label pod web-server-pod owner=admin&lt;br&gt;
Verify the update:&lt;/p&gt;

&lt;p&gt;kubectl get pods --show-labels&lt;br&gt;
Update an existing label (requires the --overwrite flag):&lt;/p&gt;

&lt;p&gt;kubectl label pod web-server-pod environment=staging --overwrite&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Understanding Selectors
If labels are the tags that describe objects, selectors are the filters used to find and group those objects.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Selectors are widely used in:&lt;/p&gt;

&lt;p&gt;Services — to select which Pods to send traffic to.&lt;/p&gt;

&lt;p&gt;Deployments — to define which Pods belong to the deployment.&lt;/p&gt;

&lt;p&gt;ReplicaSets — to manage a specific group of Pods.&lt;/p&gt;

&lt;p&gt;There are two main types of selectors:&lt;/p&gt;

&lt;p&gt;a. Equality-Based Selectors&lt;br&gt;
Filter objects based on exact key-value matches.&lt;/p&gt;

&lt;p&gt;Supported operators:&lt;/p&gt;

&lt;p&gt;=, ==, !=&lt;br&gt;
b. Set-Based Selectors&lt;br&gt;
Allow more flexible filtering by checking if a key’s value exists within (or outside) a set.&lt;/p&gt;

&lt;p&gt;Supported operators:&lt;/p&gt;

&lt;p&gt;in, notin, exists&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Hands-On: Using Selectors
Let’s create two more Pods for demonstration.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;a. database-pod.yaml&lt;br&gt;
apiVersion: v1&lt;br&gt;
kind: Pod&lt;br&gt;
metadata:&lt;br&gt;
  name: database-pod&lt;br&gt;
  labels:&lt;br&gt;
    app: postgres&lt;br&gt;
    tier: backend&lt;br&gt;
    environment: development&lt;br&gt;
spec:&lt;br&gt;
  containers:&lt;br&gt;
  - name: postgres-container&lt;br&gt;
    image: postgres&lt;/p&gt;

&lt;p&gt;b. api-pod.yaml&lt;br&gt;
apiVersion: v1&lt;br&gt;
kind: Pod&lt;br&gt;
metadata:&lt;br&gt;
  name: api-pod&lt;br&gt;
  labels:&lt;br&gt;
    app: user-api&lt;br&gt;
    tier: backend&lt;br&gt;
    environment: production&lt;br&gt;
spec:&lt;br&gt;
  containers:&lt;br&gt;
  - name: api-container&lt;br&gt;
    image: gcr.io/google-samples/hello-app:1.0&lt;/p&gt;

&lt;p&gt;Apply both:&lt;/p&gt;

&lt;p&gt;kubectl apply -f database-pod.yaml -f api-pod.yaml&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Querying Pods with Selectors
Equality-Based Selection
Find all Pods in the development environment:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;kubectl get pods -l environment=development&lt;br&gt;
Find all Pods belonging to the backend tier:&lt;/p&gt;

&lt;p&gt;kubectl get pods -l tier=backend&lt;br&gt;
Combine multiple filters (logical AND):&lt;/p&gt;

&lt;p&gt;kubectl get pods -l 'tier=backend,environment=production'&lt;br&gt;
Set-Based Selection&lt;br&gt;
Find all Pods in either the development or staging environments:&lt;/p&gt;

&lt;p&gt;kubectl get pods -l 'environment in (development,staging)'&lt;br&gt;
Find all Pods that do not belong to the frontend tier:&lt;/p&gt;

&lt;p&gt;kubectl get pods -l 'tier notin (frontend)'&lt;br&gt;
Find all Pods that simply have an app label (regardless of value):&lt;/p&gt;

&lt;p&gt;kubectl get pods -l 'app'&lt;/p&gt;

&lt;p&gt;Summary&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Concept&lt;/th&gt;&lt;th&gt;Purpose&lt;/th&gt;&lt;th&gt;Example&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Label&lt;/td&gt;&lt;td&gt;Key/value metadata attached to objects&lt;/td&gt;&lt;td&gt;app=nginx&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Selector&lt;/td&gt;&lt;td&gt;Filters resources by labels&lt;/td&gt;&lt;td&gt;-l environment=dev&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Equality-Based Selector&lt;/td&gt;&lt;td&gt;Matches exact key/value&lt;/td&gt;&lt;td&gt;tier=backend&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Set-Based Selector&lt;/td&gt;&lt;td&gt;Matches from a set or by existence&lt;/td&gt;&lt;td&gt;environment in (dev, staging)&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Key Takeaways&lt;br&gt;
Labels are the backbone of organization in Kubernetes.&lt;/p&gt;

&lt;p&gt;Selectors use those labels to logically group, filter, and manage resources.&lt;/p&gt;

&lt;p&gt;Kubernetes controllers (Deployments, Services, ReplicaSets) rely heavily on label selectors to define their scope.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>azure</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>Kubernetes - Jobs</title>
      <dc:creator>Naveen Jayachandran</dc:creator>
      <pubDate>Mon, 03 Nov 2025 17:17:57 +0000</pubDate>
      <link>https://forem.com/naveen_jayachandran/kubernetes-jobs-5dc5</link>
      <guid>https://forem.com/naveen_jayachandran/kubernetes-jobs-5dc5</guid>
      <description>&lt;p&gt;In the Kubernetes world, jobs are considered an object that acts as a supervisor or controller for a task. The Kubernetes Job will create a Pod, monitor the task, and recreate another one if that Pod fails for some reason. Upon completion of the task, it will terminate the Pod. Unlike Deployments and Pods, you can specify a Job in Kubernetes which can be an always-running job, a time-based job, or a task-based job. This allows you to tolerate errors or failures that can cause unexpected Pod termination.&lt;/p&gt;

&lt;p&gt;When you submit a Job, it creates one or more Pods as required, carries out the defined task, and keeps the Pods running until the task finishes. The Job keeps track of successful completions as Pods finish execution. When a Job is suspended, all of its active Pods are deleted until the Job is resumed.&lt;/p&gt;

&lt;p&gt;Job Types&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Non-Parallel Job&lt;br&gt;
A simple job where a single task is defined. It will create one Pod, and upon successful completion, the Pod will terminate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Parallel Job with Fixed Completion Count&lt;br&gt;
A more complex task that requires multiple Pods to complete. These Pods run in parallel, and each Pod receives a unique index between 0 and .spec.completions - 1, based on the number specified in the configuration. The Job is considered successful when the number of successful Pod completions matches .spec.completions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Parallel Job with a Work Queue&lt;br&gt;
In this type, multiple Pods run in parallel to complete complex workloads with dependencies. To decide what each Pod should work on, the Pods cooperate with one another or with an external service.&lt;br&gt;
For instance, a Pod might pull up to N items in a batch from a work queue. Each Pod autonomously determines whether all peers are finished and whether the entire Job is complete.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Use Cases&lt;br&gt;
A simple use case for a Job is to perform system operations tasks. For example, when setting up a cluster or service that requires repeated setup tasks, you can create a Job and reuse it to bring up the same service or perform similar tasks.&lt;/p&gt;

&lt;p&gt;Another use case is performing data backups or computation tasks.&lt;/p&gt;

&lt;p&gt;A more complex scenario involves a series of tasks that must be executed in order, where Jobs create and manage Pods until the specified number of completions is reached.&lt;/p&gt;

&lt;p&gt;Helm Charts also employ Jobs to run installation, setup, or test commands on clusters during service provisioning.&lt;/p&gt;

&lt;p&gt;Key Terminologies&lt;br&gt;
Kubernetes: An open-source system from Google for orchestrating containers, automating most operational tasks around containerized applications.&lt;/p&gt;

&lt;p&gt;Pods: The smallest deployable compute units in Kubernetes, which can contain one or more containers.&lt;/p&gt;

&lt;p&gt;Minikube: A local version of Kubernetes that helps you start and test your workloads locally.&lt;/p&gt;

&lt;p&gt;Steps to Set Up a Job&lt;br&gt;
Let’s consider an example of setting up a Job using the Docker BusyBox image that pings asktech.org.&lt;/p&gt;

&lt;p&gt;Step 1. Start your Minikube&lt;br&gt;
$ minikube start&lt;br&gt;
Step 2. Create the Job definition file in YAML format&lt;br&gt;
$ cat ping-job.yaml&lt;br&gt;
Job Definition file in YAML format&lt;/p&gt;
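
&lt;p&gt;A minimal sketch of what ping-job.yaml might contain, assuming the BusyBox image pings asktech.org as described (the exact command is an assumption):&lt;/p&gt;

&lt;p&gt;apiVersion: batch/v1&lt;br&gt;
kind: Job&lt;br&gt;
metadata:&lt;br&gt;
  name: ping&lt;br&gt;
spec:&lt;br&gt;
  template:&lt;br&gt;
    spec:&lt;br&gt;
      containers:&lt;br&gt;
      - name: ping&lt;br&gt;
        image: busybox&lt;br&gt;
        command: ['sh', '-c', 'ping -c 4 asktech.org']&lt;br&gt;
      restartPolicy: Never&lt;/p&gt;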

&lt;p&gt;Step 3. Submit the Job definition to Kubernetes&lt;br&gt;
$ kubectl apply -f ping-job.yaml&lt;br&gt;
You should see a message indicating that the Job has been created.&lt;/p&gt;

&lt;p&gt;Step 4. List all Jobs&lt;br&gt;
$ kubectl get jobs&lt;br&gt;
You can see the number of completions, duration, and age of the Job.&lt;/p&gt;

&lt;p&gt;Step 5. Get Job Details&lt;br&gt;
$ kubectl describe job ping&lt;br&gt;
This command displays detailed information about the Job.&lt;/p&gt;

&lt;p&gt;Step 6. Get the Pods Running for the Job&lt;br&gt;
$ kubectl get pods&lt;br&gt;
You can view the Pod name, container count, status, restart count, and age.&lt;/p&gt;

&lt;p&gt;Step 7. Check Container Logs&lt;br&gt;
$ kubectl logs &amp;lt;pod-name&amp;gt;&lt;br&gt;
You can check the logs of the running container to see the output of your Job.&lt;/p&gt;

&lt;p&gt;Step 8. Delete the Job&lt;br&gt;
To delete the Job and its associated Pods:&lt;/p&gt;

&lt;p&gt;$ kubectl delete job ping&lt;br&gt;
$ kubectl get jobs&lt;br&gt;
Once deleted, the Pods associated with the Job are also removed.&lt;/p&gt;

&lt;p&gt;✅ Final Note:&lt;br&gt;
Kubernetes Jobs are powerful tools for running finite, batch-oriented tasks. Whether you’re performing database migrations, running test suites, or doing scheduled backups, Jobs provide reliability and fault tolerance through their built-in restart and completion tracking mechanisms.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>azure</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
