<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Md Khurshid </title>
    <description>The latest articles on Forem by Md Khurshid  (@alikhere).</description>
    <link>https://forem.com/alikhere</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3032792%2F4982aeed-a495-46ee-b2aa-9695757dbf67.png</url>
      <title>Forem: Md Khurshid </title>
      <link>https://forem.com/alikhere</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/alikhere"/>
    <language>en</language>
    <item>
      <title>Kubernetes 102: Setting Up Your First Cluster and Core Concepts 🚀</title>
      <dc:creator>Md Khurshid </dc:creator>
      <pubDate>Wed, 17 Sep 2025 19:11:49 +0000</pubDate>
      <link>https://forem.com/alikhere/kubernetes-102-setting-up-your-first-cluster-and-core-concepts-52j5</link>
      <guid>https://forem.com/alikhere/kubernetes-102-setting-up-your-first-cluster-and-core-concepts-52j5</guid>
      <description>&lt;p&gt;In the previous post &lt;a href="https://dev.to/alikhere/kubernetes-101-understanding-the-basics-features-and-architecture-3d56"&gt;Kubernetes 101&lt;/a&gt;, we learned what Kubernetes is, its features, and how it works behind the scenes.&lt;/p&gt;

&lt;p&gt;Now it’s time to get hands-on! In this guide, we’ll:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Install a lightweight Kubernetes cluster (using K3s)&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;2. Explore the basic concepts of Kubernetes (nodes, pods, deployments, etc.)&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;3. Learn how to interact with Kubernetes using kubectl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s get started.&lt;/p&gt;
&lt;h2&gt;
  
  
  ⚙️ Installing Kubernetes (K3s)
&lt;/h2&gt;

&lt;p&gt;There are multiple ways to install Kubernetes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minikube – runs Kubernetes in a local VM or container&lt;/li&gt;
&lt;li&gt;Kind – runs Kubernetes using Docker containers&lt;/li&gt;
&lt;li&gt;MicroK8s – Canonical’s lightweight K8s for Linux&lt;/li&gt;
&lt;li&gt;K3s – an ultra-lightweight distribution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 For this tutorial, we’ll use K3s because it’s super lightweight, easy to install, and comes with everything you need (including kubectl).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install K3s&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run this command to install K3s:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -sfL https://get.k3s.io | sh -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will download and install the latest version of K3s, then start it as a system service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Configure kubectl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, copy the kubeconfig file so that kubectl can talk to your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir -p ~/.kube
$ sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
$ sudo chown $USER:$USER ~/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tell your shell to use this config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ export KUBECONFIG=~/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 Add this line to ~/.bashrc or ~/.zshrc to make it permanent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Verify the Cluster&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME       STATUS   ROLES                  AGE   VERSION
myhost     Ready    control-plane,master   2m    v1.31.0+k3s1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🎉 Congrats! Your Kubernetes cluster is up and running!&lt;/p&gt;

&lt;h2&gt;
  
  
  🔑 Kubernetes Basic Terms and Concepts
&lt;/h2&gt;

&lt;p&gt;Before we deploy apps, let’s understand the building blocks of Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Nodes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="noopener noreferrer"&gt;Nodes&lt;/a&gt; are the machines (computers or VMs) that make up your Kubernetes cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They actually run the containers for your apps.&lt;/li&gt;
&lt;li&gt;Kubernetes keeps track of every node’s health and status.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Namespaces&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3varzcyt410dfde84j1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3varzcyt410dfde84j1.png" alt=" " width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="noopener noreferrer"&gt;Namespaces&lt;/a&gt; are like separate rooms inside the cluster.&lt;br&gt;
They help organize and isolate resources, so names don’t clash.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Two Pods in the same namespace cannot have the same name.&lt;/li&gt;
&lt;li&gt;But two Pods with the same name can exist in different namespaces.&lt;/li&gt;
&lt;li&gt;Useful for teams, projects, or different environments (like dev, test, prod).&lt;/li&gt;
&lt;/ul&gt;
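&lt;p&gt;As a quick sketch, a namespace can be declared with a tiny manifest (the name &lt;code&gt;dev&lt;/code&gt; here is just an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Resources created with the &lt;code&gt;-n dev&lt;/code&gt; flag will then live inside that namespace.&lt;/p&gt;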

&lt;p&gt;&lt;strong&gt;3. Pods&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="noopener noreferrer"&gt;Pods&lt;/a&gt; are the smallest unit in Kubernetes.&lt;br&gt;
They are like a wrapper around one or more containers that must run together on the same node.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Often, one Pod = one container.&lt;/li&gt;
&lt;li&gt;But a Pod can have multiple containers that share storage and network.&lt;/li&gt;
&lt;li&gt;Special containers (init or ephemeral) can be added for setup or debugging.&lt;/li&gt;
&lt;/ul&gt;
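&lt;p&gt;To illustrate the multi-container case, here’s a minimal sketch of a Pod whose two containers share a volume (the names and images are purely illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}        # scratch space shared by both containers
  containers:
    - name: web
      image: nginx:latest
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-reader
      image: busybox:latest
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Both containers are scheduled together on the same node and see the same files under the shared volume.&lt;/p&gt;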

&lt;p&gt;&lt;strong&gt;4. ReplicaSets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="noopener noreferrer"&gt;ReplicaSets&lt;/a&gt; make sure the right number of Pod copies (replicas) are always running.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If a Pod crashes or a Node fails, the ReplicaSet creates a new Pod automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Deployments&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauptxtcgafamo282gizs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauptxtcgafamo282gizs.png" alt=" " width="682" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noopener noreferrer"&gt;Deployment&lt;/a&gt; is a higher-level controller on top of ReplicaSets.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You declare how many Pods you want and what version of the app to run.&lt;/li&gt;
&lt;li&gt;Kubernetes then updates or rolls back automatically.&lt;/li&gt;
&lt;li&gt;You can pause, scale, or roll back easily.&lt;/li&gt;
&lt;/ul&gt;
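&lt;p&gt;Here’s a rough sketch of what a Deployment manifest looks like, declaring three replicas of an nginx container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Changing the image tag and re-applying the file triggers a rolling update; the previous ReplicaSet is kept around so you can roll back.&lt;/p&gt;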

&lt;p&gt;&lt;strong&gt;6. Services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noopener noreferrer"&gt;Services&lt;/a&gt; expose Pods to the network so other apps or users can reach them.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They provide a stable IP or DNS name even if Pods come and go.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noopener noreferrer"&gt;Ingress&lt;/a&gt; works with Services to set up HTTP/HTTPS routes and &lt;a href="https://kubernetes.io/docs/concepts/services-networking/" rel="noopener noreferrer"&gt;load balancing&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;You can also add TLS certificates for HTTPS.&lt;/li&gt;
&lt;/ul&gt;
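&lt;p&gt;A minimal Service manifest might look like this (it assumes your Pods carry the label &lt;code&gt;app: nginx&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx        # routes traffic to Pods with this label
  ports:
    - port: 80
      targetPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Without a &lt;code&gt;type&lt;/code&gt; field this creates a ClusterIP service, reachable only from inside the cluster.&lt;/p&gt;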

&lt;p&gt;&lt;strong&gt;7. Jobs&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="noopener noreferrer"&gt;Jobs&lt;/a&gt; run one-time or batch tasks.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Job creates Pods and waits until they finish.&lt;/li&gt;
&lt;li&gt;If a Pod fails, it retries until the task completes.&lt;/li&gt;
&lt;li&gt;CronJobs are like Jobs with a schedule (e.g., run every night at 1 a.m.).&lt;/li&gt;
&lt;/ul&gt;
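&lt;p&gt;For example, the “run every night at 1 a.m.” case could be sketched as a CronJob like this (the name and command are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 1 * * *"        # every day at 01:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: busybox:latest
              command: ["sh", "-c", "echo generating report"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;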

&lt;p&gt;&lt;strong&gt;8. Volumes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="noopener noreferrer"&gt;Volumes&lt;/a&gt; are storage that lives outside a Pod’s life.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They let Pods store data that doesn’t disappear when a Pod restarts.&lt;/li&gt;
&lt;li&gt;Good for databases or file servers.&lt;/li&gt;
&lt;li&gt;Kubernetes works with many storage types (cloud disks, local disks, etc.).&lt;/li&gt;
&lt;/ul&gt;
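&lt;p&gt;A common way to request persistent storage is a PersistentVolumeClaim, which a Pod can then mount as a volume (the size and name here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce        # mountable by a single node at a time
  resources:
    requests:
      storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;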

&lt;p&gt;&lt;strong&gt;9. Secrets and ConfigMaps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noopener noreferrer"&gt;Secrets&lt;/a&gt; store sensitive data like passwords, API keys, or certificates.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="noopener noreferrer"&gt;ConfigMaps&lt;/a&gt; store normal configuration like app settings.&lt;/li&gt;
&lt;li&gt;Both can be given to Pods as environment variables or as files mounted in a volume.&lt;/li&gt;
&lt;/ul&gt;
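&lt;p&gt;As a sketch, here’s a ConfigMap and a Pod that loads it as environment variables (names and values are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox:latest
      envFrom:
        - configMapRef:
            name: app-config        # every key becomes an env variable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;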

&lt;p&gt;&lt;strong&gt;10. DaemonSets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0nks3e92fsjak0n7jo51.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0nks3e92fsjak0n7jo51.png" alt=" " width="621" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="noopener noreferrer"&gt;DaemonSet&lt;/a&gt; makes sure one Pod runs on every Node in the cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Useful for components that must run on every machine, such as logging agents and monitoring tools.&lt;/li&gt;
&lt;li&gt;When a new Node joins, Kubernetes automatically runs the DaemonSet Pod on it.&lt;/li&gt;
&lt;/ul&gt;
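&lt;p&gt;A DaemonSet manifest looks much like a Deployment, just without a replica count (the log-agent image below is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
        - name: agent
          image: fluent/fluent-bit:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;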

&lt;p&gt;&lt;strong&gt;11. Network Policies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="noopener noreferrer"&gt;Network Policies&lt;/a&gt; are rules for traffic between Pods.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They control who can talk to whom inside the cluster.&lt;/li&gt;
&lt;li&gt;Two types of rules:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Ingress:&lt;/strong&gt; control incoming traffic.&lt;br&gt;
&lt;strong&gt;Egress:&lt;/strong&gt; control outgoing traffic.&lt;/p&gt;

&lt;p&gt;If a policy denies traffic, Pods cannot connect.&lt;/p&gt;
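&lt;p&gt;For example, this sketch of a policy only lets Pods labeled &lt;code&gt;app: frontend&lt;/code&gt; reach Pods labeled &lt;code&gt;app: backend&lt;/code&gt; (the labels are illustrative, and enforcement requires a network plugin that supports policies):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
spec:
  podSelector:
    matchLabels:
      app: backend        # the Pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;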
&lt;h2&gt;
  
  
  Using Kubectl to interact with Kubernetes
&lt;/h2&gt;

&lt;p&gt;Now that you’re familiar with the basics, you can start adding workloads to your cluster with kubectl. Here’s a quick reference for some key commands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;List Pods&lt;/strong&gt;&lt;br&gt;
This displays the Pods in your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods
No resources found in default namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Specify a namespace with the -n or --namespace flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -n demo
No resources found in demo namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, get Pods from all your namespaces by specifying --all-namespaces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   coredns-b96499967-4xdpg                   1/1     Running     0          114m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;...&lt;br&gt;
This includes Kubernetes system components.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a Pod&lt;/strong&gt;&lt;br&gt;
Create a Pod with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl run nginx --image nginx:latest
pod/nginx created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This starts a Pod called nginx that will run the nginx:latest container image.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a Deployment&lt;/strong&gt;&lt;br&gt;
Creating a Deployment lets you scale multiple replicas of a container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create deployment nginx --image nginx:latest --replicas 3
deployment.apps/nginx created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll see three Pods are created, each running the nginx:latest image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7597c656c9-4qs55   1/1     Running   0          51s
nginx-7597c656c9-gdjl9   1/1     Running   0          51s
nginx-7597c656c9-7sxrc   1/1     Running   0          51s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Scale a Deployment&lt;/strong&gt;&lt;br&gt;
Now use this command to increase the replica count:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl scale deployment nginx --replicas 5
deployment.apps/nginx scaled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kubernetes has created two extra Pods to provide additional capacity:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7597c656c9-4qs55   1/1     Running   0          2m26s
nginx-7597c656c9-gdjl9   1/1     Running   0          2m26s
nginx-7597c656c9-7sxrc   1/1     Running   0          2m26s
nginx-7597c656c9-kwm6q   1/1     Running   0          2s
nginx-7597c656c9-nwf2s   1/1     Running   0          2s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expose a Service&lt;/strong&gt;&lt;br&gt;
Now let’s make this NGINX server accessible.&lt;/p&gt;

&lt;p&gt;Run the following command to create a service that’s exposed on a port of the Node running the Pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl expose deployment/nginx --port 80 --type NodePort
service/nginx exposed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Discover the port that’s been assigned by running this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.43.0.1      &amp;lt;none&amp;gt;        443/TCP        121m
nginx        NodePort    10.43.149.39   &amp;lt;none&amp;gt;        80:30226/TCP   3s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The port is 30226. Visiting &amp;lt;node-ip&amp;gt;:30226 in your browser will show the default NGINX landing page.&lt;/p&gt;

&lt;p&gt;You can use localhost as &amp;lt;node-ip&amp;gt; if you’ve been following along with the single-node K3s cluster created in this tutorial.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
$ kubectl get nodes -o wide
NAME       STATUS   ROLES                  AGE    VERSION        INTERNAL-IP
ubuntu22   Ready    control-plane,master   124m   v1.24.4+k3s1   192.168.122.210
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Using port forwarding&lt;/strong&gt;&lt;br&gt;
You can access a service without binding it to a Node port by using Kubectl’s integrated &lt;a href="https://kubernetes.io/docs/reference/kubectl/generated/kubectl_port-forward/" rel="noopener noreferrer"&gt;port-forwarding&lt;/a&gt; functionality. Delete your first service and create a new one without the --type flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl delete service nginx
service/nginx deleted

$ kubectl expose deployment/nginx --port 80
service/nginx exposed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a ClusterIP service that can be accessed on an internal IP, within the cluster.&lt;/p&gt;

&lt;p&gt;Retrieve the service’s details by running this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get services
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx        ClusterIP   10.100.191.238   &amp;lt;none&amp;gt;        80/TCP    2s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The service can be accessed inside the cluster at 10.100.191.238:80.&lt;/p&gt;

&lt;p&gt;You can reach this address from your local machine with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl port-forward service/nginx 8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Visiting localhost:8080 in your browser will display the NGINX landing page. Kubectl is redirecting traffic to the service inside your cluster. You can press Ctrl+C in your terminal to stop the port forwarding session when you’re done.&lt;/p&gt;

&lt;p&gt;Port forwarding works without services too. You can directly connect to a Pod in your deployment with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl port-forward deployment/nginx 8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Visiting localhost:8080 will again display the NGINX landing page, this time without going through a service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apply a YAML file&lt;/strong&gt;&lt;br&gt;
Finally, let’s see how to apply a declarative YAML file to your cluster. First, write a simple Kubernetes manifest for your Pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save this manifest to nginx.yaml and run kubectl apply to automatically create your Pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f nginx.yaml
pod/nginx created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can repeat the command after you modify the file to apply any changes to your cluster.&lt;/p&gt;

&lt;p&gt;Now you’re familiar with the basics of using Kubectl to interact with Kubernetes!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🎯 Wrapping Up&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes is the leading container orchestrator, and in this post we set up a lightweight K3s cluster, explored its core building blocks, and used kubectl to deploy, scale, and expose an application. You now have the foundation to start experimenting and building with Kubernetes. 🚀&lt;/p&gt;

&lt;p&gt;But this is just the beginning! In the upcoming blogs, we’ll dive into advanced Kubernetes topics — from real-world challenges and best practices to security, scaling, and hands-on deployments.&lt;/p&gt;

&lt;p&gt;🙌 Stay tuned for the next part of the series where we’ll go beyond the basics and make Kubernetes work for you.&lt;/p&gt;

&lt;p&gt;👉 Follow me &lt;a href="https://dev.to/alikhere"&gt;here&lt;/a&gt;, or connect with me on &lt;a href="https://www.linkedin.com/in/alikhurshidhere/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; and &lt;a href="https://x.com/alikhurshidhere" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; for updates, tips, and more developer content.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloudnative</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Kubernetes 101: Understanding the Basics, Features, and Architecture 🚀</title>
      <dc:creator>Md Khurshid </dc:creator>
      <pubDate>Mon, 01 Sep 2025 20:15:19 +0000</pubDate>
      <link>https://forem.com/alikhere/kubernetes-101-understanding-the-basics-features-and-architecture-3d56</link>
      <guid>https://forem.com/alikhere/kubernetes-101-understanding-the-basics-features-and-architecture-3d56</guid>
      <description>&lt;h2&gt;
  
  
  ☸️ What is Kubernetes?
&lt;/h2&gt;

&lt;p&gt;Kubernetes (often called &lt;a href="https://www.kubernetes.io/" rel="noopener noreferrer"&gt;K8s&lt;/a&gt;) is an open-source platform that helps you automate the deployment and management of containers. It was first created by Google and is now maintained by the &lt;a href="https://www.cncf.io/" rel="noopener noreferrer"&gt;Cloud Native Computing Foundation (CNCF).&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The reason Kubernetes has become so popular is because it solves many of the challenges that come with running containers in production. Instead of manually starting and managing containers, Kubernetes makes it possible to easily launch as many copies of your application as you need, spread them across multiple servers, and handle the networking so users can reliably access your services.&lt;/p&gt;

&lt;p&gt;In this beginner’s guide, we’ll explore:&lt;br&gt;
&lt;strong&gt;1. What Kubernetes is.&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;2. The main features it offers.&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;3. How it works behind the scenes.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Kubernetes Used For?
&lt;/h2&gt;

&lt;p&gt;Kubernetes is designed to &lt;strong&gt;manage and scale applications&lt;/strong&gt; running in containers. Containers are like small, isolated boxes that hold everything an app needs to run.&lt;/p&gt;

&lt;p&gt;Now imagine you have hundreds of these containers running across multiple servers — it’s tough to manage them manually. Kubernetes makes this easy by automating the process.&lt;/p&gt;

&lt;p&gt;Here’s what Kubernetes can do for you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Start new apps automatically&lt;/strong&gt; when needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Restart apps&lt;/strong&gt; if they crash.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distribute workloads&lt;/strong&gt; so no server gets overloaded.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scale apps up or down&lt;/strong&gt; depending on demand.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;In short:&lt;/strong&gt; Kubernetes is like a system administrator that keeps your apps healthy and running smoothly, without you doing everything manually.&lt;/p&gt;

&lt;h2&gt;
  
  
  🤔 Is Kubernetes Easy to Learn?
&lt;/h2&gt;

&lt;p&gt;Honestly… not at first 😅. Learning Kubernetes can be a bit challenging in the beginning. Why? Because it has many moving parts, and it’s built for running apps at scale.&lt;/p&gt;

&lt;p&gt;Most developers first learn Docker, which helps run one container at a time. But &lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; alone is low-level — you still have to manage containers manually. Kubernetes, on the other hand, provides &lt;strong&gt;high-level tools and abstractions&lt;/strong&gt; so you can describe your apps in configuration files, and Kubernetes takes care of the rest.&lt;/p&gt;

&lt;p&gt;So yes, the learning curve is steep at first. But once you understand the core concepts (like &lt;strong&gt;Pods, Nodes, and Deployments&lt;/strong&gt;), things start to click.&lt;/p&gt;

&lt;h2&gt;
  
  
  ✨ Kubernetes Features
&lt;/h2&gt;

&lt;p&gt;Kubernetes is packed with features that make running containers easier and more reliable. Let’s go through the key ones:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Automated rollouts, scaling, and rollbacks&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can define how many copies (replicas) of your app should run.&lt;/li&gt;
&lt;li&gt;Kubernetes automatically spreads them across servers.&lt;/li&gt;
&lt;li&gt;If a server goes down, Kubernetes reschedules your containers.&lt;/li&gt;
&lt;li&gt;You can scale apps up or down instantly, either manually or 
automatically (based on CPU, memory, or custom metrics).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Service discovery, load balancing, and ingress&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes handles internal communication between apps.&lt;/li&gt;
&lt;li&gt;It can also expose your apps to the outside world.&lt;/li&gt;
&lt;li&gt;Load balancers ensure traffic is spread across all available instances.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Supports both stateless and stateful apps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initially, Kubernetes was focused on apps that don’t store data (stateless).&lt;/li&gt;
&lt;li&gt;Now, it also supports apps that need persistent data (stateful), like databases.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Storage management&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes connects your containers to storage — whether it’s cloud storage, a network drive, or your local filesystem.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Declarative state management&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You don’t have to manually tell Kubernetes what steps to take.&lt;/li&gt;
&lt;li&gt;Instead, you write a YAML file describing the state you want (e.g., “I want 3 replicas of this app”).&lt;/li&gt;
&lt;li&gt;Kubernetes then works to make the cluster match that state automatically.&lt;/li&gt;
&lt;/ul&gt;
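&lt;p&gt;For instance, the “I want 3 replicas of this app” request above could be written roughly like this (the names and image are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Apply the file, and Kubernetes’ controllers work continuously to keep three Pods of this app running.&lt;/p&gt;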

&lt;p&gt;&lt;strong&gt;6. Works across environments&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run Kubernetes locally on your laptop (using Minikube, Kind, or K3s).&lt;/li&gt;
&lt;li&gt;Use it in the cloud (AWS, Google Cloud, Azure all provide managed Kubernetes).&lt;/li&gt;
&lt;li&gt;Or even run it at the edge.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;7. Highly extensible&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes has many built-in features, but you can extend it.&lt;/li&gt;
&lt;li&gt;You can create custom objects, controllers, or operators to support your unique use cases.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With all these features, Kubernetes is suitable for almost any situation where you want to run containers reliably.&lt;/p&gt;

&lt;h2&gt;
  
  
  🛠 How Does Kubernetes Work?
&lt;/h2&gt;

&lt;p&gt;Kubernetes often feels complex because it has many different parts working together. But once you understand the basics and how these pieces fit, getting started becomes much easier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pjagpdngtfbkoon5thi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pjagpdngtfbkoon5thi.png" alt="Diagram showing Kubernetes cluster with nodes and pods" width="765" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Kubernetes, the whole setup is called a &lt;a href="https://kubernetes.io/docs/concepts/architecture/" rel="noopener noreferrer"&gt;cluster&lt;/a&gt;. A cluster is made up of &lt;a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="noopener noreferrer"&gt;nodes&lt;/a&gt;, which are just machines that run your containers. These machines can be physical servers or virtual machines.&lt;/p&gt;

&lt;p&gt;Every cluster has two main parts: the &lt;a href="https://kubernetes.io/docs/concepts/overview/components/" rel="noopener noreferrer"&gt;control plane&lt;/a&gt; and the nodes. The control plane is like the brain of the system — it manages the cluster, schedules new containers on the nodes, and provides the API server you use to interact with Kubernetes. To increase reliability, a cluster can also run with multiple control plane instances so it keeps working even if one fails.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let’s look at the important components inside Kubernetes:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;kube-apiserver&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;This is the part of the control plane that runs the API server. It’s the only way to interact with a running Kubernetes cluster. You can issue commands to the API server using the &lt;a href="https://kubernetes.io/docs/reference/kubectl/" rel="noopener noreferrer"&gt;Kubectl CLI&lt;/a&gt; or an HTTP client.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;kube-controller-manager&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The &lt;a href="https://kubernetes.io/docs/concepts/architecture/controller/" rel="noopener noreferrer"&gt;controller&lt;/a&gt; manager starts and runs Kubernetes’ built-in controllers. A controller is essentially an event loop that applies actions after changes in your cluster. They create, scale, and delete objects in response to events such as an API request or increased load.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;kube-scheduler&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The scheduler assigns new Pods (containers) onto the nodes in your cluster. It establishes which nodes can fulfill the Pod’s requirements, then selects the most optimal placement to maximize performance and reliability.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;kubelet&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Kubelet is a worker process that runs on each of your nodes. It maintains communication with the Kubernetes control plane to receive its instructions. Kubelet is responsible for pulling container images and starting containers in response to scheduling requests.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;kube-proxy&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;kube-proxy is another component found on individual nodes. It configures the host’s networking system so traffic can reach the Services in your cluster.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;And then there’s &lt;code&gt;kubectl&lt;/code&gt; — the command-line tool you’ll use to interact with Kubernetes. With it, you can deploy apps, check logs, scale workloads, and much more.&lt;/p&gt;
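&lt;p&gt;As a quick illustration (assuming you already have a running cluster and a configured &lt;code&gt;kubectl&lt;/code&gt;; the nginx image and the &lt;code&gt;web&lt;/code&gt; name are just examples), a few everyday commands look like this:&lt;/p&gt;

```shell
# List the nodes that make up your cluster
kubectl get nodes

# See the control-plane components, which run as Pods on most distributions
kubectl get pods -n kube-system

# Deploy a sample app and scale it out
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3

# Inspect logs from a Pod belonging to that deployment
kubectl logs deployment/web
```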

&lt;p&gt;&lt;strong&gt;🎯 Wrapping Up&lt;/strong&gt;&lt;br&gt;
Kubernetes may seem complex, but once you understand the basics, it becomes an incredibly powerful tool for managing applications.&lt;/p&gt;

&lt;p&gt;This was just the start! In the next part of this series, we’ll go beyond theory and get hands-on with Kubernetes. We’ll dive into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installing and setting up Kubernetes step by step
&lt;/li&gt;
&lt;li&gt;Core concepts like &lt;strong&gt;Node, Pod, Namespace, ReplicaSet,&lt;/strong&gt; and &lt;strong&gt;Deployment&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Using &lt;code&gt;kubectl&lt;/code&gt; to interact with your cluster
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 Follow me &lt;a href="https://dev.to/alikhere"&gt;here&lt;/a&gt;, or connect with me on &lt;a href="https://www.linkedin.com/in/alikhurshidhere/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; and &lt;a href="https://x.com/alikhurshidhere" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; for updates, tips, and more developer content.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloudnative</category>
      <category>programming</category>
    </item>
    <item>
      <title>Breaking Into Open Source: My Transformative Journey with Microcks and LFX Mentorship</title>
      <dc:creator>Md Khurshid </dc:creator>
      <pubDate>Tue, 03 Jun 2025 18:28:47 +0000</pubDate>
      <link>https://forem.com/alikhere/my-lfx-mentorship-journey-with-cncf-microcks-deploying-microcks-on-cloud-kubernetes-platforms-47pf</link>
      <guid>https://forem.com/alikhere/my-lfx-mentorship-journey-with-cncf-microcks-deploying-microcks-on-cloud-kubernetes-platforms-47pf</guid>
      <description>&lt;p&gt;I’ve always been curious and passionate about open source. The concept fascinated me: people from any part of the world contributing to real-world projects, gaining hands-on, industry-level experience, and being mentored by the project maintainers themselves.&lt;/p&gt;

&lt;p&gt;Thanks to &lt;a href="https://www.linkedin.com/in/kunal-kushwaha/" rel="noopener noreferrer"&gt;Kunal Kushwaha&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/kirat-li/" rel="noopener noreferrer"&gt;Harkirat Singh&lt;/a&gt;, from whom I learned a lot about open source, my interest only grew stronger. Kunal, a former LFX mentee, shared his journey and experiences—which truly resonated with me. I knew I wanted to be part of something like that.&lt;br&gt;
Earlier, my academic workload didn’t leave me much time, but after my semester exams, I finally got the chance to explore the &lt;a href="https://lfx.linuxfoundation.org/tools/mentorship/" rel="noopener noreferrer"&gt;LFX Mentorship portal&lt;/a&gt;—where one project instantly caught my eye: CNCF - Microcks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I Chose &lt;a href="https://github.com/microcks" rel="noopener noreferrer"&gt;Microcks&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What really made me choose Microcks was the friendly and supportive community—especially how active and helpful the mentors were on Discord.&lt;/p&gt;

&lt;p&gt;Also, the project “Building Community-Driven Documentation for Deploying Microcks in Cloud Production Environments” felt like a great match for me. I already had some experience with AWS, GCP, and Docker, but I had never worked with Kubernetes before. This project seemed like the perfect chance to learn something new and in-demand—cloud-native deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Involved (Even Before Selection!)
&lt;/h2&gt;

&lt;p&gt;I only discovered the project a week before the deadline—February 18, 2025—but I didn’t let that stop me. In just a few days, I:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Joined the Microcks Discord server&lt;/li&gt;
&lt;li&gt;Got my first PR merged (a guide for installing hub.microcks.io locally)&lt;/li&gt;
&lt;li&gt;Actively helped other contributors navigate the project setup — also got a PR merged in the CNCF Kubestellar project&lt;/li&gt;
&lt;li&gt;Shared everything I learned on Twitter to help grow awareness&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This shows that you don’t always have to contribute to the codebase to make an impact. You can get involved through documentation, stay active on Discord or Slack, and support the community in meaningful ways. Finally, I submitted my cover letters and resume for the projects I applied to.&lt;/p&gt;

&lt;h2&gt;
  
  
  🎉 The Selection Day Surprise
&lt;/h2&gt;

&lt;p&gt;I still remember staying up until 4 AM fixing broken links and polishing the docs. The next morning, I woke up and checked my email — and my heart skipped a beat.&lt;br&gt;
“Congratulations! You were accepted to CNCF - Microcks: Community-Driven Docs for Deploying Microcks in Cloud Production.”&lt;br&gt;
I had to read it twice to believe it. I was so happy and excited! I shared the news with my friends and couldn’t wait to get started.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvm7k7vtixso4l4k6xuzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvm7k7vtixso4l4k6xuzv.png" alt=" " width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🗓️ Mentorship Kickoff
&lt;/h2&gt;

&lt;p&gt;The program officially started on March 3rd. During the onboarding, we had a CNCF-wide session hosted by &lt;a href="https://www.cncf.io/people/staff/?p=nate-waddington" rel="noopener noreferrer"&gt;Nate Waddington&lt;/a&gt;, Head of CNCF Mentorship &amp;amp; Documentation. He shared stories, set expectations, and made us all feel welcomed and excited.&lt;/p&gt;

&lt;p&gt;Our first Microcks mentee meeting with &lt;a href="https://github.com/yada" rel="noopener noreferrer"&gt;Yacine Kheddache&lt;/a&gt; and &lt;a href="https://github.com/lbroudoux" rel="noopener noreferrer"&gt;Laurent Broudoux&lt;/a&gt; outlined the mentorship structure, expectations, and goals. I was assigned Yacine as my mentor for weekly 1-on-1 calls. These calls were essential in keeping my work focused, solving blockers quickly, and receiving continuous feedback.&lt;/p&gt;

&lt;p&gt;We used a shared Google Doc to plan and track weekly goals and updates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3zkz4oxlw1wby0x4ggn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3zkz4oxlw1wby0x4ggn.png" alt=" " width="701" height="795"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Three-Month Milestone — What I Accomplished
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Month 1: Laying the Foundation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Organized documentation folders under installation/ to clearly separate guides for AWS, GCP, Azure, OVH, Oracle, and Scaleway.&lt;/li&gt;
&lt;li&gt;Deployed an external Keycloak on GKE, integrated with Cloud SQL (PostgreSQL) using secure IAM roles and VPC peering. This setup followed GCP production best practices.&lt;/li&gt;
&lt;li&gt;Integrated Google Firestore as a NoSQL backend, making the setup fully cloud-native and scalable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Month 2: Strengthening Deployments&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enabled Kafka support in Microcks using Strimzi Operator, which simplified asynchronous protocol testing.&lt;/li&gt;
&lt;li&gt;Implemented TLS certificates with cert-manager and Let's Encrypt, ensuring automatic HTTPS provisioning and enhanced security.&lt;/li&gt;
&lt;li&gt;Wrote a comprehensive Troubleshooting Guide that addressed common errors in external Keycloak and MongoDB deployments—this helped reduce user onboarding friction.&lt;/li&gt;
&lt;li&gt;Authored a Cloud-Agnostic Common Guidelines doc that outlined infrastructure patterns, IAM setups, database connections, and Helm configurations for different cloud providers.&lt;/li&gt;
&lt;li&gt;Presented my work to the Microcks community, gathered feedback, and welcomed suggestions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Month 3: Deep Dive into Azure (AKS)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployed Microcks on Azure Kubernetes Service (AKS).&lt;/li&gt;
&lt;li&gt;Set up an external Keycloak backed by Azure Database for PostgreSQL.&lt;/li&gt;
&lt;li&gt;Configured a managed MongoDB instance on AKS and connected it to Microcks.&lt;/li&gt;
&lt;li&gt;Integrated Azure Active Directory (AAD) for secure AKS access and authentication.&lt;/li&gt;
&lt;li&gt;Documented all the above in a detailed, step-by-step guide covering CLI commands, YAML manifests, and Helm chart values for real-world production setups.&lt;/li&gt;
&lt;/ul&gt;
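&lt;p&gt;The AKS setup above can be sketched with a few Azure CLI commands (resource-group and cluster names here are placeholders, not the ones used in the actual guide):&lt;/p&gt;

```shell
# Create a resource group and an AKS cluster with Azure AD integration
az group create --name microcks-rg --location eastus
az aks create --resource-group microcks-rg --name microcks-aks \
  --node-count 3 --enable-aad

# Fetch credentials so kubectl can talk to the new cluster
az aks get-credentials --resource-group microcks-rg --name microcks-aks
```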

&lt;h2&gt;
  
  
  Challenges Faced &amp;amp; How I Overcame Them
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;IAM Permissions on GCP&lt;/strong&gt;&lt;br&gt;
Setting up the right permissions for services like Cloud SQL, GKE, and Keycloak was tricky. I had to strike a balance between security and functionality, learning how to handle permission errors, scoped tokens, and workload identity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Helm Chart Customization&lt;/strong&gt;&lt;br&gt;
The default Helm charts are built for internal Keycloak and MongoDB. I needed to override multiple values to use external services, which taught me a lot about Helm templating and real-world deployment tweaks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stateful Workloads in Kubernetes&lt;/strong&gt;&lt;br&gt;
Deploying databases like MongoDB or PostgreSQL required understanding persistent storage, stateful sets, and backup strategies—especially on AKS. I spent time digging into best practices to manage these safely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning Kubernetes from Scratch&lt;/strong&gt;&lt;br&gt;
This was my first real dive into Kubernetes. Terms like pods, ingress, and config maps were confusing at first, but thanks to my mentor’s support and hands-on work, I picked it up step by step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqd1n96y3wvr8z5v5la9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqd1n96y3wvr8z5v5la9.png" alt=" " width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🙌 Wrapping Up
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;During this mentorship:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I got 11 PRs merged into the official Microcks documentation repo&lt;/li&gt;
&lt;li&gt;Promoted Microcks and my contributions on social media, especially Twitter and LinkedIn, sharing deployment tips, learnings, and threads&lt;/li&gt;
&lt;li&gt;Helped onboard new contributors by responding on Discord and pointing them to helpful docs and resources&lt;/li&gt;
&lt;li&gt;Actively encouraged others to explore the LFX program and join the open-source community&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3bmhej81s0r9fzehaof.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3bmhej81s0r9fzehaof.png" alt=" " width="800" height="443"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/microcks/community/pulls?q=is%3Apr+author%3Aalikhere" rel="noopener noreferrer"&gt;Merged PR's&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On June 12, during the final Microcks community meeting, we mentees shared our experiences. It was a beautiful moment to reflect on our growth and conclude the mentorship with pride.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jwpy1odqwdwrkmzp585.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jwpy1odqwdwrkmzp585.png" alt=" " width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  💬 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This was my first open source contribution—and first time being part of a global mentorship program. The experience was transformative.&lt;/p&gt;

&lt;p&gt;I'm deeply thankful to my mentors &lt;a href="https://github.com/yada" rel="noopener noreferrer"&gt;Yacine Kheddache&lt;/a&gt; and &lt;a href="https://github.com/lbroudoux" rel="noopener noreferrer"&gt;Laurent Broudoux&lt;/a&gt; for their support, technical guidance, and continuous encouragement. Without them, this journey wouldn’t have been the same.&lt;br&gt;
Although the program has officially ended, my journey with Microcks continues—and I look forward to contributing more in the future. &lt;/p&gt;

&lt;p&gt;If you’re thinking about applying for LFX—do it. You don’t have to be an expert. If you're passionate, willing to learn, and ready to contribute, the community will welcome you with open arms.&lt;br&gt;
You can check out all my contributions on the &lt;a href="https://github.com/orgs/microcks/repositories" rel="noopener noreferrer"&gt;Microcks repository&lt;/a&gt;, and I’m always happy to help anyone get started!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;💙 Thanks for reading!&lt;/strong&gt;&lt;br&gt;
If you're passionate about open source and cloud-native technologies, the next &lt;a href="https://lfx.linuxfoundation.org/tools/mentorship/" rel="noopener noreferrer"&gt;LFX mentorship&lt;/a&gt; batch is a great chance to gain hands-on experience, collaborate with communities, and grow professionally.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/alikhurshidhere/" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt; | &lt;a href="https://github.com/alikhere" rel="noopener noreferrer"&gt;Github&lt;/a&gt;&lt;br&gt;
LFX’25 Mentee at &lt;a href="https://microcks.io/" rel="noopener noreferrer"&gt;Microcks&lt;/a&gt;&lt;/p&gt;

</description>
      <category>lfx</category>
      <category>devops</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>My Second Month as an LFX Mentee: Advancing Microcks Deployments</title>
      <dc:creator>Md Khurshid </dc:creator>
      <pubDate>Sun, 11 May 2025 07:07:58 +0000</pubDate>
      <link>https://forem.com/alikhere/my-second-month-as-an-lfx-mentee-advancing-microcks-deployments-bp1</link>
      <guid>https://forem.com/alikhere/my-second-month-as-an-lfx-mentee-advancing-microcks-deployments-bp1</guid>
      <description>&lt;p&gt;Hello everyone! It’s been another productive month in the LFX Mentorship program with the &lt;a href="https://www.cncf.io/" rel="noopener noreferrer"&gt;CNCF&lt;/a&gt; &lt;a href="https://www.microcks.io/" rel="noopener noreferrer"&gt;Microcks&lt;/a&gt; project. This month, I focused on enhancing Microcks deployments on Google Kubernetes Engine (GKE) and improving documentation for cloud-specific deployment strategies. Additionally, I had the opportunity to present my contributions during the community meeting, where I encouraged others to contribute, provide feedback, and review the documentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Deploying Microcks with &lt;strong&gt;Asynchronous Options&lt;/strong&gt; on GKE
&lt;/h2&gt;

&lt;p&gt;I worked on improving the Microcks GKE deployment by adding support for asynchronous protocols and securing the Microcks endpoint with TLS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enabled &lt;strong&gt;Kafka-based async protocols&lt;/strong&gt; with Helm and installed Strimzi for Kafka management.&lt;/li&gt;
&lt;li&gt;Secured the endpoint using cert-manager and Let's Encrypt for automatic &lt;strong&gt;SSL certificate&lt;/strong&gt; provisioning.&lt;/li&gt;
&lt;/ul&gt;
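&lt;p&gt;The two steps above can be sketched roughly as follows (a hedged outline, not the exact commands from the guide — chart names follow the upstream Strimzi, Jetstack, and Microcks Helm repositories, and the &lt;code&gt;features.async.enabled&lt;/code&gt; flag is an assumption based on the Microcks chart values):&lt;/p&gt;

```shell
# Install the Strimzi operator to manage Kafka on the cluster
helm repo add strimzi https://strimzi.io/charts/
helm install strimzi strimzi/strimzi-kafka-operator -n strimzi --create-namespace

# Install cert-manager so Let's Encrypt certificates are provisioned automatically
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager -n cert-manager \
  --create-namespace --set installCRDs=true

# Enable asynchronous protocol support when installing Microcks
helm install microcks microcks/microcks -n microcks --create-namespace \
  --set features.async.enabled=true
```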

&lt;h2&gt;
  
  
  2. Troubleshooting Guide for Microcks &amp;amp; Keycloak
&lt;/h2&gt;

&lt;p&gt;Deploying complex systems like Microcks with external Keycloak and MongoDB can sometimes lead to roadblocks. To address this, I created a TROUBLESHOOTING.md file in the repository. This document provides a set of common issues and solutions that developers may encounter when deploying Microcks on GKE, especially when integrating with Keycloak for authentication and MongoDB for data storage.&lt;br&gt;
&lt;a href="https://github.com/microcks/community/blob/main/install/gcp/TROUBLESHOOTING.md" rel="noopener noreferrer"&gt;Troubleshooting Guide&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Creating Common Guidelines for Cloud Providers
&lt;/h2&gt;

&lt;p&gt;I developed a &lt;strong&gt;GUIDELINES.md&lt;/strong&gt; document that outlines the deployment process for Microcks across multiple cloud providers. The guidelines cover setting up infrastructure, deploying Keycloak, provisioning databases, and configuring Microcks with Helm.&lt;br&gt;
&lt;a href="https://github.com/microcks/community/blob/main/install/GUIDELINES.md" rel="noopener noreferrer"&gt;Common Guidelines&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🙌 Looking Ahead
&lt;/h2&gt;

&lt;p&gt;Next month, I plan to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy Microcks on Azure AKS and document the process.&lt;/li&gt;
&lt;li&gt;Add a Troubleshooting Guide for Azure deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Month two has been an exciting journey, contributing to Microcks with improvements in deployment, security, and cloud-specific guidelines. I’m excited to continue supporting the Microcks community and look forward to the next steps!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feel free to check out my contributions on the &lt;a href="https://github.com/microcks/community" rel="noopener noreferrer"&gt;Microcks community repository&lt;/a&gt;  -  feedback and contributions are always welcome!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;See you in the next update! 👋&lt;/p&gt;

</description>
      <category>lfx</category>
      <category>devops</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>My First Month as an LFX Mentee</title>
      <dc:creator>Md Khurshid </dc:creator>
      <pubDate>Wed, 09 Apr 2025 07:10:19 +0000</pubDate>
      <link>https://forem.com/alikhere/my-first-month-as-an-lfx-mentee-l6c</link>
      <guid>https://forem.com/alikhere/my-first-month-as-an-lfx-mentee-l6c</guid>
      <description>&lt;h2&gt;
  
  
  About the CNCF Microcks Project
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.microcks.io/" rel="noopener noreferrer"&gt;Microcks&lt;/a&gt; is a cloud-native tool for mocking and testing APIs (REST, SOAP, and more). As part of the LFX Mentorship program under &lt;a href="https://www.cncf.io/" rel="noopener noreferrer"&gt;CNCF&lt;/a&gt;, my project aims to build a centralized repository of real-world, production-grade deployment strategies for Microcks across various cloud platforms including AWS, GCP, Azure, OVH, Oracle and Scaleway. The goal is to help adopters confidently deploy Microcks in production environments by learning from shared experiences and expertise.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Microcks and Exploring Documentation
&lt;/h2&gt;

&lt;p&gt;In the first week, most of my time was spent exploring Microcks documentation and understanding the internal architecture of Microcks. This foundational phase was essential to gain a solid grasp of the project and identify key areas for improvement. Understanding how MongoDB, Keycloak, and Kafka work within the Microcks ecosystem allowed me to dive into deployment options and optimize the setup for cloud environments.&lt;br&gt;
&lt;a href="https://microcks.io/documentation/" rel="noopener noreferrer"&gt;Checkout Documentation&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  My Contributions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Organizing Cloud Deployments&lt;/strong&gt;&lt;br&gt;
One of my early contributions was creating a centralized folder structure under the installation/ directory of the documentation. This new structure categorizes deployment guides for each cloud provider, making it easy for users to find platform-specific instructions. Whether you're deploying on AWS, GCP, or Azure, everything is now neatly organized and easily accessible.&lt;br&gt;
&lt;a href="https://github.com/microcks/community/tree/main/install" rel="noopener noreferrer"&gt;View folder structure&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Deploying External Keycloak on GKE Using Google Cloud SQL&lt;/strong&gt;&lt;br&gt;
Microcks uses Keycloak for authentication, and I was tasked with setting up an external Keycloak instance on Google Kubernetes Engine (GKE). I integrated Cloud SQL (PostgreSQL) as the backend for Keycloak to provide a robust, scalable, production-ready authentication system. &lt;br&gt;
The process involved setting up GCP authentication, service accounts, IAM roles, and VPC peering for secure connectivity between GKE and Cloud SQL. I used Helm to deploy Keycloak and configured DNS via nip.io to expose it securely. I documented the entire process to make it easier for others to follow.&lt;br&gt;
&lt;a href="https://github.com/microcks/community/blob/main/install/gcp/keycloak-installation.md" rel="noopener noreferrer"&gt;External Keycloak on GKE guide&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Deploying Microcks on GKE with External Keycloak&lt;/strong&gt;&lt;br&gt;
Once Keycloak was deployed, the next step was to deploy Microcks on GKE. Instead of using MongoDB, I connected Microcks to Firestore, Google Cloud’s managed NoSQL database, to align with GCP-native services. I used Helm to deploy Microcks, integrated it with the external Keycloak for authentication, and configured it to use the external Firestore database (in place of the default MongoDB) for data storage. This setup ensures scalability and simplifies management, showcasing a production-grade configuration built on GCP services. I also documented this deployment to guide others in setting up similar environments.&lt;br&gt;
&lt;a href="https://github.com/microcks/community/tree/main/install/gcp" rel="noopener noreferrer"&gt;Microcks on GKE guide&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges Faced
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Permission Configuration:&lt;/strong&gt; One of the main hurdles was ensuring that the IAM user/service account had the correct permissions for deploying Microcks and related services on GKE. It was essential to grant the minimum required permissions for security and functionality, which required careful attention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overriding Helm Chart Values:&lt;/strong&gt; Another challenge was customizing the Helm chart to integrate external Keycloak and Cloud SQL instead of Microcks’ default MongoDB and internal Keycloak. This required modifying the chart values and ensuring everything worked smoothly with the GCP-native services.&lt;/p&gt;

&lt;h2&gt;
  
  
  🙌 Looking Ahead
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Deploy Microcks with asynchronous options on GKE&lt;/li&gt;
&lt;li&gt;Add a Troubleshooting Guide for Microcks &amp;amp; Keycloak&lt;/li&gt;
&lt;li&gt;Create a common GUIDELINES.md for all cloud providers&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It's been an exciting first month contributing to the Microcks community, and I'm eager to continue helping develop cloud deployment strategies. My goal is to empower users to confidently deploy Microcks in various cloud environments using best practices. Feel free to check out my contributions on the &lt;a href="https://github.com/microcks/community" rel="noopener noreferrer"&gt;Microcks community repository&lt;/a&gt; - feedback and contributions are always welcome!&lt;/p&gt;

&lt;p&gt;That's all for my first month as an LFX Mentee. See you in the next update! 👋&lt;/p&gt;

</description>
      <category>lfx</category>
      <category>devops</category>
      <category>programming</category>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
