In the last part, we built our Kubernetes playground. We now have a local cluster running and the kubectl
command-line tool ready to issue commands. The stage is set, the lights are dimmed, and it's time for the main performance.
This is the moment we go from theory to practice. We will deploy our first application, see it run, and witness the "magic" of Kubernetes in action. This is the "Hello, World!" of the container orchestration world.
## Your Scepter: kubectl

`kubectl` is your primary tool for interacting with the Kubernetes "kingdom." Think of it as your royal scepter; you use it to issue commands to the cluster's control plane. The control plane then works to make your commands a reality.

The basic structure of most `kubectl` commands is:

```bash
kubectl <command> <resource-type> [resource-name]
```

For example, `kubectl get pods` or `kubectl delete deployment my-app`. Let's put it to use.
## Deploying Our First Application
We'll start by deploying Nginx, a very popular and lightweight web server. It's a perfect candidate for a first application because it's a single, self-contained container that's ready to run.
Open your terminal and run the following command:

```bash
kubectl create deployment hello-nginx --image=nginx
```
Let's break down what you just commanded the cluster to do:

- `kubectl create deployment`: You told Kubernetes you want to create a **Deployment** object. As we learned in Part 2, this is the "blueprint" for our application that manages its Pods.
- `hello-nginx`: This is the name you've given to your Deployment.
- `--image=nginx`: This is the most important part. You specified that the containers in this deployment should be created from the `nginx` container image. Kubernetes will pull this image from Docker Hub, a public container registry.
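The imperative command above is great for experiments, but under the hood it just generates a Deployment object. As a rough sketch, a declarative manifest equivalent to that one-liner looks something like this (the label and container name are assumptions based on common conventions; your cluster will fill in further defaults):

```yaml
# hello-nginx.yaml -- a minimal Deployment manifest roughly equivalent
# to `kubectl create deployment hello-nginx --image=nginx`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-nginx
  labels:
    app: hello-nginx
spec:
  replicas: 1                 # desired number of Pods
  selector:
    matchLabels:
      app: hello-nginx        # which Pods this Deployment manages
  template:
    metadata:
      labels:
        app: hello-nginx      # must match the selector above
    spec:
      containers:
        - name: nginx
          image: nginx        # pulled from Docker Hub by default
```

You could apply this file with `kubectl apply -f hello-nginx.yaml`; storing manifests like this in version control is how most teams manage Kubernetes in practice.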
The cluster has received your command and is now working to achieve the desired state.
## Verifying the Deployment
Did it work? Let's ask Kubernetes what's happening.
First, let's see if the Deployment itself was created successfully.

```bash
kubectl get deployments
```

You should see an output similar to this:

```
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
hello-nginx   1/1     1            1           15s
```
This tells us that the `hello-nginx` deployment exists. The `READY` column shows `1/1`, which means our desired number of Pods (1) is running and ready.
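If you want more detail than that summary table, `kubectl describe` prints the full state of an object, including recent events. A quick sketch (the exact output will vary from cluster to cluster):

```bash
# Show the full state of the Deployment, including its rollout
# strategy and a log of recent events (scaling, Pod creation, etc.)
kubectl describe deployment hello-nginx

# The same one-line summary, with extra columns such as the
# container image in use
kubectl get deployments -o wide
```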
Now, let's look at the Pod that the Deployment created for us.

```bash
kubectl get pods
```

The output will look something like this:

```
NAME                           READY   STATUS    RESTARTS   AGE
hello-nginx-55649fd788-z2d2q   1/1     Running   0          45s
```
Here's what this means:

- `NAME`: Kubernetes generated a unique name for the Pod by taking the Deployment's name (`hello-nginx`) and adding a random hash (`55649fd788-z2d2q`). This ensures every Pod has a unique identity.
- `READY`: `1/1` indicates that one container inside the Pod is running and ready.
- `STATUS`: `Running` is the state we want to see. The Pod is healthy and its container is active.
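Two more commands are worth knowing when inspecting a running Pod. Substitute your own Pod's name; the hash below is just the example from the output above:

```bash
# Print the container's stdout/stderr -- for Nginx, that means its
# access and error logs
kubectl logs hello-nginx-55649fd788-z2d2q

# Show the Pod's full state: the node it runs on, its IP, the
# container image, and a timeline of events (scheduling, image
# pull, container start)
kubectl describe pod hello-nginx-55649fd788-z2d2q
```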
Congratulations! You have a web server running inside a container, which is running inside a Pod, which is managed by a Deployment, all inside your Kubernetes cluster.
## Witnessing the Magic: Self-Healing
This is where Kubernetes starts to show its true power. Our Deployment declared a desired state: "there should always be one Pod running the Nginx image." Kubernetes will work tirelessly to enforce this state.
Let's simulate a disaster. We are going to manually delete the Pod and see what happens.
First, get the name of your Pod from the `kubectl get pods` command. Then, use that name in the following command (your Pod name will be different!):

```bash
# Replace the name with the name of YOUR pod
kubectl delete pod hello-nginx-55649fd788-z2d2q
```

You'll see a confirmation: `pod "hello-nginx-55649fd788-z2d2q" deleted`.
Did we just kill our application? Quickly, run the `get pods` command again, maybe even a few times.

```bash
kubectl get pods
```

You will see something fascinating:

```
NAME                           READY   STATUS        RESTARTS   AGE
hello-nginx-55649fd788-z2d2q   0/1     Terminating   0          80s
hello-nginx-55649fd788-abc12   1/1     Running       0          5s
```
The original Pod is `Terminating`, but a brand new Pod with a different name has appeared and is already `Running`!
This is self-healing in action. The Deployment's controller noticed that the number of Pods (zero) didn't match the desired number (one), so it immediately created a new one to fix the discrepancy. This is a fundamental feature that provides resilience to your applications.
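If you want to watch the replacement happen live, run the delete in one terminal while a watch runs in another. You can also inspect the controller doing the work, the ReplicaSet that the Deployment created (names here assume the example above):

```bash
# Stream Pod changes as they happen: you'll see the old Pod go
# Terminating and its replacement appear within seconds
kubectl get pods --watch

# The ReplicaSet is the controller that actually replaces Pods on
# the Deployment's behalf; DESIRED and CURRENT should both read 1
kubectl get replicasets
```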
## The Missing Piece
Our Nginx server is running. It's resilient. But there's a problem: how do we access it? How can we open our web browser and see the Nginx welcome page?
By default, Pods are only accessible from inside the cluster via their internal IP address. We, on the outside, have no path to communicate with them. Our application is running in an isolated network.
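You can see that internal IP address for yourself with the wide output format. The address shown is only reachable from inside the cluster's network, and your value will differ:

```bash
# -o wide adds the Pod's cluster-internal IP and the node it was
# scheduled onto to the standard columns
kubectl get pods -o wide
```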
To solve this, we need to create a stable network endpoint that exposes our application. We need a "street sign" for our Pod, and in Kubernetes, that's called a Service.
## What's Next
We've successfully deployed our first application and witnessed the power of declarative state management and self-healing. But an application you can't access isn't very useful.
In the next part, we will solve this exact problem. We will dive into Kubernetes Services, learn how they provide stable network addresses, and finally expose our `hello-nginx` application so we can access it from our browser.