<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: jacobcrawford</title>
    <description>The latest articles on Forem by jacobcrawford (@jacobcrawford).</description>
    <link>https://forem.com/jacobcrawford</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F595452%2F4b71cb51-6cf4-41d2-b7fa-8f2c7770e29f.png</url>
      <title>Forem: jacobcrawford</title>
      <link>https://forem.com/jacobcrawford</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jacobcrawford"/>
    <language>en</language>
    <item>
      <title>Hello Python through Docker and Kubernetes</title>
      <dc:creator>jacobcrawford</dc:creator>
      <pubDate>Fri, 17 Sep 2021 11:18:17 +0000</pubDate>
      <link>https://forem.com/itminds/hello-python-through-docker-and-kubernetes-379d</link>
      <guid>https://forem.com/itminds/hello-python-through-docker-and-kubernetes-379d</guid>
      <description>&lt;p&gt;Going from an application to a containerized application can have many benefits. Today we are taking the step further and looking at how we can go from a small application all the way to deploying it on Kubernetes.&lt;/p&gt;

&lt;p&gt;We are going to take a small application, build a container image around it, write Kubernetes manifest and deploy the whole thing on Kubernetes. This is a practical guide, so I encourage you to follow along with your own application in your favorite programming language. Here we will use Python.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application
&lt;/h2&gt;

&lt;p&gt;We start out with a small Hello World REST API written in Python with Flask.&lt;/p&gt;

&lt;p&gt;The following dependencies in Python are needed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;flask_restful&lt;/li&gt;
&lt;li&gt;flask&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;which can be installed with &lt;code&gt;pip3 install flask flask_restful&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Our application is a simple Python script, &lt;code&gt;hello-virtualization.py&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from flask import Flask
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)

class HelloWorld(Resource):
    def get(self):
        return "Hello Python!"

api.add_resource(HelloWorld, '/')

if __name__ == '__main__':
    app.run(host='0.0.0.0')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our app simply exposes an endpoint that returns "Hello Python!" and can be started with &lt;code&gt;python3 hello-virtualization.py&lt;/code&gt;. You should see the following in your terminal:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SWfmCKHJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7d2qublj3ztfbp2shiit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SWfmCKHJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7d2qublj3ztfbp2shiit.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that the server is listening on port 5000, and we can confirm this by sending a request to the address shown in the output, e.g. &lt;code&gt;curl 10.0.0.11:5000&lt;/code&gt;.&lt;/p&gt;
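&lt;p&gt;If you are following along on the same machine, the loopback address works too. Flask-RESTful serializes the returned string as JSON, so the interaction looks roughly like this (a sketch; your address may differ):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl http://localhost:5000/
"Hello Python!"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;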

&lt;p&gt;Great! We now have a very simple application that responds when we call it. Let's wrap this in a container image.&lt;/p&gt;
&lt;h2&gt;
  
  
  Docker
&lt;/h2&gt;

&lt;p&gt;First you need to have Docker installed on your machine. If you are on Windows or macOS, &lt;a href="https://www.docker.com/products/docker-desktop"&gt;Docker Desktop&lt;/a&gt; should do the trick. &lt;/p&gt;

&lt;p&gt;Next we will wrap the application in a docker image. To do this create a file called &lt;code&gt;Dockerfile&lt;/code&gt; in the same folder as your application. &lt;/p&gt;

&lt;p&gt;In the Dockerfile we will start by setting a base container image for our own container image. Since we need Python we will use &lt;code&gt;python:3.8-buster&lt;/code&gt; by typing: &lt;code&gt;FROM python:3.8-buster&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next we need to include the packages our application needs to run. We do this by writing &lt;code&gt;RUN pip3 install flask flask_restful&lt;/code&gt;: just as we fetched the packages for our local system, we now fetch them into the image.&lt;/p&gt;

&lt;p&gt;We then specify a working directory with &lt;code&gt;WORKDIR /app&lt;/code&gt;, which is where all subsequent commands will be executed. &lt;/p&gt;

&lt;p&gt;Copy the source code into the working directory: &lt;code&gt;COPY hello-virtualization.py /app&lt;/code&gt;, declare that the container listens on port 5000 with &lt;code&gt;EXPOSE 5000&lt;/code&gt;, and finally specify the command that is executed when the container starts: &lt;code&gt;CMD ["python3", "hello-virtualization.py"]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The final Dockerfile should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.8-buster

RUN pip3 install flask flask_restful

WORKDIR /app

COPY hello-virtualization.py /app

EXPOSE 5000

CMD ["python3", "hello-virtualization.py"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will then need to build the container image and give it the name &lt;code&gt;hello-virtualization&lt;/code&gt; for reference:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build . -t hello-virtualization&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This builds our container image. To run it, we publish the container's port so that it is reachable from the host:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -p 5000:5000 hello-virtualization&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The command gives an output very similar to when we executed the script directly with Python. Validate that the application works by curling it from the host: &lt;code&gt;curl localhost:5000&lt;/code&gt;. Without &lt;code&gt;-p&lt;/code&gt;, port 5000 would only be reachable on the container's own network interface, not from your machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes
&lt;/h2&gt;

&lt;p&gt;If you use Docker Desktop, Kubernetes can be enabled through the UI. &lt;/p&gt;

&lt;p&gt;If you do not use Docker Desktop, there are multiple alternatives for setting up a Kubernetes cluster for development like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://minikube.sigs.k8s.io/docs/start/"&gt;minikube&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://microk8s.io/"&gt;mikrok8s&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kind.sigs.k8s.io/"&gt;KiND&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this demo all of them will work. &lt;/p&gt;

&lt;p&gt;Kubernetes needs to fetch the container image that we just built from a container registry. To make a solution that works across all of these setups, we will push the container image to a remote container registry. The easiest way to do this is to create an account on &lt;a href="https://hub.docker.com/"&gt;Docker Hub&lt;/a&gt; and authenticate by typing &lt;code&gt;docker login&lt;/code&gt; in your terminal. When this is done we need to retag the container image so that the Docker CLI knows we want to push it to the container registry owned by the account we just created. &lt;/p&gt;

&lt;p&gt;To tag the container image:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker tag hello-virtualization:latest &amp;lt;DOCKER_HUB_USERNAME&amp;gt;/hello-virtualization:latest&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To push the image to Docker Hub:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker push &amp;lt;DOCKER_HUB_USERNAME&amp;gt;/hello-virtualization:latest&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now we are ready for some Kubernetes.&lt;/p&gt;

&lt;p&gt;Kubernetes resources are managed through manifest files written in YAML. The smallest unit in Kubernetes is called a Pod, which is a wrapper around one or more containers. When you want to deploy a Pod, you would often use another resource called a Deployment, which manages the rollout and replication of the Pod. &lt;/p&gt;

&lt;p&gt;Create a new file called &lt;code&gt;hello-virtualization-deployment.yaml&lt;/code&gt; to create a Deployment for our app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-virtualization-deployment
  labels:
    app: hello-virtualization
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-virtualization-label
  template:
    metadata:
      labels:
        app: hello-virtualization-label
    spec:
      containers:
      - name: hello-virtualization-container
        image: &amp;lt;DOCKER_HUB_USERNAME&amp;gt;/hello-virtualization:latest
        ports:
        - containerPort: 5000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few things to notice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;replicas: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells the Deployment that we want 1 instance of our container running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  selector:
    matchLabels:
      app: hello-virtualization-label
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tells the Deployment that it manages the Pods carrying the label &lt;code&gt;app: hello-virtualization-label&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  template:
    metadata:
      labels:
        app: hello-virtualization-label
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is where we define the Pod template and set the label that binds the template to the Deployment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;template:
    ...
    spec:
      containers:
      - name: hello-virtualization-container
        image: &amp;lt;DOCKER_HUB_USERNAME&amp;gt;/hello-virtualization:latest
        ports:
        - containerPort: 5000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we configure the template to use the container image we built and declare that the container listens on port 5000 when created.&lt;/p&gt;

&lt;p&gt;To deploy this on Kubernetes we use the tool &lt;code&gt;kubectl&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f hello-virtualization-deployment.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You should be able to see the Pod being created by typing:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get pods&lt;/code&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LrBh5UXi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bybqsgu9kpliuaqdxdik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LrBh5UXi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bybqsgu9kpliuaqdxdik.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes needs to pull the image and start the container in the Pod, but after a few seconds the Pod should have a Running status. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0-MFe971--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1pzku1j5z21zwcc53t5l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0-MFe971--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1pzku1j5z21zwcc53t5l.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Pod has an IP address within the Kubernetes cluster that we can use to test that the application works. To get the IP address, type &lt;code&gt;kubectl get pods -o wide&lt;/code&gt;: &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5a5xz91i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bgtvfsb0n716eliq4qj3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5a5xz91i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bgtvfsb0n716eliq4qj3.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since this IP address is only reachable from within the Kubernetes cluster we need to jump into a Pod:&lt;br&gt;
&lt;code&gt;kubectl run my-shell --rm -i --tty --image curlimages/curl -- sh&lt;/code&gt;&lt;br&gt;
Now we can curl the IP address and see that the application works.&lt;/p&gt;

&lt;p&gt;This is not a stable solution, for several reasons. First, the Pod's IP is short-lived: the Pod running our application will not keep the same IP every time it starts. Try killing the Pod with &lt;code&gt;kubectl delete pod &amp;lt;POD_NAME&amp;gt;&lt;/code&gt;; the Pod's name is listed when you type &lt;code&gt;kubectl get pods&lt;/code&gt;.&lt;br&gt;
If you type &lt;code&gt;kubectl get pods -o wide&lt;/code&gt; after deleting the Pod, you will see a Pod running with a similar name and a new IP address. This happens because we told Kubernetes that we want 1 replica of the Pod, so Kubernetes makes sure that there is always 1 Pod alive with our application. If our application crashes, Kubernetes simply deploys a new one! The IP will not be the same, though, so we need another way to contact our application. &lt;/p&gt;
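&lt;p&gt;You can watch this self-healing happen live. A sketch (the Pod name is whatever &lt;code&gt;kubectl get pods&lt;/code&gt; listed for you):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# in one terminal, watch the Pods
kubectl get pods -w

# in another terminal, delete the Pod
kubectl delete pod &amp;lt;POD_NAME&amp;gt;

# the watch shows the old Pod terminating and a replacement
# with a similar name being created by the Deployment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;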

&lt;p&gt;Introducing Kubernetes Services:&lt;/p&gt;

&lt;p&gt;Create a file &lt;code&gt;hello-virtualization-service.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: hello-virtualization-service
  labels:
    app: hello-virtualization-label
spec: 
  ports:
  - port: 5000
    targetPort: 5000
    protocol: TCP
  selector:
    app: hello-virtualization-label
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few things to notice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec: 
  ports:
  - port: 5000
    targetPort: 5000
    protocol: TCP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tells our service that when we contact it on port 5000 it should forward the communication to port 5000 on the target using TCP.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec: 
  ...
  selector:
    app: hello-virtualization-label
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Informs the service that the target is actually the Pods with the label &lt;code&gt;app: hello-virtualization-label&lt;/code&gt;, just like we did in the Deployment. Labels and label selectors are how we bind resources together in Kubernetes.&lt;/p&gt;

&lt;p&gt;We deploy the service just as we did the Deployment: &lt;code&gt;kubectl apply -f hello-virtualization-service.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To see that the service was deployed, type &lt;code&gt;kubectl get svc&lt;/code&gt;:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7WUGAcdy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mkrk6krvi6gkl8uckhc6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7WUGAcdy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mkrk6krvi6gkl8uckhc6.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We then test that the service works by trying to contact our application through it.&lt;/p&gt;

&lt;p&gt;Once again jump into a pod on the cluster:&lt;br&gt;
&lt;code&gt;kubectl run my-shell --rm -i --tty --image curlimages/curl -- sh&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This time we will not use the Pod's IP, but the name of the service to contact our application: &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q4gBb_z3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p7ayop2lj9occumvjs1f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q4gBb_z3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p7ayop2lj9occumvjs1f.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Inside the Kubernetes cluster you can always use the names of services to communicate between Pods. &lt;/p&gt;
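&lt;p&gt;Cluster DNS resolves the Service name, so from inside the &lt;code&gt;my-shell&lt;/code&gt; Pod both of these should work (assuming everything was deployed in the &lt;code&gt;default&lt;/code&gt; namespace):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://hello-virtualization-service:5000/
curl http://hello-virtualization-service.default.svc.cluster.local:5000/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;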

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>virtualization</category>
    </item>
    <item>
      <title>High availability Kubernetes cluster on bare metal - part 2</title>
      <dc:creator>jacobcrawford</dc:creator>
      <pubDate>Fri, 26 Mar 2021 11:07:58 +0000</pubDate>
      <link>https://forem.com/itminds/high-availability-kubernetes-cluster-on-bare-metal-part-2-46j9</link>
      <guid>https://forem.com/itminds/high-availability-kubernetes-cluster-on-bare-metal-part-2-46j9</guid>
      <description>&lt;p&gt;Last week we covered the theory of high availability in a bare-metal Kubernetes cluster, which means that this week is where the magic happens.&lt;/p&gt;

&lt;p&gt;First of all, there are a few dependencies that you need to have installed to initialize a Kubernetes cluster. Since this is not a guide on how to set up Kubernetes, I will assume that you have already done this before, and if not you can use the same guide as I used when installing Kubernetes for the first time: &lt;a href="https://computingforgeeks.com/how-to-setup-3-node-kubernetes-cluster-on-ubuntu-18-04-with-weave-net-cni/"&gt;guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you did not follow the guide but have already installed Kubernetes and Docker (or your favorite container runtime), you will also have installed a key Kubernetes tool, &lt;strong&gt;kubeadm&lt;/strong&gt;, which is what we will use to initialize the cluster. First, though, we need to deal with the problems of high availability that we discussed last week.&lt;/p&gt;

&lt;h3&gt;
  
  
  The stable control plane IP
&lt;/h3&gt;

&lt;p&gt;As mentioned, we will use a self-hosted solution where we set up a stable IP with &lt;strong&gt;HAProxy&lt;/strong&gt; and &lt;strong&gt;Keepalived&lt;/strong&gt; as pods inside the Kubernetes cluster. To achieve this, we will need to configure a few files for each master node:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A keepalived configuration.&lt;/li&gt;
&lt;li&gt;A keepalived health check script.&lt;/li&gt;
&lt;li&gt;A manifest file for the keepalived static pod.&lt;/li&gt;
&lt;li&gt;An HAProxy configuration file.&lt;/li&gt;
&lt;li&gt;A manifest file for the HAProxy static pod.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Keepalived:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 3
  weight -2
  fall 10
  rise 2
}

vrrp_instance VI_1 {
    state ${STATE}
    interface ${INTERFACE}
    virtual_router_id ${ROUTER_ID}
    priority ${PRIORITY}
    authentication {
        auth_type PASS
        auth_pass ${AUTH_PASS}
    }
    virtual_ipaddress {
        ${APISERVER_VIP}
    }
    track_script {
        check_apiserver
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The configuration contains Bash-style placeholders that we need to fill out manually or through scripting:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;STATE&lt;/code&gt; Will be MASTER for the node initializing the cluster because it will also be the first one to host the virtual IP address of the control plane. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;INTERFACE&lt;/code&gt; Is the network interface of the network where the nodes will communicate. For Ethernet connections, this is often &lt;code&gt;eth0&lt;/code&gt;, and can be found with the command &lt;code&gt;ifconfig&lt;/code&gt; on most Linux operating systems. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ROUTER_ID&lt;/code&gt; Needs to be the same for all the hosts. Often set to &lt;code&gt;51&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PRIORITY&lt;/code&gt; A unique number that decides which node should host the virtual IP of the control plane in case the first MASTER node goes down. Often set to 100 for the node initializing the cluster, and then decreasing values for the rest. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AUTH_PASS&lt;/code&gt; should be the same for all nodes. Often set to &lt;code&gt;42&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;APISERVER_VIP&lt;/code&gt; The virtual IP for the control plane. This IP does not belong to any machine beforehand; Keepalived creates and manages it. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For the health check script we have the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh

errorExit() {
    echo "*** $*" 1&amp;gt;&amp;amp;2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP}; then
    curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We see the &lt;code&gt;APISERVER_VIP&lt;/code&gt; placeholder again, which is just the same as before. If some variables are repeated I will not repeat the explanation, which means that the only new variable is:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;APISERVER_DEST_PORT&lt;/code&gt;, which is the frontend port on the virtual IP for the API server. This can be any unused port, e.g. 4200.&lt;/p&gt;

&lt;p&gt;Last, the manifest file for Keepalived:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: keepalived
  namespace: kube-system
spec:
  containers:
  - image: osixia/keepalived:1.3.5-1
    name: keepalived
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_BROADCAST
        - NET_RAW
    volumeMounts:
    - mountPath: /usr/local/etc/keepalived/keepalived.conf
      name: config
    - mountPath: /etc/keepalived/check_apiserver.sh
      name: check
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/keepalived/keepalived.conf
    name: config
  - hostPath:
      path: /etc/keepalived/check_apiserver.sh
    name: check
status: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a pod that uses the two configuration files.&lt;/p&gt;

&lt;h4&gt;
  
  
  HAProxy
&lt;/h4&gt;

&lt;p&gt;We have one configuration file for the HAProxy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s

#---------------------------------------------------------------------
# apiserver frontend which proxys to the masters
#---------------------------------------------------------------------
frontend apiserver
    bind *:${APISERVER_DEST_PORT}
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
        server ${HOST1_ID} ${HOST1_ADDRESS}:${APISERVER_SRC_PORT} check
        server ${HOST2_ID} ${HOST2_ADDRESS}:${APISERVER_SRC_PORT} check
        server ${HOST3_ID} ${HOST3_ADDRESS}:${APISERVER_SRC_PORT} check
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we plug in the control plane IPs. Assuming a three-node cluster, we input a symbolic &lt;code&gt;HOST_ID&lt;/code&gt; (just a unique name) for each node, as well as the &lt;code&gt;HOST_ADDRESS&lt;/code&gt;. The &lt;code&gt;APISERVER_SRC_PORT&lt;/code&gt; is by default 6443, the port where the apiserver listens for traffic. &lt;/p&gt;

&lt;p&gt;The last file is the HAProxy manifest file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: haproxy
  namespace: kube-system
spec:
  containers:
  - image: haproxy:2.1.4
    name: haproxy
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: localhost
        path: /healthz
        port: ${APISERVER_DEST_PORT}
        scheme: HTTPS
    volumeMounts:
    - mountPath: /usr/local/etc/haproxy/haproxy.cfg
      name: haproxyconf
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/haproxy/haproxy.cfg
      type: FileOrCreate
    name: haproxyconf
status: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is all we actually need to configure to get a cluster up and running. Some of these values are constants that must be the same for all three master nodes, some must vary between nodes, some you simply look up, and for some you have to make a decision.&lt;/p&gt;

&lt;h3&gt;
  
  
  Values sanity check
&lt;/h3&gt;

&lt;p&gt;Let us just take a quick sanity check over the variables and what they are by default for each node.&lt;/p&gt;

&lt;h4&gt;
  
  
  Constants
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;ROUTER_ID=51&lt;/code&gt;&lt;br&gt;
&lt;code&gt;AUTH_PASS=42&lt;/code&gt;&lt;br&gt;
&lt;code&gt;APISERVER_SRC_PORT=6443&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Variables to input
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;STATE&lt;/code&gt;&lt;br&gt;
MASTER for the node that initializes the cluster, BACKUP for the two others. &lt;br&gt;
&lt;code&gt;PRIORITY&lt;/code&gt;&lt;br&gt;
100 for the node that initializes the cluster, 99 and 98 for the two others.&lt;/p&gt;
&lt;h4&gt;
  
  
  Variables to retrieve
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;APISERVER_VIP&lt;/code&gt;&lt;br&gt;
An IP within your network subnet. If your node has IP 192.168.1.140, this could be 192.168.1.50.&lt;br&gt;
&lt;code&gt;APISERVER_DEST_PORT&lt;/code&gt;&lt;br&gt;
A port of your choosing. It must not conflict with other service ports. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;INTERFACE&lt;/code&gt;&lt;br&gt;
The network interface. Use &lt;code&gt;ifconfig&lt;/code&gt;  to find it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;HOSTX_ID&lt;/code&gt;&lt;br&gt;
Any unique name for each of the 3 master nodes.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;HOSTX_ADDRESS&lt;/code&gt;&lt;br&gt;
The IP addresses of your machines, which can also be found with &lt;code&gt;ifconfig&lt;/code&gt; on each machine.&lt;/p&gt;
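&lt;p&gt;One way to render the configuration files is to keep them as templates and substitute the placeholders with &lt;code&gt;envsubst&lt;/code&gt; from the gettext package. A sketch with illustrative values (the template filenames are made up; adapt them to your layout):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export STATE=MASTER PRIORITY=100 ROUTER_ID=51 AUTH_PASS=42
export INTERFACE=eth0 APISERVER_VIP=192.168.1.50
export APISERVER_DEST_PORT=4200 APISERVER_SRC_PORT=6443
export HOST1_ID=master1 HOST1_ADDRESS=192.168.1.140
export HOST2_ID=master2 HOST2_ADDRESS=192.168.1.141
export HOST3_ID=master3 HOST3_ADDRESS=192.168.1.142

envsubst &amp;lt; keepalived.conf.template &amp;gt; keepalived.conf
envsubst &amp;lt; check_apiserver.sh.template &amp;gt; check_apiserver.sh
envsubst &amp;lt; haproxy.cfg.template &amp;gt; haproxy.cfg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;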
&lt;h4&gt;
  
  
  Files
&lt;/h4&gt;

&lt;p&gt;Now that the files are configured, they should be put in the right destination so that &lt;code&gt;kubeadm&lt;/code&gt; can find them when the cluster initializes. &lt;/p&gt;

&lt;p&gt;The absolute file paths are:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/etc/keepalived/check_apiserver.sh
/etc/keepalived/keepalived.conf
/etc/haproxy/haproxy.cfg

/etc/kubernetes/manifests/keepalived.yaml
/etc/kubernetes/manifests/haproxy.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Putting manifest files into &lt;code&gt;/etc/kubernetes/manifests/&lt;/code&gt; is what does the magic here. Everything in this folder will be applied when the cluster initializes. Even the control plane pods that are generated by &lt;code&gt;kubeadm&lt;/code&gt; will be put in here before the cluster initializes. &lt;/p&gt;
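&lt;p&gt;A sketch of putting the files in place on each master node (the file names follow the list above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -p /etc/keepalived /etc/haproxy /etc/kubernetes/manifests
sudo cp keepalived.conf /etc/keepalived/
sudo cp check_apiserver.sh /etc/keepalived/
sudo chmod +x /etc/keepalived/check_apiserver.sh
sudo cp haproxy.cfg /etc/haproxy/
sudo cp keepalived.yaml haproxy.yaml /etc/kubernetes/manifests/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;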

&lt;h3&gt;
  
  
  Initializing the cluster
&lt;/h3&gt;

&lt;p&gt;When the files are in place, initializing the cluster is as simple as running the &lt;code&gt;kubeadm init&lt;/code&gt; command with a few extra pieces of information.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubeadm init --control-plane-endpoint APISERVER_VIP:APISERVER_DEST_PORT --upload-certs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will do the trick. The extra arguments tell the cluster that the control plane should not be contacted on the actual node's IP, but on the virtual IP address. When the other nodes join, this is what makes the cluster highly available: if the node currently hosting the virtual IP goes down, the virtual IP simply jumps to another available master node.&lt;/p&gt;

&lt;p&gt;Last, join the other two nodes to the cluster with the join command output by &lt;code&gt;kubeadm init&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;If this even piqued your interest a little bit, you are in for a treat. The whole manual process is being eliminated in an open-source project &lt;a href="https://github.com/distributed-technologies/mukube-configurator"&gt;right here&lt;/a&gt;. It is still a work in progress, but feel free to drop in and join the discussion.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>distributedsystems</category>
      <category>architecture</category>
    </item>
    <item>
      <title>High availability Kubernetes cluster on bare metal </title>
      <dc:creator>jacobcrawford</dc:creator>
      <pubDate>Fri, 19 Mar 2021 12:33:33 +0000</pubDate>
      <link>https://forem.com/itminds/high-availability-kubernetes-cluster-on-bare-metal-aip</link>
      <guid>https://forem.com/itminds/high-availability-kubernetes-cluster-on-bare-metal-aip</guid>
      <description>&lt;p&gt;&lt;em&gt;This blog post is the first in a series concerning deploying a high available Kubernetes cluster on bare metal.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes is often used in the comfort of cloud providers, where we can spin up multiple master nodes without even caring about what goes on behind the scenes. In this blog post, we will step away from the comforts of cloud-provided architecture and into the dangerous unknown environment of a bare-metal infrastructure. &lt;/p&gt;

&lt;p&gt;Using Kubernetes is great and comfortable because you don't have to worry about failing services, the discomforts of scaling, downtime, version upgrades, and so much more. Still, as always when working with distributed technologies, you should count on machine failures, and this is where the term &lt;strong&gt;high availability&lt;/strong&gt; comes up. &lt;/p&gt;

&lt;p&gt;When using Kubernetes in a playground environment, we spin up a single master node and run our containers. In a production environment, this is not optimal for a number of reasons, but mostly because it is a single point of failure. Hence, we want a Kubernetes cluster to have multiple master nodes to deal with this problem. This is where it gets tricky on bare metal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Inside a master node
&lt;/h2&gt;

&lt;p&gt;Taking a step back, we need to understand what goes on in a Kubernetes master node. Normally containers only run on worker nodes and Kubernetes handles deployments, replications, pod communications, etc. If we look into the belly of a master node it contains pods for orchestration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;etcd&lt;/strong&gt; A key-value store for cluster data, most importantly the &lt;em&gt;desired state&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;kube-scheduler&lt;/strong&gt; A component that watches for newly created pods and decides which node they should run on&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;kube-controller-manager&lt;/strong&gt; Runs the controllers behind much of Kubernetes' functionality, such as reacting to nodes going down, pod creation, etc. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;kube-apiserver&lt;/strong&gt; The front end that handles communication with all the other components &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These four pods constitute what is commonly known as the &lt;em&gt;control plane&lt;/em&gt; of a Kubernetes cluster. As stated, all communication goes through the kube-apiserver. When we execute &lt;code&gt;kubectl&lt;/code&gt; commands, what happens behind the scenes is that we send HTTP requests to the kube-apiserver.&lt;/p&gt;
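&lt;p&gt;You can observe both points on any running cluster: the control-plane pods live in the &lt;code&gt;kube-system&lt;/code&gt; namespace, and raising &lt;code&gt;kubectl&lt;/code&gt;'s verbosity reveals the HTTP requests it sends to the kube-apiserver.&lt;/p&gt;

```shell
# The control-plane pods (etcd, kube-scheduler,
# kube-controller-manager, kube-apiserver) show up like any other pod
kubectl get pods -n kube-system

# At verbosity level 8, kubectl logs every HTTP request and response,
# showing that it is really just talking to the kube-apiserver
kubectl get pods --v=8
```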

&lt;p&gt;The kube-apiserver is also where our problem arises when introducing multiple master nodes. When there are multiple master nodes, which kube-apiserver should I contact? If I just pick my favorite and it dies, what happens?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qd69zn6wy95nu1aks43.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qd69zn6wy95nu1aks43.png" alt="Kubernetes cluster with 1 master node"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The control plane
&lt;/h2&gt;

&lt;p&gt;The Kubernetes control plane is an abstraction, and users should not have to worry about contacting different kube-apiservers and dealing with the fact that their favorite kube-apiserver might disappear and another one might spin up. This is why Kubernetes will not even let you join multiple master nodes to the same cluster without handling this problem first. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxh3b172mvh70wxjbnsk.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxh3b172mvh70wxjbnsk.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Users of the cluster should only know one IP address to contact, and this IP address should be stable. &lt;/p&gt;

&lt;p&gt;A cloud provider will simply hand you a load balancer with a stable IP address and you are good to go, but this is a manual process in a bare metal setup.  &lt;/p&gt;

&lt;p&gt;So we need to set up a stable IP for the Kubernetes control plane, a job that can be done in a few ways. &lt;/p&gt;

&lt;p&gt;We set up a virtual IP for the control plane by installing HAProxy and Keepalived. In short, Keepalived uses the VRRP protocol: the machine currently holding the virtual IP broadcasts, via gratuitous ARP, that the virtual IP should be translated to its physical MAC address, so anyone contacting the IP address we set up for the control plane is redirected to that machine. If that machine stops responding, Keepalived moves the virtual IP to another machine. HAProxy then load-balances the incoming traffic across the kube-apiservers on the master nodes. Together they make the virtual IP stable, so it can be used as the IP address of the control plane.&lt;/p&gt;
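&lt;p&gt;As a minimal sketch of what this looks like, the configurations below show one master node's side of the setup. The IP addresses, interface name, ports and priorities are placeholders for your own network:&lt;/p&gt;

```
# /etc/keepalived/keepalived.conf -- holds the virtual IP via VRRP
vrrp_instance CONTROL_PLANE {
    state MASTER            # BACKUP on the other master nodes
    interface eth0          # placeholder network interface
    virtual_router_id 51
    priority 100            # lower priority on the backup nodes
    virtual_ipaddress {
        192.168.1.100       # the stable control-plane IP
    }
}

# /etc/haproxy/haproxy.cfg -- load-balances to the kube-apiservers
frontend control-plane
    mode tcp
    bind *:8443
    default_backend apiservers

backend apiservers
    mode tcp
    balance roundrobin
    server master1 192.168.1.101:6443 check
    server master2 192.168.1.102:6443 check
    server master3 192.168.1.103:6443 check
```

HAProxy binds a different port (here 8443) than the kube-apiserver's 6443 because, in this sketch, both run on the same machines; TCP mode simply passes the apiserver's TLS traffic through untouched.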

&lt;p&gt;This approach is great and simple, but depending on the implementation we might get into trouble. If we install the services alongside Kubernetes directly on the machine, what happens if HAProxy or Keepalived fails? The whole master node will then be considered down, and because we need to go in and restart the service manually, we lose the orchestration benefits of Kubernetes. Well, then let us install them as pods inside Kubernetes: if one of the services fails, Kubernetes will just bring it back up again.&lt;/p&gt;

&lt;p&gt;Sadly this introduces a &lt;em&gt;chicken and egg&lt;/em&gt; situation:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up a stable IP with HAProxy and Keepalived to initiate a highly available Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;Set up a Kubernetes cluster so that you can host HAProxy and Keepalived in pods.&lt;/li&gt;
&lt;li&gt;Cry&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Static pods to the rescue
&lt;/h2&gt;

&lt;p&gt;Fortunately, I lied when I said that all communication to the cluster goes through the kube-apiserver. The pods in the control plane cannot go through the normal deployment process of contacting the kube-apiserver, because the kube-apiserver is not deployed yet. They are what is known as &lt;em&gt;static pods&lt;/em&gt; and get deployed simply by putting their YAML files in the right folder; on Linux this is /etc/kubernetes/manifests/ by default. All YAML files in this folder will get deployed with the control plane when we initialize our cluster. This solves our chicken-and-egg problem.&lt;/p&gt;
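&lt;p&gt;So to run Keepalived as a static pod, we drop a manifest like the sketch below into that folder before initializing the cluster. The container image and mounted config path here are assumptions; adjust them to whichever Keepalived image and configuration you actually use:&lt;/p&gt;

```yaml
# /etc/kubernetes/manifests/keepalived.yaml
# The kubelet starts this pod directly, before the kube-apiserver exists.
apiVersion: v1
kind: Pod
metadata:
  name: keepalived
  namespace: kube-system
spec:
  hostNetwork: true     # VRRP must run on the node's real interface
  containers:
  - name: keepalived
    image: osixia/keepalived:2.0.20   # assumed image; pick your own
    securityContext:
      capabilities:
        add: ["NET_ADMIN", "NET_BROADCAST", "NET_RAW"]
    volumeMounts:
    - name: config
      mountPath: /usr/local/etc/keepalived/keepalived.conf
  volumes:
  - name: config
    hostPath:
      path: /etc/keepalived/keepalived.conf
```

An equivalent static pod manifest for HAProxy sits next to it in the same folder, so the kubelet brings both services up together with the control plane.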

&lt;h3&gt;
  
  
  Stay tuned
&lt;/h3&gt;

&lt;p&gt;Well, now we have gone through the theory needed to deploy a highly available Kubernetes cluster on bare metal. This means that we can spin up a cluster on a few of our old laptops or Raspberry Pis. &lt;/p&gt;

&lt;p&gt;If this piqued your interest, follow along next week when I show how easy it is to actually do this in practice.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>architecture</category>
      <category>distributedsystems</category>
    </item>
  </channel>
</rss>
