<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Mats Brorsson</title>
    <description>The latest articles on Forem by Mats Brorsson (@matsbror).</description>
    <link>https://forem.com/matsbror</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F102899%2F2a03e52b-c8cf-4904-8690-3486c7b8a62a.jpeg</url>
      <title>Forem: Mats Brorsson</title>
      <link>https://forem.com/matsbror</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/matsbror"/>
    <language>en</language>
    <item>
      <title>Installing Kubernetes</title>
      <dc:creator>Mats Brorsson</dc:creator>
      <pubDate>Fri, 05 Mar 2021 08:48:35 +0000</pubDate>
      <link>https://forem.com/matsbror/installing-kubernetes-55lj</link>
      <guid>https://forem.com/matsbror/installing-kubernetes-55lj</guid>
      <description>&lt;p&gt;Originally posted at &lt;a href="https://www.embeinnovation.com/" rel="noopener noreferrer"&gt;EMBE Innovation&lt;/a&gt; where you can find other blog posts.&lt;/p&gt;

&lt;h1&gt;Yet another Kubernetes installation tutorial&lt;/h1&gt;

&lt;p&gt;I know: why yet another Kubernetes installation tutorial? For a research project, I need to get a Kubernetes cluster running on a set of virtual x86-64 nodes as well as on a set of Nvidia Jetson cards in the same cluster. I found no tutorial that covered this exact use case (strange, I would have thought it would be commonplace), so I am using this post as notes to myself, so that I can recreate, and possibly automate, the process in the future.&lt;/p&gt;

&lt;p&gt;I am using this &lt;a href="https://www.tutorialspoint.com/kubernetes/kubernetes_setup.htm" rel="noopener noreferrer"&gt;tutorial from Tutorials Point&lt;/a&gt; as inspiration, along with this &lt;a href="https://vitux.com/install-and-deploy-kubernetes-on-ubuntu/#:~:text=Kubernetes%20Installation%201%20Install%20Docker%20on%20both%20the,Repository%20on%20both%20the%20nodes%20More%20items...%20" rel="noopener noreferrer"&gt;tutorial from vitux&lt;/a&gt; and &lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="noopener noreferrer"&gt;the official kubeadm guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For the installation on the Jetson cards, &lt;a href="https://medium.com/jit-team/building-a-gpu-enabled-kubernets-cluster-for-machine-learning-with-nvidia-jetson-nano-7b67de74172a" rel="noopener noreferrer"&gt;this tutorial&lt;/a&gt; was crucial.&lt;/p&gt;

&lt;h2&gt;Preparing the nodes&lt;/h2&gt;

&lt;p&gt;My cluster consists of three x86-64 nodes running Ubuntu 20.04.1 LTS and three Nvidia Jetson Nano Developer Kit boards running Ubuntu 18.04.5 LTS. The x86-64 nodes are VMware virtual machines with two cores each, while each Jetson board has a quad-core ARM Cortex-A57 processor and a 128-core NVIDIA Maxwell GPU.&lt;/p&gt;

&lt;p&gt;Make sure each node is updated and upgraded with the latest patches.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt upgrade
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Setting up the Kubernetes nodes&lt;/h2&gt;

&lt;h3&gt;Set up Docker&lt;/h3&gt;

&lt;p&gt;This turned out to be more difficult than expected. I tried various sources, and in the end what worked was the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
sudo apt update
sudo apt install docker-ce=5:19.03.14~3-0~ubuntu-focal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that you need to make sure Docker uses &lt;code&gt;systemd&lt;/code&gt; as its cgroup driver. Create the file &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt; (if it does not exist) and put the following in it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
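
&lt;p&gt;A syntax error in &lt;code&gt;daemon.json&lt;/code&gt; prevents the Docker daemon from starting at all, so it is worth validating the JSON before restarting. A minimal check, shown here against a scratch copy (point it at &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt; on the real node):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;gt; /tmp/daemon.json &amp;lt;&amp;lt;'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
# json.tool exits non-zero on malformed JSON
python3 -m json.tool /tmp/daemon.json &amp;gt; /dev/null &amp;amp;&amp;amp; echo "valid JSON"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;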



&lt;p&gt;For the Nvidia Jetson Nano cards, the contents of &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt; should instead be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },
    "storage-driver": "overlay2"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the Docker service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo service docker restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
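
&lt;p&gt;Once Docker is back up, you can confirm that the &lt;code&gt;systemd&lt;/code&gt; cgroup driver took effect (the field name is per &lt;code&gt;docker info&lt;/code&gt;; output may vary slightly between Docker versions):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker info --format '{{ .CgroupDriver }}'
# should print: systemd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;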



&lt;h3&gt;Set up etcd&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;: this is only needed on the master (control-plane) node.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install etcd etcd-client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Set up Kubernetes&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt update
sudo apt install kubeadm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
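
&lt;p&gt;It is also a good idea to hold the Kubernetes packages at the installed version, so that a routine &lt;code&gt;apt upgrade&lt;/code&gt; does not move individual nodes to a newer release behind your back:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-mark hold kubelet kubeadm kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;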



&lt;p&gt;Check that kubeadm was correctly installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubeadm version
kubeadm version: &amp;amp;version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:25:59Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then turn off swapping:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo swapoff -a
sudo sed -i '/swap/ s/^\(.*\)$/#\1/g' /etc/fstab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: in my case it was important to use &lt;code&gt;/swap/&lt;/code&gt; as the pattern; I tried &lt;code&gt;/ swap /&lt;/code&gt; and it did not work.&lt;/p&gt;

&lt;p&gt;The first command above turns swapping off and the second one makes sure it is not turned on again at reboot.&lt;/p&gt;
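
&lt;p&gt;To see what the &lt;code&gt;sed&lt;/code&gt; expression does before touching the real &lt;code&gt;/etc/fstab&lt;/code&gt;, you can dry-run it on a scratch copy (the two sample lines here are made up):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;printf '%s\n' 'UUID=abcd / ext4 defaults 0 1' '/swapfile none swap sw 0 0' &amp;gt; /tmp/fstab
sed -i '/swap/ s/^\(.*\)$/#\1/g' /tmp/fstab
# only the line containing "swap" is now commented out
cat /tmp/fstab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;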

&lt;p&gt;Make sure each node has a unique hostname and that you remember which one is the master. In my case, I ran this on the Kubernetes master:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo hostnamectl set-hostname k8s1-master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and this on each worker node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo hostnamectl set-hostname k8s&amp;lt;n&amp;gt;-worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;where &lt;code&gt;&amp;lt;n&amp;gt;&lt;/code&gt; is replaced with a number (2, 3, etc.).&lt;/p&gt;

&lt;h2&gt;Start Kubernetes on the master node&lt;/h2&gt;

&lt;p&gt;Start Kubernetes with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo kubeadm init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything goes well, you should see the following lines at the end:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.xxx.yyy.105:6443 --token b0qc...lt \
    --discovery-token-ca-cert-hash sha256:b7ed95...d90b5b4f2b6f51814
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure you save the last command somewhere so that you can run it on the worker nodes. &lt;/p&gt;

&lt;p&gt;You should do as it says and set up the kubeconfig:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Download and apply the Calico networking manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
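
&lt;p&gt;The master reports &lt;code&gt;NotReady&lt;/code&gt; until a pod network is in place; once the Calico pods are up, it should flip to &lt;code&gt;Ready&lt;/code&gt;. You can watch the progress with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n kube-system
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;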



&lt;h2&gt;Have a worker node join the Kubernetes cluster&lt;/h2&gt;

&lt;p&gt;Use the &lt;code&gt;kubeadm&lt;/code&gt; command above to join a new worker to the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubeadm join 10.xxx.yyy.105:6443 --token b0qc...lt \
    --discovery-token-ca-cert-hash sha256:b7ed95...d90b5b4f2b6f51814
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
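
&lt;p&gt;The join token is only valid for 24 hours by default. If you add a worker later and the token has expired (or you did not save the command), you can generate a fresh one on the master:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubeadm token create --print-join-command
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;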



&lt;h1&gt;Make use of the cluster&lt;/h1&gt;

&lt;p&gt;This is what I ended up with:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hv0z5ldfmbg2z1grz8s.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hv0z5ldfmbg2z1grz8s.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here you can see the three Nvidia Jetson boards. They are actually mounted on RC-car chassis to be used in autonomous-vehicle courses, but I can use them as nodes in my Kubernetes cluster. The node marked &lt;code&gt;k8s1&lt;/code&gt; is the master node of this cluster. It and the other virtual machines are standard x86-64 nodes, while the Nvidia Nano cards, as mentioned, are of ARM64 architecture.&lt;/p&gt;

&lt;p&gt;This is the Kubernetes view of the cluster nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
360lab-nano0   Ready    &amp;lt;none&amp;gt;                 20d   v1.20.2
360lab-nano2   Ready    &amp;lt;none&amp;gt;                 20d   v1.20.2
360lab-nano4   Ready    &amp;lt;none&amp;gt;                 20d   0.5.0
k8s1-master    Ready    control-plane,master   20d   v1.20.2
k8s2-worker    Ready    &amp;lt;none&amp;gt;                 20d   0.5.0
k8s3-worker    Ready    &amp;lt;none&amp;gt;                 20d   v1.20.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;They all run Kubernetes v1.20.2 except &lt;code&gt;nano4&lt;/code&gt; and &lt;code&gt;k8s2&lt;/code&gt;, which instead run &lt;code&gt;krustlet&lt;/code&gt;, a replacement for &lt;code&gt;kubelet&lt;/code&gt; (the agent responsible for talking to the Kubernetes master and receiving instructions on what containers to execute) that can run WebAssembly/WASI modules instead of Docker containers. I will soon post about running WebAssembly in Kubernetes in various ways.&lt;/p&gt;

&lt;p&gt;Docker containers, by far the most common execution vehicle in Kubernetes, are architecture- and OS-dependent. On an x86-64 node running Linux, you can only run Linux containers built for x86-64. Kubernetes recognises this automatically, so when you deploy a manifest for running a container, only the matching nodes are eligible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/buildx/working-with-buildx/" rel="noopener noreferrer"&gt;Docker Buildx&lt;/a&gt; is, however, a docker cli plugin that comes to our rescue. In the best of worlds, you can build a multi-architecture docker container like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker buildx build &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/amd64,linux/arm64 &lt;span class="nt"&gt;-t&lt;/span&gt; matsbror/hello-arch:latest &lt;span class="nt"&gt;--push&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
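
&lt;p&gt;You can verify that the pushed image really contains one manifest per architecture (using the image name from above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker buildx imagetools inspect matsbror/hello-arch:latest
# lists the platforms, e.g. linux/amd64 and linux/arm64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;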



&lt;p&gt;In this case, it's a small Python Flask app which returns some information about its environment. The cross-platform build is pretty slow, since it fires up a &lt;a href="https://www.qemu.org/" rel="noopener noreferrer"&gt;QEMU emulator&lt;/a&gt; of the ARM architecture (assuming you are building on x86-64), but it works pretty painlessly, at least for interpreted languages like Python. For compiled languages like C++ and Rust it's considerably more complicated.&lt;/p&gt;

&lt;p&gt;The following manifest can be used to deploy this container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-arch
  labels:
    app: hello-arch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-arch
  template:
    metadata:
      labels:
        app: hello-arch
    spec:
      containers:
      - image: matsbror/hello-arch
        imagePullPolicy: Always
        name: hello
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      imagePullSecrets:
      - name: regcred
      nodeSelector:
        kubernetes.io/arch: arm64
        #kubernetes.io/arch=amd64
---
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-arch
spec:
  type: NodePort
  ports:
  - port: 5000
    nodePort: 30001
    targetPort: 5000
    protocol: TCP
  selector:
    app: hello-arch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It creates a deployment and a NodePort service that responds on port 5000. The &lt;code&gt;nodeSelector&lt;/code&gt; key defines which architecture the container can run on. First we deploy it with the &lt;code&gt;arm64&lt;/code&gt; specification.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; hello-service.yaml
deployment.apps/hello-arch created
service/hello-svc created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
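
&lt;p&gt;To see which node the pod actually landed on, ask for the wide output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -o wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;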



&lt;p&gt;After some time, the cluster has started the container on one of the arm64 nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl k8s1.uni.lux:30001
{
  "result": "Flask inside Docker!!",
  "system": [
    "Linux",
    "hello-arch-65d5b8f665-b8jdg",
    "4.9.140-tegra",
    "#1 SMP PREEMPT Tue Oct 27 21:02:37 PDT 2020",
    "aarch64"
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see it's running on an &lt;code&gt;aarch64&lt;/code&gt; (same as ARM64) architecture with the 4.9.140-tegra kernel.&lt;/p&gt;

&lt;p&gt;Let's tear the service and deployment down and start with the amd64 architecture specification instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl delete service/hello-svc deployment.apps/hello-arch
service "hello-svc" deleted
deployment.apps "hello-arch" deleted

# change arm64 in the manifest to amd64

$ kubectl apply -f hello-service.yaml
deployment.apps/hello-arch created
service/hello-svc created

$ curl k8s1.uni.lux:30001
{
  "result": "Flask inside Docker!!",
  "system": [
    "Linux",
    "hello-arch-b7fb4c8ff-blkg8",
    "5.4.0-65-generic",
    "#73-Ubuntu SMP Mon Jan 18 17:25:17 UTC 2021",
    "x86_64"
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the response indeed indicates that the same container runs on an x86_64 (same as AMD64) architecture. &lt;/p&gt;

&lt;p&gt;Next time, I will explain why this is probably not a good idea when you have multiple architectures and show how WebAssembly might be part of the solution. &lt;/p&gt;

&lt;p&gt;Follow me at &lt;a href="https://twitter.com/matsbrorsson" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/matsbrorsson/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>kubernetes</category>
      <category>heterogeneous</category>
    </item>
  </channel>
</rss>
