<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Teo Stocco</title>
    <description>The latest articles on Forem by Teo Stocco (@zifeo).</description>
    <link>https://forem.com/zifeo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1071069%2F52abbfc6-3fd0-43b6-ad75-bfc123cee943.jpeg</url>
      <title>Forem: Teo Stocco</title>
      <link>https://forem.com/zifeo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/zifeo"/>
    <language>en</language>
    <item>
      <title>Low-cost Kubernetes cluster on Infomaniak</title>
      <dc:creator>Teo Stocco</dc:creator>
      <pubDate>Tue, 04 Jul 2023 22:14:29 +0000</pubDate>
      <link>https://forem.com/zifeo/low-cost-kubernetes-cluster-on-infomaniak-2jkm</link>
      <guid>https://forem.com/zifeo/low-cost-kubernetes-cluster-on-infomaniak-2jkm</guid>
      <description>&lt;p&gt;&lt;a href="https://kubernetes.io"&gt;Kubernetes&lt;/a&gt; (k8s) is becoming the de facto standard for container orchestration. While it provides convenient abstractions and solves many pain points, it is not as accessible as running a local copy of Docker. This makes it harder to learn and get real hands-on experience without being part of a DevOps team or getting ruined quickly.&lt;/p&gt;

&lt;p&gt;When looking at popular Kubernetes providers, and without going into their complex pricing structures, the lowest price one can find for a managed Kubernetes cluster is roughly $55/month for 4 vCPUs and 15 GB of RAM (June 2023).&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Provider&lt;/th&gt;
&lt;th&gt;Region&lt;/th&gt;
&lt;th&gt;Cluster fees&lt;/th&gt;
&lt;th&gt;4 vCPU / 15 GB&lt;/th&gt;
&lt;th&gt;Note&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GKE&lt;/td&gt;
&lt;td&gt;Frankfurt&lt;/td&gt;
&lt;td&gt;$73/month&lt;/td&gt;
&lt;td&gt;$126&lt;/td&gt;
&lt;td&gt;$74.40/month of free credit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AWS&lt;/td&gt;
&lt;td&gt;Frankfurt&lt;/td&gt;
&lt;td&gt;$73/month&lt;/td&gt;
&lt;td&gt;$112&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AKS&lt;/td&gt;
&lt;td&gt;West Germany&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;td&gt;$112&lt;/td&gt;
&lt;td&gt;$73/month for pro cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linode&lt;/td&gt;
&lt;td&gt;Frankfurt&lt;/td&gt;
&lt;td&gt;$36&lt;/td&gt;
&lt;td&gt;$72&lt;/td&gt;
&lt;td&gt;Only 8 GB of RAM&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scaleway&lt;/td&gt;
&lt;td&gt;Paris&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;td&gt;$73&lt;/td&gt;
&lt;td&gt;98% API server availability&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OVH&lt;/td&gt;
&lt;td&gt;Frankfurt&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;td&gt;$55&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Exoscale&lt;/td&gt;
&lt;td&gt;Frankfurt&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;td&gt;$136&lt;/td&gt;
&lt;td&gt;$40/month for pro cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DigitalOcean&lt;/td&gt;
&lt;td&gt;Frankfurt&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;td&gt;$126&lt;/td&gt;
&lt;td&gt;$40/month for pro cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;At that price, it becomes interesting to look at reasonably priced VM providers like Hetzner or Infomaniak and to install Kubernetes directly on their machines. This comes at the cost of managing and maintaining the cluster ourselves over time. While this offers a good learning opportunity, it can also be a source of frustration, as the setup is non-trivial. For this reason, the &lt;a href="https://docs.rke2.io"&gt;RKE2&lt;/a&gt; distribution of Kubernetes is a great pick: it offers a good balance between features, security and simplicity.&lt;/p&gt;

&lt;p&gt;Regarding the hardware provider, Infomaniak currently offers better prices than Hetzner and, more importantly, gives direct access to OpenStack, a well-known open-source platform for provisioning infrastructure (also offered by OVH). This makes it easier to manage resources as code (IaC) and comes with good Terraform support.&lt;/p&gt;

&lt;h2&gt;Getting started&lt;/h2&gt;

&lt;p&gt;You will need an &lt;a href="https://welcome.infomaniak.com/signup"&gt;account on Infomaniak&lt;/a&gt;, a valid credit card and &lt;a href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli"&gt;Terraform installed&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you have your Infomaniak account, go to the &lt;a href="https://www.infomaniak.com/fr/hebergement/public-cloud"&gt;Public Cloud landing page&lt;/a&gt; and click on "get started". Pick a name for your public cloud and enter your credit card details on the checkout page. You will be charged at the end of the month for the resources you used. Note that, at the time of writing, Infomaniak offers up to CHF 300.— of free credit during the first 3 months.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AVS5gC5a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pvhrzz2pc70dsc3hm0vt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AVS5gC5a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pvhrzz2pc70dsc3hm0vt.png" alt="Creating the cloud" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should then see an empty list of OpenStack projects (also called tenants in OpenStack jargon). Click "create a project" at the top right, choose a project name and a password for the auto-generated user, and click "create". After a few seconds, you will be redirected to the list of projects and see your newly created one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Zyc9nUj2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o1ihvtvfzxg8fn8f7kwu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Zyc9nUj2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o1ihvtvfzxg8fn8f7kwu.png" alt="List of OpenStack projects" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can then click on the project name, and you will be directed to &lt;a href="https://api.pub1.infomaniak.cloud/horizon/auth/login/"&gt;Horizon&lt;/a&gt;, OpenStack's web interface, which is useful for following the progress of the cluster creation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--couEBL9Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kudwkrzkjyle7earqpog.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--couEBL9Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kudwkrzkjyle7earqpog.png" alt="Horizon welcome" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Terraform OpenStack RKE2 module&lt;/h2&gt;

&lt;p&gt;Now comes the interesting part. You will use &lt;a href="https://registry.terraform.io/modules/zifeo/rke2/openstack/latest"&gt;the Terraform OpenStack RKE2 module&lt;/a&gt;, which takes care of the heavy lifting and deploys an RKE2 cluster on OpenStack for you. Create a new directory and open a &lt;code&gt;main.tf&lt;/code&gt; file with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"project"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"username"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"password"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# authenticate with OpenStack&lt;/span&gt;
&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"openstack"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;tenant_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project&lt;/span&gt;
  &lt;span class="nx"&gt;user_name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;username&lt;/span&gt;
  &lt;span class="nx"&gt;password&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;password&lt;/span&gt;
  &lt;span class="nx"&gt;auth_url&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://api.pub1.infomaniak.cloud/identity"&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"dc3-a"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# dependency management&lt;/span&gt;
&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;gt;= 0.14.0"&lt;/span&gt;

  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;openstack&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-provider-openstack/openstack"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;gt;= 1.49.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This declares 3 variables used to authenticate against the OpenStack API, configures the OpenStack provider, and ensures the latter uses the expected version. If you would like to avoid entering your credentials each time, you can create a &lt;code&gt;terraform.tfvars&lt;/code&gt; file (make sure to add it to your &lt;code&gt;.gitignore&lt;/code&gt; and never share it accidentally) or check the alternative &lt;a href="https://registry.terraform.io/providers/terraform-provider-openstack/openstack/latest/docs#configuration-reference"&gt;authentication methods&lt;/a&gt; of the provider (e.g. &lt;code&gt;cloud&lt;/code&gt; with &lt;code&gt;clouds.yaml&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;project=PCP-XXXXXXXX&lt;/span&gt;
&lt;span class="s"&gt;username=PCU-XXXXXXXX&lt;/span&gt;
&lt;span class="s"&gt;password=XXXXXXXX&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
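&lt;p&gt;As a sketch of the &lt;code&gt;clouds.yaml&lt;/code&gt; alternative mentioned above: the cloud entry name (&lt;code&gt;infomaniak&lt;/code&gt; here) is arbitrary, the placeholder values are assumptions, and the exact layout should be checked against the provider documentation:&lt;/p&gt;

```yaml
# ~/.config/openstack/clouds.yaml (or ./clouds.yaml next to main.tf)
clouds:
  infomaniak:                  # arbitrary entry name
    auth:
      auth_url: https://api.pub1.infomaniak.cloud/identity
      project_name: PCP-XXXXXXXX
      username: PCU-XXXXXXXX
      password: XXXXXXXX
    region_name: dc3-a
```

&lt;p&gt;The provider block then only needs &lt;code&gt;cloud = "infomaniak"&lt;/code&gt; instead of the three variables.&lt;/p&gt;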


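&lt;p&gt;Since &lt;code&gt;terraform.tfvars&lt;/code&gt; holds credentials, and the module will later write a kubeconfig next to it, a short &lt;code&gt;.gitignore&lt;/code&gt; keeps both out of version control (the &lt;code&gt;*.rke2.yaml&lt;/code&gt; pattern assumes the kubeconfig naming shown later in this article):&lt;/p&gt;

```
# .gitignore
terraform.tfvars
*.rke2.yaml
.terraform/
terraform.tfstate*
```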

&lt;p&gt;You will now declare a cluster with 1 server node and 1 agent node in &lt;code&gt;main.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"rke2"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"zifeo/rke2/openstack"&lt;/span&gt;
  &lt;span class="c1"&gt;# fixing the version is recommended (follows semantic versioning)&lt;/span&gt;
  &lt;span class="c1"&gt;# version = "2.0.5"&lt;/span&gt;

  &lt;span class="c1"&gt;# must be true for single server cluster or&lt;/span&gt;
  &lt;span class="c1"&gt;# only on the first run for high-availability cluster&lt;/span&gt;
  &lt;span class="nx"&gt;bootstrap&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"single-server"&lt;/span&gt;

  &lt;span class="c1"&gt;# path to your public key, in order to connect to the instances&lt;/span&gt;
  &lt;span class="nx"&gt;ssh_authorized_keys&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"~/.ssh/id_rsa.pub"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="c1"&gt;# name of the public OpenStack network to use for the server IP&lt;/span&gt;
  &lt;span class="nx"&gt;floating_pool&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ext-floating1"&lt;/span&gt;

  &lt;span class="c1"&gt;# allow access from any IP&lt;/span&gt;
  &lt;span class="c1"&gt;# it should ideally be restricted to a secure bastion&lt;/span&gt;
  &lt;span class="nx"&gt;rules_ssh_cidr&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;
  &lt;span class="nx"&gt;rules_k8s_cidr&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;

  &lt;span class="c1"&gt;# servers hosts Kubernetes control plane + etcd&lt;/span&gt;
  &lt;span class="c1"&gt;# and are the only ones exposed to the internet&lt;/span&gt;
  &lt;span class="nx"&gt;servers&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"server"&lt;/span&gt;

    &lt;span class="c1"&gt;# 2 cpu and 4Go of RAM is the minimum recommended per server&lt;/span&gt;
    &lt;span class="nx"&gt;flavor_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"a2-ram4-disk0"&lt;/span&gt;
    &lt;span class="nx"&gt;image_name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Ubuntu 22.04 LTS Jammy Jellyfish"&lt;/span&gt;
    &lt;span class="nx"&gt;system_user&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu"&lt;/span&gt;

    &lt;span class="c1"&gt;# size of the operating system disk&lt;/span&gt;
    &lt;span class="nx"&gt;boot_volume_size&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;

    &lt;span class="c1"&gt;# size of the volume for the RKE2 data (persisted on single-server)&lt;/span&gt;
    &lt;span class="nx"&gt;rke2_volume_size&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;
    &lt;span class="nx"&gt;rke2_version&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"v1.26.4+rke2r1"&lt;/span&gt;
  &lt;span class="p"&gt;}]&lt;/span&gt;

  &lt;span class="c1"&gt;# agents runs your workloads&lt;/span&gt;
  &lt;span class="c1"&gt;# and are not exposed to the internet (doable with a load balancer)&lt;/span&gt;
  &lt;span class="nx"&gt;agents&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"agent-a"&lt;/span&gt;
      &lt;span class="nx"&gt;nodes_count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;

      &lt;span class="c1"&gt;# a2-ram4-disk0 is the minimal meaningful config for agents&lt;/span&gt;
      &lt;span class="c1"&gt;# you can also directly go for a4-ram16-disk0 as in the intro&lt;/span&gt;
      &lt;span class="nx"&gt;flavor_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"a2-ram4-disk0"&lt;/span&gt;
      &lt;span class="nx"&gt;image_name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Ubuntu 22.04 LTS Jammy Jellyfish"&lt;/span&gt;
      &lt;span class="nx"&gt;system_user&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu"&lt;/span&gt;

      &lt;span class="nx"&gt;boot_volume_size&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;

      &lt;span class="nx"&gt;rke2_volume_size&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;
      &lt;span class="nx"&gt;rke2_version&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"v1.26.4+rke2r1"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="c1"&gt;# enable automatically agent removal of the cluster&lt;/span&gt;
  &lt;span class="nx"&gt;ff_autoremove_agent&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="c1"&gt;# output the kubeconfig to the current directory&lt;/span&gt;
  &lt;span class="nx"&gt;ff_write_kubeconfig&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;identity_endpoint&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://api.pub1.infomaniak.cloud/identity"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
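&lt;p&gt;For a later high-availability variant, the comments above suggest running the first apply with &lt;code&gt;bootstrap = true&lt;/code&gt; and then switching it off; here is a sketch of the fields that would change, assuming the module accepts several entries in &lt;code&gt;servers&lt;/code&gt; (check the module documentation before relying on this):&lt;/p&gt;

```hcl
# after the first successful apply, set bootstrap back to false
bootstrap = false
name      = "ha-cluster"

# etcd needs an odd number of members: 3 servers tolerate 1 failure
servers = [
  for suffix in ["a", "b", "c"] : {
    name             = "server-${suffix}"
    flavor_name      = "a2-ram4-disk0"
    image_name       = "Ubuntu 22.04 LTS Jammy Jellyfish"
    system_user      = "ubuntu"
    boot_volume_size = 4
    rke2_volume_size = 8
    rke2_version     = "v1.26.4+rke2r1"
  }
]
```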



&lt;p&gt;And you are ready to go! Run the following commands and wait a few minutes for the cluster to be created (a few more minutes are needed for all the pods to reach the running state once the control plane is ready):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
&lt;span class="c"&gt;# ...&lt;/span&gt;
&lt;span class="c"&gt;# Terraform has been successfully initialized!&lt;/span&gt;
&lt;span class="c"&gt;# ...&lt;/span&gt;

terraform apply
&lt;span class="c"&gt;# ...&lt;/span&gt;
&lt;span class="c"&gt;# Plan: 71 to add, 0 to change, 0 to destroy.&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# Do you want to perform these actions?&lt;/span&gt;
&lt;span class="c"&gt;#   Terraform will perform the actions described above.&lt;/span&gt;
&lt;span class="c"&gt;#   Only 'yes' will be accepted to approve.&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;#   Enter a value: yes&lt;/span&gt;
&lt;span class="c"&gt;# ...&lt;/span&gt;
&lt;span class="c"&gt;# Apply complete! Resources: 71 added, 0 changed, 0 destroyed.&lt;/span&gt;

&lt;span class="nb"&gt;cat &lt;/span&gt;single-server.rke2.yaml
&lt;span class="c"&gt;# apiVersion: v1&lt;/span&gt;
&lt;span class="c"&gt;# kind: config&lt;/span&gt;
&lt;span class="c"&gt;# ...&lt;/span&gt;

&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;single-server.rke2.yaml

kubectl get pods &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt;
&lt;span class="c"&gt;# NAMESPACE     NAME                                                    READY   STATUS              RESTARTS   AGE&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-openstack-cinder-csi-2rp9z                 0/1     Pending             0          2s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-openstack-cloud-controller-manager-4wdzt   0/1     ContainerCreating   0          2s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-rke2-cilium-s5skd                          0/1     ContainerCreating   0          2s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-rke2-coredns-kc4ld                         0/1     ContainerCreating   0          2s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-rke2-metrics-server-ttt84                  0/1     Pending             0          2s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-rke2-snapshot-controller-crd-2sdzt         0/1     Pending             0          2s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-rke2-snapshot-controller-xqzsk             0/1     Pending             0          2s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-rke2-snapshot-validation-webhook-5w9lw     0/1     Pending             0          2s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-velero-4zhq7                               0/1     Pending             0          2s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   kube-apiserver-single-server-server-1                   1/1     Running             0          12s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   kube-controller-manager-single-server-server-1          1/1     Running             0          16s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   kube-scheduler-single-server-server-1                   1/1     Running             0          15s&lt;/span&gt;

kubectl get pods &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt;
&lt;span class="c"&gt;# NAMESPACE     NAME                                                    READY   STATUS      RESTARTS   AGE&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   cilium-ngcrp                                            1/1     Running     0          2m39s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   cilium-operator-b947b9d8d-zc92l                         1/1     Running     0          2m39s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   cilium-qt7vr                                            1/1     Running     0          2m39s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   etcd-single-server-server-1                             1/1     Running     0          2m42s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-openstack-cinder-csi-2rp9z                 0/1     Completed   0          2m52s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-openstack-cloud-controller-manager-4wdzt   0/1     Completed   0          2m52s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-rke2-cilium-s5skd                          0/1     Completed   0          2m52s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-rke2-coredns-kc4ld                         0/1     Completed   0          2m52s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-rke2-metrics-server-ttt84                  0/1     Completed   0          2m52s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-rke2-snapshot-controller-crd-2sdzt         0/1     Completed   0          2m52s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-rke2-snapshot-controller-xqzsk             0/1     Completed   0          2m52s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-rke2-snapshot-validation-webhook-5w9lw     0/1     Completed   0          2m52s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   helm-install-velero-4zhq7                               0/1     Completed   0          2m52s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   kube-apiserver-single-server-server-1                   1/1     Running     0          3m2s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   kube-controller-manager-single-server-server-1          1/1     Running     0          3m6s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   kube-scheduler-single-server-server-1                   1/1     Running     0          3m5s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   openstack-cinder-csi-controllerplugin-cf5f9869d-xbmtv   6/6     Running     0          2m39s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   openstack-cinder-csi-nodeplugin-zghjj                   3/3     Running     0          98s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   openstack-cloud-controller-manager-bffxb                1/1     Running     0          2m19s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   rke2-coredns-rke2-coredns-autoscaler-597fb897d7-p8k7j   1/1     Running     0          2m41s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   rke2-coredns-rke2-coredns-f6f4ff467-6lrgl               1/1     Running     0          83s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   rke2-coredns-rke2-coredns-f6f4ff467-shlrv               1/1     Running     0          2m41s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   rke2-metrics-server-67d6554d69-8vhrt                    1/1     Running     0          78s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   rke2-snapshot-controller-6b9c678c77-2txzn               1/1     Running     0          77s&lt;/span&gt;
&lt;span class="c"&gt;# kube-system   rke2-snapshot-validation-webhook-6c9d7f868c-qnqdq       1/1     Running     0          77s&lt;/span&gt;
&lt;span class="c"&gt;# velero        restic-g8mvb                                            1/1     Running     0          30s&lt;/span&gt;
&lt;span class="c"&gt;# velero        velero-5b67659997-67zgd                                 1/1     Running     0          30s&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
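&lt;p&gt;As a quick smoke test that scheduling works, you could apply a minimal Deployment (all names here are arbitrary examples):&lt;/p&gt;

```yaml
# smoke-test.yaml, apply with: kubectl apply -f smoke-test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-nginx
  template:
    metadata:
      labels:
        app: hello-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
```

&lt;p&gt;&lt;code&gt;kubectl get pods&lt;/code&gt; should then show the pod reaching &lt;code&gt;Running&lt;/code&gt;, and &lt;code&gt;kubectl delete -f smoke-test.yaml&lt;/code&gt; cleans it up.&lt;/p&gt;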



&lt;p&gt;You can also explore your cluster using Horizon (to see the instances and the network components).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1XRY4uD1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/axvbvvfiiu9gc4fl7jie.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1XRY4uD1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/axvbvvfiiu9gc4fl7jie.png" alt="List of instances" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations, you have just installed your first Kubernetes cluster with RKE2! You can find more information about the module and its features (etcd backups, upgrades, volume snapshots, etc.) in the &lt;a href="https://github.com/zifeo/terraform-openstack-rke2"&gt;repository&lt;/a&gt;. Give it a star ⭐️ if you like it, or raise an issue there if you find a bug 🐛.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Originally posted on &lt;a href="https://zifeo.com/articles/230617-low-cost-k8s"&gt;zifeo.com&lt;/a&gt;, find more about the architecture and the cost projections there.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
