<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Arman Shafiei</title>
    <description>The latest articles on Forem by Arman Shafiei (@arman-shafiei).</description>
    <link>https://forem.com/arman-shafiei</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1399142%2F8fc4830a-18eb-41cc-b0d2-7cfc5800ed0e.png</url>
      <title>Forem: Arman Shafiei</title>
      <link>https://forem.com/arman-shafiei</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/arman-shafiei"/>
    <language>en</language>
    <item>
      <title>Deploy Dragonfly Replication</title>
      <dc:creator>Arman Shafiei</dc:creator>
      <pubDate>Sat, 16 Aug 2025 10:27:25 +0000</pubDate>
      <link>https://forem.com/arman-shafiei/deploy-dragonfly-replication-i6i</link>
      <guid>https://forem.com/arman-shafiei/deploy-dragonfly-replication-i6i</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the fast-paced realm of modern data management, where speed, scalability, and reliability are non-negotiable, Dragonfly emerges as a compelling open-source alternative to traditional in-memory stores like Redis. Built for efficiency and compatibility, Dragonfly delivers exceptional performance while consuming fewer resources, making it an ideal choice for high-throughput applications. However, as your system grows, safeguarding against data loss and downtime requires more than just raw speed—it demands robust redundancy.&lt;/p&gt;

&lt;p&gt;In this post, we will deploy Dragonfly in replication mode, utilizing Dragonfly itself and Redis Sentinel as the high-availability solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdl8i0m7o1m9rwlhdiwe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdl8i0m7o1m9rwlhdiwe.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;For this scenario, we have 3 Linux machines with the following specifications:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Instance&lt;/th&gt;
&lt;th&gt;OS&lt;/th&gt;
&lt;th&gt;IP&lt;/th&gt;
&lt;th&gt;ROLE&lt;/th&gt;
&lt;th&gt;Services&lt;/th&gt;
&lt;th&gt;Version&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;node-1&lt;/td&gt;
&lt;td&gt;Debian 12&lt;/td&gt;
&lt;td&gt;192.168.1.10&lt;/td&gt;
&lt;td&gt;Master&lt;/td&gt;
&lt;td&gt;Dragonfly,Sentinel&lt;/td&gt;
&lt;td&gt;v1.33.1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;node-2&lt;/td&gt;
&lt;td&gt;Debian 12&lt;/td&gt;
&lt;td&gt;192.168.1.11&lt;/td&gt;
&lt;td&gt;Slave&lt;/td&gt;
&lt;td&gt;Dragonfly,Sentinel&lt;/td&gt;
&lt;td&gt;v1.33.1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;node-3&lt;/td&gt;
&lt;td&gt;Debian 12&lt;/td&gt;
&lt;td&gt;192.168.1.12&lt;/td&gt;
&lt;td&gt;Slave&lt;/td&gt;
&lt;td&gt;Dragonfly,Sentinel&lt;/td&gt;
&lt;td&gt;v1.33.1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;We also need to open ports &lt;strong&gt;6379&lt;/strong&gt; (Dragonfly) and &lt;strong&gt;26379&lt;/strong&gt; (Sentinel) on each instance.&lt;/p&gt;

&lt;p&gt;We will need to install Dragonfly and Redis Sentinel on these instances.&lt;/p&gt;
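&lt;p&gt;Before going further, it's worth verifying that these ports are actually reachable between the nodes. The following is a short stdlib-only Python sketch (the IPs below are this scenario's addresses) that checks TCP connectivity:&lt;/p&gt;

```python
import socket

def port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

# Check Dragonfly (6379) and Sentinel (26379) on every node in this scenario.
nodes = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]
for host in nodes:
    for port in (6379, 26379):
        print(host, port, "open" if port_open(host, port) else "CLOSED")
```

&lt;p&gt;Run it from each node (or any machine on the same network); a &lt;code&gt;CLOSED&lt;/code&gt; result usually means a firewall rule is missing or the service isn't listening yet.&lt;/p&gt;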

&lt;h2&gt;
  
  
  Install Dragonfly
&lt;/h2&gt;

&lt;p&gt;First things first, we need to install Dragonfly on all our instances.&lt;br&gt;
Here we use the binary method: visit &lt;a href="https://www.dragonflydb.io/docs/getting-started/binary" rel="noopener noreferrer"&gt;Download Dragonfly&lt;/a&gt; and choose the package for your OS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8q2zv2zpvbtr43fjx8fw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8q2zv2zpvbtr43fjx8fw.png" alt=" " width="730" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we choose &lt;code&gt;Download Latest (AMD64 Debian)&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Install the executable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@node-1:~# apt-get &lt;span class="nb"&gt;install&lt;/span&gt; ./dragonfly_amd64.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And now verify the installation.&lt;br&gt;
node 1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@node-1:~# dragonfly &lt;span class="nt"&gt;--version&lt;/span&gt;

dragonfly v1.33.1-bba344ba8f49c4e2e1e411d7e1a5ba30119c4f80
build &lt;span class="nb"&gt;time&lt;/span&gt;: 2025-08-03 19:28:00
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;node 2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@node-2:~# dragonfly &lt;span class="nt"&gt;--version&lt;/span&gt;

dragonfly v1.33.1-bba344ba8f49c4e2e1e411d7e1a5ba30119c4f80
build &lt;span class="nb"&gt;time&lt;/span&gt;: 2025-08-03 19:28:00
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;node 3:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@node-3:~# dragonfly &lt;span class="nt"&gt;--version&lt;/span&gt;

dragonfly v1.33.1-bba344ba8f49c4e2e1e411d7e1a5ba30119c4f80
build &lt;span class="nb"&gt;time&lt;/span&gt;: 2025-08-03 19:28:00
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Dragonfly Configuration
&lt;/h2&gt;

&lt;p&gt;The configuration is stored in &lt;code&gt;/etc/dragonfly/dragonfly.conf&lt;/code&gt;. The service passes the options in this file as command-line flags to the running instance. Edit or create the file &lt;strong&gt;dragonfly.conf&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@node-1:~# vim /etc/dragonfly/dragonfly.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And insert the following options:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--pidfile=/var/run/dragonfly/dragonfly.pid
--log_dir=/var/log/dragonfly
--dir=/var/lib/dragonfly
--max_log_size=20
--version_check=true
--port=6379
--bind=192.168.1.10
--dbfilename=dump
--logtostdout=false
--maxmemory=2gb
--maxclients=50
--requirepass=A_STRONG_PASSWORD
--tcp_keepalive=300
--cache_mode=true
--snapshot_cron=*/30 * * * *
--keys_output_limit=16192
--masterauth=A_STRONG_PASSWORD
--proactor_threads=1
--conn_io_threads=2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; The Dragonfly process runs as the &lt;strong&gt;dfly&lt;/strong&gt; user. Make sure the Dragonfly data and log directories are owned by this user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@node-1:~# &lt;span class="nb"&gt;chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; dfly:dfly /var/lib/dragonfly
root@node-1:~# &lt;span class="nb"&gt;chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; dfly:dfly /var/log/dragonfly
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now restart the service and verify its health:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@node-1:~# systemctl restart dragonfly.service
root@node-1:~# systemctl status dragonfly.service

🟢 dragonfly.service - Modern and fast key-value store
     Loaded: loaded &lt;span class="o"&gt;(&lt;/span&gt;/lib/systemd/system/dragonfly.service&lt;span class="p"&gt;;&lt;/span&gt; enabled&lt;span class="p"&gt;;&lt;/span&gt; preset: enabled&lt;span class="o"&gt;)&lt;/span&gt;
     Active: active &lt;span class="o"&gt;(&lt;/span&gt;running&lt;span class="o"&gt;)&lt;/span&gt; since Wed 2025-08-13 17:04:30 +0330&lt;span class="p"&gt;;&lt;/span&gt; 1s ago
   Main PID: 545394 &lt;span class="o"&gt;(&lt;/span&gt;dragonfly&lt;span class="o"&gt;)&lt;/span&gt;
      Tasks: 2 &lt;span class="o"&gt;(&lt;/span&gt;limit: 2255&lt;span class="o"&gt;)&lt;/span&gt;
     Memory: 55.5M
        CPU: 237ms
     CGroup: /system.slice/dragonfly.service
             └─545394 /usr/bin/dragonfly &lt;span class="nt"&gt;--flagfile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/etc/dragonfly/dragonfly.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; Do the above steps on all instances (3 nodes in our case).&lt;/p&gt;

&lt;h3&gt;
  
  
  Dragonfly Options Explanation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;pidfile:&lt;/strong&gt; The location of the service pid file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;log_dir:&lt;/strong&gt; Where to store the log files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;dir:&lt;/strong&gt; The directory to store the DB data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;max_log_size:&lt;/strong&gt; The maximum size in MB of each log file (i.e., ERROR, WARNING, and INFO).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;version_check:&lt;/strong&gt; Whether or not to check for the new version of the service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;port:&lt;/strong&gt; The port service listens on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;bind:&lt;/strong&gt; Which IP address to bind to; clients must use this address to connect.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;dbfilename:&lt;/strong&gt; The DB backup file to create and store data in. Dragonfly uses the ".dfs" extension for its dump files (e.g., dump-0001.dfs).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;logtostdout:&lt;/strong&gt; if true, logs to standard output.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;maxmemory:&lt;/strong&gt; The maximum amount of memory Dragonfly is allowed to consume.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;maxclients:&lt;/strong&gt; Maximum number of client connections that are allowed to be established.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;requirepass:&lt;/strong&gt; The root Dragonfly password.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;tcp_keepalive:&lt;/strong&gt; How many seconds to let an idle connection stay open.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;cache_mode:&lt;/strong&gt; If true, the backend behaves like a cache, evicting entries when getting close to the maxmemory limit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;snapshot_cron:&lt;/strong&gt; Cron expression to save DB to the disk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;keys_output_limit:&lt;/strong&gt; Maximum number of keys output by the &lt;code&gt;KEYS&lt;/code&gt; command.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;masterauth:&lt;/strong&gt; The password to authenticate with master in replication mode.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;proactor_threads:&lt;/strong&gt; The number of I/O (proactor) threads in the pool, reserved when the service starts. For example, if this value is set to 1 on a machine with 2 CPU cores, 50% of the I/O capacity is used.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;conn_io_threads:&lt;/strong&gt; The number of threads to handle server connections.&lt;/li&gt;
&lt;/ul&gt;
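&lt;p&gt;The flag file is simply one &lt;code&gt;--key=value&lt;/code&gt; option per line. The following is a small stdlib-only Python sketch (a hypothetical helper, not part of Dragonfly) that parses such a file into a dict, which can be handy when auditing or comparing node configs:&lt;/p&gt;

```python
def parse_flagfile(text):
    """Parse Dragonfly-style flag lines ('--key=value' or bare '--key') into a dict."""
    flags = {}
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("--"):
            continue  # skip blank lines and anything that is not a flag
        key, sep, value = line[2:].partition("=")
        flags[key] = value if sep else True  # bare flags like --force_epoll become True
    return flags

sample = """\
--port=6379
--bind=192.168.1.10
--maxmemory=2gb
--force_epoll
"""
conf = parse_flagfile(sample)
print(conf["bind"])  # 192.168.1.10
```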

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; On some Linux instances (usually with older kernels), we must force Dragonfly to use epoll mode. Otherwise, the service won't start. To do that, add the following options to the config file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--force_epoll
--epoll_file_threads=2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create Replication
&lt;/h2&gt;

&lt;p&gt;Now that the instances are running, we can configure the replication. For this scenario, we set node-1 as master and the other 2 nodes as slaves.&lt;/p&gt;

&lt;p&gt;Connect to Dragonfly server on node-2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@node-2:~# redis-cli &lt;span class="nt"&gt;-h&lt;/span&gt; 192.168.1.11 &lt;span class="nt"&gt;-p&lt;/span&gt; 6379
192.168.1.11:6379&amp;gt; auth &lt;span class="k"&gt;*****&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And execute the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;192.168.1.11:6379&amp;gt; REPLICAOF 192.168.1.10 6379
192.168.1.11:6379&amp;gt; ROLE
1&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"slave"&lt;/span&gt;
2&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"192.168.1.10"&lt;/span&gt;
3&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"6379"&lt;/span&gt;
4&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"online"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also on node-3:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@node-3:~# redis-cli &lt;span class="nt"&gt;-h&lt;/span&gt; 192.168.1.12 &lt;span class="nt"&gt;-p&lt;/span&gt; 6379
192.168.1.12:6379&amp;gt; auth &lt;span class="k"&gt;*****&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And execute the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;192.168.1.12:6379&amp;gt; REPLICAOF 192.168.1.10 6379
192.168.1.12:6379&amp;gt; ROLE
1&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"slave"&lt;/span&gt;
2&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"192.168.1.10"&lt;/span&gt;
3&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"6379"&lt;/span&gt;
4&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"online"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, connect to node-1 and make sure it is acting as master:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@node-1:~# redis-cli &lt;span class="nt"&gt;-h&lt;/span&gt; 192.168.1.10 &lt;span class="nt"&gt;-p&lt;/span&gt; 6379
192.168.1.10:6379&amp;gt; auth &lt;span class="k"&gt;*****&lt;/span&gt;
192.168.1.10:6379&amp;gt; REPLICAOF NO ONE
192.168.1.10:6379&amp;gt; ROLE
1&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"master"&lt;/span&gt;
2&lt;span class="o"&gt;)&lt;/span&gt; 1&lt;span class="o"&gt;)&lt;/span&gt; 1&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"192.168.1.11"&lt;/span&gt;
      2&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"6379"&lt;/span&gt;
      3&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"online"&lt;/span&gt;
   2&lt;span class="o"&gt;)&lt;/span&gt; 1&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"192.168.1.12"&lt;/span&gt;
      2&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"6379"&lt;/span&gt;
      3&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="s2"&gt;"online"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
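&lt;p&gt;Under the hood, &lt;code&gt;redis-cli&lt;/code&gt; sends each of these commands to Dragonfly as a RESP array of bulk strings. The following is a minimal stdlib-only Python sketch of that encoding (illustrative only, not a full client):&lt;/p&gt;

```python
def encode_command(*args):
    """Encode a command as a RESP array of bulk strings (what redis-cli sends)."""
    parts = ["*%d\r\n" % len(args)]
    for arg in args:
        data = str(arg)
        parts.append("$%d\r\n%s\r\n" % (len(data), data))
    return "".join(parts).encode()

# The replication command from this post, as raw protocol bytes:
print(encode_command("REPLICAOF", "192.168.1.10", "6379"))
# b'*3\r\n$9\r\nREPLICAOF\r\n$12\r\n192.168.1.10\r\n$4\r\n6379\r\n'
```

&lt;p&gt;Because Dragonfly speaks this same wire protocol, existing Redis clients and tools work against it unchanged.&lt;/p&gt;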



&lt;h2&gt;
  
  
  Install Redis Sentinel
&lt;/h2&gt;

&lt;p&gt;Redis Sentinel is used for high availability and automatic failover in Dragonfly deployments. It monitors instances, detects failures, and automatically promotes a replica to master if the master fails.&lt;/p&gt;

&lt;p&gt;To install Redis Sentinel on Debian-based systems, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@node-1:~# apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install &lt;/span&gt;redis-sentinel
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edit the redis-sentinel config file located at &lt;code&gt;/etc/redis/sentinel.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@node-1:~# vim /etc/redis/sentinel.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And check or add the following configs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;port 26379
sentinel monitor mymaster 192.168.1.10 6379 2
sentinel down-after-milliseconds mymaster 3000
sentinel failover-timeout mymaster 8000
sentinel parallel-syncs mymaster 1
sentinel auth-pass mymaster REDIS_ROOT_PASSWORD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; The &lt;code&gt;REDIS_ROOT_PASSWORD&lt;/code&gt; is the password set in Dragonfly configuration by the &lt;code&gt;requirepass&lt;/code&gt; flag.&lt;/p&gt;
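&lt;p&gt;Since the same stanza is repeated on all three nodes with only local details varying, it can be rendered from one place. Below is a small Python sketch (a hypothetical helper; the values are the ones used in this post):&lt;/p&gt;

```python
def render_sentinel_conf(master_name, master_ip, quorum=2,
                         down_ms=3000, failover_ms=8000, auth_pass="CHANGE_ME"):
    """Render the sentinel.conf lines used in this post for one monitored master."""
    lines = [
        "port 26379",
        "sentinel monitor %s %s 6379 %d" % (master_name, master_ip, quorum),
        "sentinel down-after-milliseconds %s %d" % (master_name, down_ms),
        "sentinel failover-timeout %s %d" % (master_name, failover_ms),
        "sentinel parallel-syncs %s 1" % master_name,
        "sentinel auth-pass %s %s" % (master_name, auth_pass),
    ]
    return "\n".join(lines)

print(render_sentinel_conf("mymaster", "192.168.1.10"))
```

&lt;p&gt;The trailing &lt;code&gt;2&lt;/code&gt; in the monitor line is the quorum: at least two of the three Sentinels must agree that the master is down before a failover is started.&lt;/p&gt;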

&lt;p&gt;Restart and verify the redis-sentinel service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@node-1:~# systemctl restart redis-sentinel.service
root@node-1:~# systemctl status redis-sentinel.service

🟢 redis-sentinel.service - Advanced key-value store
     Loaded: loaded &lt;span class="o"&gt;(&lt;/span&gt;/lib/systemd/system/redis-sentinel.service&lt;span class="p"&gt;;&lt;/span&gt; enabled&lt;span class="p"&gt;;&lt;/span&gt; preset: enabled&lt;span class="o"&gt;)&lt;/span&gt;
     Active: active &lt;span class="o"&gt;(&lt;/span&gt;running&lt;span class="o"&gt;)&lt;/span&gt; since Fri 2025-08-15 22:47:28 +0330&lt;span class="p"&gt;;&lt;/span&gt; 5s ago
       Docs: http://redis.io/documentation,
             man:redis-sentinel&lt;span class="o"&gt;(&lt;/span&gt;1&lt;span class="o"&gt;)&lt;/span&gt;
   Main PID: 549960 &lt;span class="o"&gt;(&lt;/span&gt;redis-sentinel&lt;span class="o"&gt;)&lt;/span&gt;
     Status: &lt;span class="s2"&gt;"Ready to accept connections"&lt;/span&gt;
      Tasks: 5 &lt;span class="o"&gt;(&lt;/span&gt;limit: 2255&lt;span class="o"&gt;)&lt;/span&gt;
     Memory: 10.8M
        CPU: 114ms
     CGroup: /system.slice/redis-sentinel.service
             └─549960 &lt;span class="s2"&gt;"/usr/bin/redis-sentinel *:26379 [sentinel]"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; Do the above steps on all instances (3 nodes in our case).&lt;/p&gt;

&lt;p&gt;Connect to any of the Sentinels (port 26379) and verify its status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;192.168.1.12:26379&amp;gt; INFO sentinel
&lt;span class="c"&gt;# Sentinel&lt;/span&gt;
sentinel_masters:1
sentinel_tilt:0
sentinel_tilt_since_seconds:-1
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name&lt;span class="o"&gt;=&lt;/span&gt;mymaster,status&lt;span class="o"&gt;=&lt;/span&gt;ok,address&lt;span class="o"&gt;=&lt;/span&gt;192.168.1.10:6379,slaves&lt;span class="o"&gt;=&lt;/span&gt;2,sentinels&lt;span class="o"&gt;=&lt;/span&gt;3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Dragonfly Replica is now ready!&lt;/p&gt;

&lt;p&gt;Applications can now ask the Sentinels for the current master's address and connect to it; after a failover, the Sentinels will report the newly promoted master.&lt;/p&gt;
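&lt;p&gt;The &lt;code&gt;master0&lt;/code&gt; line in the &lt;code&gt;INFO sentinel&lt;/code&gt; output is easy to check from a monitoring script. Below is a stdlib-only Python sketch (a hypothetical helper) that parses it into a dict:&lt;/p&gt;

```python
def parse_master_line(line):
    """Parse a Sentinel 'master0:name=...,status=...' info line into a dict."""
    _, _, fields = line.partition(":")
    return dict(field.split("=", 1) for field in fields.split(","))

# The exact line from the INFO sentinel output above:
line = "master0:name=mymaster,status=ok,address=192.168.1.10:6379,slaves=2,sentinels=3"
info = parse_master_line(line)
print(info["status"], info["address"])  # ok 192.168.1.10:6379
```

&lt;p&gt;A health check could alert whenever &lt;code&gt;status&lt;/code&gt; is not &lt;code&gt;ok&lt;/code&gt; or &lt;code&gt;sentinels&lt;/code&gt; drops below 3.&lt;/p&gt;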

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this blog, we implemented replication for three Dragonfly instances with Redis Sentinel. We walked through configuring Dragonfly for replication, integrated Redis Sentinel for automatic failover, and tuned the configuration for acceptable performance.&lt;/p&gt;

&lt;p&gt;To get more info about the services and their respective options, please visit the following sites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.dragonflydb.io/" rel="noopener noreferrer"&gt;dragonflydb.io site&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/dragonflydb/dragonfly" rel="noopener noreferrer"&gt;dragonflydb.io Github&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://redis.io/docs/latest/operate/oss_and_stack/management/sentinel/" rel="noopener noreferrer"&gt;redis.io Site&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/redis/redis" rel="noopener noreferrer"&gt;redis.io Github&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thank you for reading this article and please leave a comment if you have any suggestions or figured out an issue with this post.&lt;/p&gt;

</description>
      <category>dragonfly</category>
      <category>redis</category>
    </item>
    <item>
      <title>Deploy Nginx Load Balancer for Rancher</title>
      <dc:creator>Arman Shafiei</dc:creator>
      <pubDate>Wed, 03 Apr 2024 22:53:40 +0000</pubDate>
      <link>https://forem.com/arman-shafiei/deploy-nginx-load-balancer-for-rancher-lgk</link>
      <guid>https://forem.com/arman-shafiei/deploy-nginx-load-balancer-for-rancher-lgk</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the dynamic landscape of container orchestration, efficient load balancing is paramount. As organizations adopt Kubernetes and Rancher rke2 for managing their workloads, the need for robust load balancing solutions becomes increasingly critical. Nginx is a versatile web server and reverse proxy that can also serve as a powerful Layer 4 load balancer.&lt;/p&gt;

&lt;p&gt;In this article, we delve into the intricacies of leveraging Nginx to distribute incoming requests across control plane nodes in your Kubernetes cluster. By harnessing Nginx’s stream module, we unlock the ability to perform Layer 4 load balancing, ensuring optimal traffic distribution and high availability for our applications and also a fixed registration address for rke2 nodes.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;We've installed Nginx on a Debian 12 machine with IP address 192.168.100.100. We also have 3 server (control plane) nodes and 3 agent (worker) nodes. In total, our machines are listed here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Control-Plane-1&lt;/strong&gt; =&amp;gt; IP: 192.168.100.11 , OS: Debian 12 , Hostname: kuber-master-1&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Control-Plane-2&lt;/strong&gt; =&amp;gt; IP: 192.168.100.12 , OS: Debian 12 , Hostname: kuber-master-2&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Control-Plane-3&lt;/strong&gt; =&amp;gt; IP: 192.168.100.13 , OS: Debian 12 , Hostname: kuber-master-3&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Worker-1&lt;/strong&gt; =&amp;gt; IP: 192.168.100.14 , OS: Debian 12 , Hostname: kuber-worker-1&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Worker-2&lt;/strong&gt; =&amp;gt; IP: 192.168.100.15 , OS: Debian 12 , Hostname: kuber-worker-2&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Worker-3&lt;/strong&gt; =&amp;gt; IP: 192.168.100.16 , OS: Debian 12 , Hostname: kuber-worker-3&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;L4 Load Balancer&lt;/strong&gt; =&amp;gt; IP: 192.168.100.100&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Install and Enable Stream module
&lt;/h3&gt;

&lt;p&gt;To use Nginx as a layer 4 load balancer, it must have the &lt;strong&gt;stream&lt;/strong&gt; module built in or be able to load it dynamically. To check whether Nginx supports the stream module, inspect its configure arguments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nginx &lt;span class="nt"&gt;-V&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output will be something like below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;configure arguments: &lt;span class="nt"&gt;--with-cc-opt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'-g -O2
-ffile-prefix-map=/build/nginx-AoTv4W/nginx-1.22.1=.
-fstack-protector-strong -Wformat -Werror=format-security
-fPIC -Wdate-time -D_FORTIFY_SOURCE=2'&lt;/span&gt;
&lt;span class="nt"&gt;--with-ld-opt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'-Wl,-z,relro -Wl,-z,now -fPIC'&lt;/span&gt; &lt;span class="nt"&gt;--prefix&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/share/nginx &lt;span class="nt"&gt;--conf-path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/etc/nginx/nginx.conf
&lt;span class="nt"&gt;--http-log-path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/log/nginx/access.log
&lt;span class="nt"&gt;--error-log-path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;stderr &lt;span class="nt"&gt;--lock-path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/lock/nginx.lock
&lt;span class="nt"&gt;--pid-path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/run/nginx.pid
&lt;span class="nt"&gt;--modules-path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/lib/nginx/modules
&lt;span class="nt"&gt;--http-client-body-temp-path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/lib/nginx/body
&lt;span class="nt"&gt;--http-fastcgi-temp-path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/lib/nginx/fastcgi
&lt;span class="nt"&gt;--http-proxy-temp-path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/lib/nginx/proxy
&lt;span class="nt"&gt;--http-scgi-temp-path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/lib/nginx/scgi
&lt;span class="nt"&gt;--http-uwsgi-temp-path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/lib/nginx/uwsgi &lt;span class="nt"&gt;--with-compat&lt;/span&gt;
&lt;span class="nt"&gt;--with-debug&lt;/span&gt; &lt;span class="nt"&gt;--with-pcre-jit&lt;/span&gt; &lt;span class="nt"&gt;--with-http_ssl_module&lt;/span&gt;
&lt;span class="nt"&gt;--with-http_stub_status_module&lt;/span&gt; &lt;span class="nt"&gt;--with-http_realip_module&lt;/span&gt;
&lt;span class="nt"&gt;--with-http_auth_request_module&lt;/span&gt; &lt;span class="nt"&gt;--with-http_v2_module&lt;/span&gt;
&lt;span class="nt"&gt;--with-http_dav_module&lt;/span&gt; &lt;span class="nt"&gt;--with-http_slice_module&lt;/span&gt;
&lt;span class="nt"&gt;--with-threads&lt;/span&gt; &lt;span class="nt"&gt;--with-http_addition_module&lt;/span&gt;
&lt;span class="nt"&gt;--with-http_flv_module&lt;/span&gt; &lt;span class="nt"&gt;--with-http_gunzip_module&lt;/span&gt;
&lt;span class="nt"&gt;--with-http_gzip_static_module&lt;/span&gt; &lt;span class="nt"&gt;--with-http_mp4_module&lt;/span&gt;
&lt;span class="nt"&gt;--with-http_random_index_module&lt;/span&gt;
&lt;span class="nt"&gt;--with-http_secure_link_module&lt;/span&gt; &lt;span class="nt"&gt;--with-http_sub_module&lt;/span&gt;
&lt;span class="nt"&gt;--with-mail_ssl_module&lt;/span&gt; &lt;span class="nt"&gt;--with-stream_ssl_module&lt;/span&gt;
&lt;span class="nt"&gt;--with-stream_ssl_preread_module&lt;/span&gt; &lt;span class="nt"&gt;--with-stream_realip_module&lt;/span&gt; &lt;span class="nt"&gt;--with-http_geoip_module&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dynamic
&lt;span class="nt"&gt;--with-http_image_filter_module&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dynamic
&lt;span class="nt"&gt;--with-http_perl_module&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dynamic
&lt;span class="nt"&gt;--with-http_xslt_module&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dynamic &lt;span class="nt"&gt;--with-mail&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dynamic
&lt;span class="nt"&gt;--with-stream&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dynamic &lt;span class="nt"&gt;--with-stream_geoip_module&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dynamic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check for the option &lt;code&gt;--with-stream=dynamic&lt;/code&gt;. This argument is necessary as it allows us to use the stream module in Nginx.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: If you can't find the &lt;code&gt;--with-stream=dynamic&lt;/code&gt; option, you'll have to recompile and install Nginx with this option.&lt;/p&gt;
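&lt;p&gt;Checking for stream support can be scripted. Below is a small Python sketch (a hypothetical helper) that classifies the support level from the &lt;code&gt;nginx -V&lt;/code&gt; configure arguments:&lt;/p&gt;

```python
def stream_support(configure_args):
    """Classify stream-module support from 'nginx -V' configure arguments."""
    tokens = configure_args.split()
    if "--with-stream=dynamic" in tokens:
        return "dynamic"    # loadable module; needs load_module in nginx.conf
    if "--with-stream" in tokens:
        return "built-in"   # compiled in; no load_module line needed
    return "missing"        # recompile Nginx with stream support

print(stream_support("--with-compat --with-stream=dynamic"))  # dynamic
```

&lt;p&gt;Note the exact-token match: flags like &lt;code&gt;--with-stream_ssl_module&lt;/code&gt; must not be mistaken for the core stream module itself.&lt;/p&gt;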

&lt;p&gt;Now that we know Nginx supports the stream module, install the module package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; libnginx-mod-stream
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the module is installed, we should include it in the Nginx config file. Go to &lt;code&gt;/etc/nginx/nginx.conf&lt;/code&gt; and include it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;user nginx&lt;span class="p"&gt;;&lt;/span&gt;
worker_processes  auto&lt;span class="p"&gt;;&lt;/span&gt;

load_module modules/ngx_stream_module.so&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c"&gt;### Other lines are emitted&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The modules are usually located in &lt;code&gt;/usr/lib/nginx/modules&lt;/code&gt;. To verify, run &lt;code&gt;nginx -V&lt;/code&gt; and check the &lt;code&gt;--modules-path&lt;/code&gt; option in the output, which indicates the modules path.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Configure Stream in Nginx
&lt;/h3&gt;

&lt;p&gt;Now that the stream module has been installed and imported, it's time to use it.&lt;br&gt;
At the end of the Nginx main config file in &lt;code&gt;/etc/nginx/nginx.conf&lt;/code&gt; add the following lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stream &lt;span class="o"&gt;{&lt;/span&gt;
        log_format basic &lt;span class="s1"&gt;'$remote_addr [$time_local] '&lt;/span&gt;
                     &lt;span class="s1"&gt;'$protocol $status $bytes_sent $bytes_received '&lt;/span&gt;
                     &lt;span class="s1"&gt;'$session_time'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        include /etc/nginx/conf.d/&lt;span class="k"&gt;*&lt;/span&gt;.conf&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we've specified the log format for the stream module. We've also specified that files in &lt;code&gt;/etc/nginx/conf.d&lt;/code&gt; ending with &lt;strong&gt;.conf&lt;/strong&gt; are handled at layer 4, not layer 7.&lt;/p&gt;

&lt;p&gt;As the Nginx instance is supposed to be used for Rancher and Kubernetes, it should have two upstreams, one for rke2 and one for the kube-api server.&lt;/p&gt;

&lt;p&gt;First of all, we add the configuration for Rancher. Create &lt;code&gt;/etc/nginx/conf.d/rke.conf&lt;/code&gt; and add the following lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;upstream rancher_nodes &lt;span class="o"&gt;{&lt;/span&gt;
        server 192.168.100.11:9345 &lt;span class="nv"&gt;fail_timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1s &lt;span class="nv"&gt;max_fails&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2&lt;span class="p"&gt;;&lt;/span&gt;
        server 192.168.100.12:9345 &lt;span class="nv"&gt;fail_timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1s &lt;span class="nv"&gt;max_fails&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2&lt;span class="p"&gt;;&lt;/span&gt;
        server 192.168.100.13:9345 &lt;span class="nv"&gt;fail_timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1s &lt;span class="nv"&gt;max_fails&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

server &lt;span class="o"&gt;{&lt;/span&gt;
    listen 9345&lt;span class="p"&gt;;&lt;/span&gt;

    access_log  /var/log/nginx/domains/rke/access.log basic&lt;span class="p"&gt;;&lt;/span&gt;

    proxy_pass rancher_nodes&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The rke2 supervisor service listens on port 9345, and here Nginx proxies connections to that port on the control plane nodes.&lt;/p&gt;
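The server block above relies on Nginx's default stream timeouts. If you want failed control plane nodes to be skipped more quickly, the stream proxy timeouts can be tuned; the values below are illustrative assumptions, not part of the original setup:

```nginx
server {
    listen 9345;

    access_log /var/log/nginx/domains/rke/access.log basic;

    # Give up on an unresponsive upstream after 2s and try the next
    # server in the upstream group (the default is 60s).
    proxy_connect_timeout 2s;

    # Close idle client/upstream sessions after 10 minutes (the default).
    proxy_timeout 10m;

    proxy_pass rancher_nodes;
}
```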

&lt;p&gt;The next part is Kubernetes. Create &lt;code&gt;/etc/nginx/conf.d/k8s.conf&lt;/code&gt; and add the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;upstream kubernetes_nodes &lt;span class="o"&gt;{&lt;/span&gt;
        server 192.168.100.11:6443 &lt;span class="nv"&gt;fail_timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1s &lt;span class="nv"&gt;max_fails&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2&lt;span class="p"&gt;;&lt;/span&gt;
        server 192.168.100.12:6443 &lt;span class="nv"&gt;fail_timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1s &lt;span class="nv"&gt;max_fails&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2&lt;span class="p"&gt;;&lt;/span&gt;
        server 192.168.100.13:6443 &lt;span class="nv"&gt;fail_timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1s &lt;span class="nv"&gt;max_fails&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

server &lt;span class="o"&gt;{&lt;/span&gt;
    listen 6443&lt;span class="p"&gt;;&lt;/span&gt;

    access_log  /var/log/nginx/domains/k8s/access.log basic&lt;span class="p"&gt;;&lt;/span&gt;

    proxy_pass kubernetes_nodes&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The kube-apiserver listens on port 6443, so we reverse proxy that port to the control plane nodes.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Run Nginx
&lt;/h3&gt;

&lt;p&gt;Test the Nginx configuration. Note that the &lt;code&gt;access_log&lt;/code&gt; directories referenced above (&lt;code&gt;/var/log/nginx/domains/rke&lt;/code&gt; and &lt;code&gt;/var/log/nginx/domains/k8s&lt;/code&gt;) must exist, or the test will fail:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nginx &lt;span class="nt"&gt;-t&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything is OK, restart the Nginx service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl restart nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now if you check the open ports, 6443 and 9345 should be present.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ss &lt;span class="nt"&gt;-ntlp&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
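As a sanity check, the ss output can be filtered down to the two proxied ports. The snippet below runs the filter over a captured sample of ss -ntl output (illustrative only; your real output will differ), so the pipeline itself is easy to verify:

```shell
# A captured sample of `ss -ntl` output (illustrative; real output differs):
sample='LISTEN 0 511 0.0.0.0:6443 0.0.0.0:*
LISTEN 0 511 0.0.0.0:9345 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*'

# Keep only sockets bound to the two load-balanced ports.
# On a live machine you would run: ss -ntl | awk '$4 ~ /:(6443|9345)$/ {print $4}'
printf '%s\n' "$sample" | awk '$4 ~ /:(6443|9345)$/ {print $4}'
```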





&lt;p&gt;Congratulations! From now on, Nginx should listen on these ports and load balance your requests to the control plane nodes.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this article we configured Nginx to operate as a layer 4 load balancer with the help of the stream module.&lt;br&gt;
We also covered how to use Nginx as a fixed registration address for Rancher rke2 and Kubernetes, as well as load balancing requests to the Kubernetes API server.&lt;br&gt;
For more information about Nginx and integrating it with Rancher rke2, please refer to the official Nginx and Rancher documentation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://nginx.org/en/docs/stream/ngx_stream_core_module.html" rel="noopener noreferrer"&gt;https://nginx.org/en/docs/stream/ngx_stream_core_module.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.rke2.io/" rel="noopener noreferrer"&gt;https://docs.rke2.io/&lt;/a&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You could also visit my other article about deploying Rancher rke2 with Cilium and Metallb if you're interested:&lt;br&gt;
&lt;a href="https://dev.to/arman-shafiei/install-rke2-with-cilium-and-metallb-48a4"&gt;https://dev.to/arman-shafiei/install-rke2-with-cilium-and-metallb-48a4&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Thank you for reading this article, and please leave a comment if you have any suggestions or notice an issue with this post.&lt;/p&gt;

</description>
      <category>rancher</category>
      <category>loadbalancer</category>
      <category>nginx</category>
    </item>
    <item>
      <title>Install RKE2 with Cilium and Metallb</title>
      <dc:creator>Arman Shafiei</dc:creator>
      <pubDate>Wed, 03 Apr 2024 15:55:46 +0000</pubDate>
      <link>https://forem.com/arman-shafiei/install-rke2-with-cilium-and-metallb-48a4</link>
      <guid>https://forem.com/arman-shafiei/install-rke2-with-cilium-and-metallb-48a4</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In today’s rapidly evolving technological landscape, container orchestration has become a critical component for managing complex applications. Kubernetes is an open-source platform that revolutionizes how we deploy, scale, and manage containerized workloads. Its advantages are manifold: it provides seamless scalability, high availability, and declarative configuration. With a thriving ecosystem and a robust community, Kubernetes empowers developers to focus on innovation rather than infrastructure intricacies.&lt;/p&gt;

&lt;p&gt;But what about simplicity and security? That’s where Rancher RKE2 steps in. This lightweight Kubernetes distribution offers a streamlined experience without compromising on safety. With a single binary installation, RKE2 simplifies setup and maintenance. Its built-in high availability, enhanced security features, and operator-friendly design make it an attractive choice for modern deployments. Whether you’re a seasoned DevOps engineer or a curious developer, both Kubernetes and RKE2 provide the tools needed to thrive in this dynamic era of software development.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;First things first, we need to prepare our machines so the rke2 cluster works properly.&lt;br&gt;
This covers the operating system, networking, and firewall settings that will be applied to all nodes.&lt;/p&gt;

&lt;p&gt;Also, we will be deploying our cluster on 6 Linux servers with the following specifications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Control-Plane-1&lt;/strong&gt; =&amp;gt; IP: 192.168.100.11 , OS: Debian 12 , Hostname: kuber-master-1&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Control-Plane-2&lt;/strong&gt; =&amp;gt; IP: 192.168.100.12 , OS: Debian 12 , Hostname: kuber-master-2&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Control-Plane-3&lt;/strong&gt; =&amp;gt; IP: 192.168.100.13 , OS: Debian 12 , Hostname: kuber-master-3&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Worker-1&lt;/strong&gt; =&amp;gt; IP: 192.168.100.14 , OS: Debian 12 , Hostname: kuber-worker-1&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Worker-2&lt;/strong&gt; =&amp;gt; IP: 192.168.100.15 , OS: Debian 12 , Hostname: kuber-worker-2&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Worker-3&lt;/strong&gt; =&amp;gt; IP: 192.168.100.16 , OS: Debian 12 , Hostname: kuber-worker-3&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;L4 Load Balancer&lt;/strong&gt; =&amp;gt; IP: 192.168.100.100&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: We've assumed that there is an external load balancer, which could be a cloud load balancer or an on-premise node. The load balancer is also accessible with the following domains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;rancher.arman-projects.com&lt;/li&gt;
&lt;li&gt;kubernetes.arman-projects.com&lt;/li&gt;
&lt;li&gt;rke2.arman-projects.com&lt;/li&gt;
&lt;li&gt;k8s.arman-projects.com

&lt;/li&gt;
&lt;/ul&gt;
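If these names aren't resolvable in DNS yet, one quick option is to point them at the load balancer in each machine's hosts file; a minimal sketch using the address above:

```
# /etc/hosts (sketch)
192.168.100.100 rancher.arman-projects.com kubernetes.arman-projects.com rke2.arman-projects.com k8s.arman-projects.com
```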

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: In this article, we use the term server node to refer to a control plane node and agent node to refer to a worker node, and we use each pair of terms interchangeably.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Operating System Pre-requisites
&lt;/h3&gt;

&lt;p&gt;The most basic first step is to update the servers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get upgrade &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, reboot the server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h3&gt;
  
  
  Networking Pre-requisites
&lt;/h3&gt;

&lt;p&gt;First of all, we must enable two kernel modules.&lt;br&gt;
Run the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;modprobe br_netfilter
modprobe overlay
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | tee /etc/modules-load.d/k8s.conf
br_netfilter
overlay
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now configure sysctl to prepare the OS for Kubernetes and, of course, to harden it for security.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | tee /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.default.send_redirects=0
net.ipv4.conf.default.accept_source_route=0
net.ipv4.conf.all.accept_redirects=0
net.ipv4.conf.default.accept_redirects=0
net.ipv4.conf.all.log_martians=1
net.ipv4.conf.default.log_martians=1
net.ipv4.conf.all.rp_filter=1
net.ipv4.conf.default.rp_filter=1
net.ipv6.conf.all.accept_ra=0
net.ipv6.conf.default.accept_ra=0
net.ipv6.conf.all.accept_redirects=0
net.ipv6.conf.default.accept_redirects=0
kernel.keys.root_maxbytes=25000000
kernel.keys.root_maxkeys=1000000
kernel.panic=10
kernel.panic_on_oops=1
vm.overcommit_memory=1
vm.panic_on_oom=0
net.ipv4.ip_local_reserved_ports=30000-32767
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-arptables=1
net.bridge.bridge-nf-call-ip6tables=1
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After sysctl is configured, run the command below for the changes to take effect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sysctl &lt;span class="nt"&gt;--system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h3&gt;
  
  
  Firewall Pre-requisites
&lt;/h3&gt;

&lt;p&gt;There are some ports used by rke2 and Kubernetes services that must be opened between nodes. The list of ports can be found at "&lt;a href="https://docs.rke2.io/install/requirements" rel="noopener noreferrer"&gt;https://docs.rke2.io/install/requirements&lt;/a&gt;".&lt;br&gt;
For simplicity, we've allowed all traffic between cluster nodes and allowed only a few ports to be accessed through the load balancer.&lt;br&gt;
First, flush all nftables rules.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nft flush ruleset
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then add the following content to the default nftables config file (on Debian, &lt;code&gt;/etc/nftables.conf&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/sbin/nft -f&lt;/span&gt;

flush ruleset

table inet filter &lt;span class="o"&gt;{&lt;/span&gt;
  chain input &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;type &lt;/span&gt;filter hook input priority filter&lt;span class="p"&gt;;&lt;/span&gt; policy accept&lt;span class="p"&gt;;&lt;/span&gt;
    tcp dport 22 accept&lt;span class="p"&gt;;&lt;/span&gt;
    ct state established,related accept&lt;span class="p"&gt;;&lt;/span&gt;
    iifname &lt;span class="s2"&gt;"lo"&lt;/span&gt; accept&lt;span class="p"&gt;;&lt;/span&gt;
    ip protocol icmp accept&lt;span class="p"&gt;;&lt;/span&gt;
    ip daddr 8.8.8.8 tcp dport 53 accept&lt;span class="p"&gt;;&lt;/span&gt;
    ip daddr 8.8.8.8 udp dport 53 accept&lt;span class="p"&gt;;&lt;/span&gt;
    ip daddr 8.8.4.4 tcp dport 53 accept&lt;span class="p"&gt;;&lt;/span&gt;
    ip daddr 8.8.4.4 udp dport 53 accept&lt;span class="p"&gt;;&lt;/span&gt;
    ip daddr 1.1.1.1 tcp dport 53 accept&lt;span class="p"&gt;;&lt;/span&gt;
    ip daddr 1.1.1.1 udp dport 53 accept&lt;span class="p"&gt;;&lt;/span&gt;
    ip saddr 192.168.100.11 accept&lt;span class="p"&gt;;&lt;/span&gt;
    ip saddr 192.168.100.12 accept&lt;span class="p"&gt;;&lt;/span&gt;
    ip saddr 192.168.100.13 accept&lt;span class="p"&gt;;&lt;/span&gt;
    ip saddr 192.168.100.14 accept&lt;span class="p"&gt;;&lt;/span&gt;
    ip saddr 192.168.100.15 accept&lt;span class="p"&gt;;&lt;/span&gt;
    ip saddr 192.168.100.16 accept&lt;span class="p"&gt;;&lt;/span&gt;
    ip saddr 192.168.100.100 tcp dport &lt;span class="o"&gt;{&lt;/span&gt;9345,6443,443,80&lt;span class="o"&gt;}&lt;/span&gt; accept&lt;span class="p"&gt;;&lt;/span&gt;
    ip saddr 192.168.100.100 udp dport &lt;span class="o"&gt;{&lt;/span&gt;9345,6443,443,80&lt;span class="o"&gt;}&lt;/span&gt; accept&lt;span class="p"&gt;;&lt;/span&gt;
    counter packets 0 bytes 0 drop&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  chain forward &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;type &lt;/span&gt;filter hook forward priority filter&lt;span class="p"&gt;;&lt;/span&gt; policy accept&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  chain output &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;type &lt;/span&gt;filter hook output priority filter&lt;span class="p"&gt;;&lt;/span&gt; policy accept&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the firewall config above, all traffic between cluster nodes is allowed, and only four ports (9345, 6443, 443, and 80) may reach the cluster through the load balancer.&lt;br&gt;
&lt;strong&gt;NOTE&lt;/strong&gt;: The SSH port 22 has been opened to all sources.&lt;br&gt;
Enable and restart the nftables service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;nftables
systemctl restart nftables
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h2&gt;
  
  
  Install Rancher rke2
&lt;/h2&gt;

&lt;p&gt;So far, we've prepared our machines; it's time to deploy rke2.&lt;br&gt;
The subsequent commands will configure and install Rancher rke2.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Control Plane 1
&lt;/h2&gt;


&lt;h3&gt;
  
  
  Set up Rancher configs
&lt;/h3&gt;

&lt;p&gt;Create the Rancher configuration directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/rancher/rke2/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need to change the default options and arguments of rke2. To do that, create a file named &lt;strong&gt;config.yaml&lt;/strong&gt; in &lt;code&gt;/etc/rancher/rke2/&lt;/code&gt; and put the lines below in it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;write-kubeconfig-mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0644"&lt;/span&gt;
&lt;span class="na"&gt;advertise-address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.100.100&lt;/span&gt;
&lt;span class="na"&gt;node-name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kuber-master-1&lt;/span&gt;
&lt;span class="na"&gt;tls-san&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;192.168.100.100&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;rancher.arman-projects.com&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;kubernetes.arman-projects.com&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;rke2.arman-projects.com&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;k8s.arman-projects.com&lt;/span&gt;
&lt;span class="na"&gt;cni&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;none&lt;/span&gt;
&lt;span class="na"&gt;cluster-cidr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.100.0.0/16&lt;/span&gt;
&lt;span class="na"&gt;service-cidr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.110.0.0/16&lt;/span&gt;
&lt;span class="na"&gt;cluster-dns&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.110.0.10&lt;/span&gt;
&lt;span class="na"&gt;cluster-domain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arman-projects.com&lt;/span&gt;
&lt;span class="na"&gt;etcd-arg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--quota-backend-bytes&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;2048000000"&lt;/span&gt;
&lt;span class="na"&gt;etcd-snapshot-schedule-cron&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;3&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*"&lt;/span&gt;
&lt;span class="na"&gt;etcd-snapshot-retention&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
&lt;span class="na"&gt;disable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;rke2-ingress-nginx&lt;/span&gt;
&lt;span class="na"&gt;disable-kube-proxy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;kube-apiserver-arg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--default-not-ready-toleration-seconds=30'&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--default-unreachable-toleration-seconds=30'&lt;/span&gt;
&lt;span class="na"&gt;kube-controller-manager-arg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--node-monitor-period=4s'&lt;/span&gt;
&lt;span class="na"&gt;kubelet-arg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--node-status-update-frequency=4s'&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--max-pods=100'&lt;/span&gt;
&lt;span class="na"&gt;egress-selector-mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disabled&lt;/span&gt;
&lt;span class="na"&gt;protect-kernel-defaults&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's explain the above options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;write-kubeconfig-mode&lt;/strong&gt;: The permission of the generated kubeconfig file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;advertise-address&lt;/strong&gt;: Kubernetes API server address that all nodes must connect to.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;node-name&lt;/strong&gt;: A unique name for this node. Rancher uses it to identify the node, so it must be unique; it's recommended to use the server hostname rather than a random name.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;tls-san&lt;/strong&gt;: Additional addresses placed in the Kubernetes API server certificate. kubectl trusts connections to these addresses; any others are not trusted.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;cni&lt;/strong&gt;: The CNI plugin that must get installed. Here we put none which indicates no CNI should be installed.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;cluster-cidr&lt;/strong&gt;: The CIDR used in pods.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;service-cidr&lt;/strong&gt;: The CIDR used in services.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;cluster-dns&lt;/strong&gt;: The CoreDNS service address.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;cluster-domain&lt;/strong&gt;: The Kubernetes cluster domain. The default value is cluster.local.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;etcd-arg&lt;/strong&gt;: Extra etcd arguments. Here we've raised etcd's database size quota (quota-backend-bytes) above the default, to roughly 2 GB.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;etcd-snapshot-schedule-cron&lt;/strong&gt;: Specifies when to perform an etcd snapshot.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;etcd-snapshot-retention&lt;/strong&gt;: Specifies how many snapshots will be kept.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;disable&lt;/strong&gt;: instructs rke2 to not deploy the specified add-ons. Here we've disabled the nginx ingress add-on.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;disable-kube-proxy&lt;/strong&gt;: Whether or not to deploy kube-proxy. We've disabled it here so that Cilium can take over its role instead.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;kube-apiserver-arg&lt;/strong&gt;: Specifies kube api server arguments. Here we've set default-not-ready-toleration-seconds and default-unreachable-toleration-seconds to 30 seconds. The default value is 300 seconds, so in order to reschedule pods faster and maintain service availability, the default values have been decreased.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;kube-controller-manager-arg&lt;/strong&gt;: Sets controller manager arguments. node-monitor-period is the interval at which the controller manager checks node (kubelet) status.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;kubelet-arg&lt;/strong&gt;: Sets kubelet arguments. node-status-update-frequency specifies the time in which node status is updated and max-pods is the maximum number of pods allowed to run on that node.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;egress-selector-mode&lt;/strong&gt;: Disables rke2's egress selector. By default this is set to agent, and rke2 servers establish a tunnel to communicate with nodes so that new connections aren't opened over and over. In some cases this mode causes routing issues in the cluster, so it's been disabled in our scenario.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;protect-kernel-defaults&lt;/strong&gt;: Makes kubelet compare its expected kernel parameters with the OS values; if they differ, kubelet exits with an error instead of changing them.

&lt;/li&gt;
&lt;/ul&gt;
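To put the cluster-cidr and service-cidr sizes above in perspective: a /16 leaves 16 host bits, i.e. 2^16 addresses for pods (and the same again for services):

```shell
# Number of addresses in a /16 block: 2^(32-16)
echo $(( 1 << (32 - 16) ))
```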

&lt;h3&gt;
  
  
  (Optional) Taint Control Plane nodes
&lt;/h3&gt;

&lt;p&gt;It's preferable to taint the control plane nodes so that workload pods won't get scheduled on them. To do so, edit &lt;code&gt;/etc/rancher/rke2/config.yaml&lt;/code&gt; and add the following lines to it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;node-taint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CriticalAddonsOnly=true:NoExecute"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h3&gt;
  
  
  (Optional) Set up offline files
&lt;/h3&gt;

&lt;p&gt;To speed up the bootstrapping process, or to install Rancher in an air-gapped environment, we may want to download the image files and put them on our nodes.&lt;br&gt;
&lt;strong&gt;NOTE&lt;/strong&gt;: Using offline files is useful but optional. If you've got a poor internet connection or are installing in an air-gapped environment, you should use the offline files.&lt;/p&gt;

&lt;p&gt;Go to "&lt;a href="https://github.com/rancher/rke2/releases" rel="noopener noreferrer"&gt;https://github.com/rancher/rke2/releases&lt;/a&gt;" and select a release. After that, you should download &lt;strong&gt;&lt;em&gt;rke2-images-core.linux-amd64.tar.gz&lt;/em&gt;&lt;/strong&gt;.&lt;br&gt;
Create the Rancher data directory to put the compressed offline images.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /var/lib/rancher/rke2/agent/images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Put the compressed files in the Rancher data directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mv  &lt;/span&gt;rke2-images-core.linux-amd64.tar.gz /var/lib/rancher/rke2/agent/images/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h3&gt;
  
  
  Install rke2
&lt;/h3&gt;

&lt;p&gt;Download the rke2 installer script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sfL&lt;/span&gt; https://get.rke2.io &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; install_rke2.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Execute the script with your desired settings.&lt;br&gt;
Here we've specified two environment variables: the rke2 version and the installation type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod &lt;/span&gt;ug+x install_rke2.sh
&lt;span class="nv"&gt;INSTALL_RKE2_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"v1.29.4+rke2r1"&lt;/span&gt; &lt;span class="nv"&gt;INSTALL_RKE2_TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"server"&lt;/span&gt; ./install_rke2.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: In this tutorial, we've installed rke2 version 1.29.4. You can change it in your environment.&lt;/p&gt;

&lt;p&gt;You should now have two new services available on your system: rke2-server and rke2-agent.&lt;/p&gt;

&lt;p&gt;For server nodes, we just need rke2-server, so you should disable and mask the rke2-agent service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl disable rke2-agent &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; systemctl mask rke2-agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's time to initialize our first cluster node. Start the rke2-server service, which will bootstrap your control plane node.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; rke2-server.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: This will take some time depending on your network bandwidth.&lt;/p&gt;

&lt;p&gt;If no error occurred, verify the rke2 status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl status rke2-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If a problem shows up, check the logs to troubleshoot it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;journalctl &lt;span class="nt"&gt;-u&lt;/span&gt; rke2-server &lt;span class="nt"&gt;-f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check the pods in the cluster, you'll need to use kubectl.&lt;br&gt;
The Rancher installation has downloaded the necessary binaries (e.g. kubectl, ctr, containerd, ...) and put them in &lt;code&gt;/var/lib/rancher/rke2/bin&lt;/code&gt;. Add this directory to your PATH (assuming you're using bash).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'PATH=$PATH:/var/lib/rancher/rke2/bin'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.bashrc
&lt;span class="nb"&gt;source&lt;/span&gt; ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
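The ~/.bashrc line above simply appends the rke2 bin directory to PATH; the effect can also be tried non-persistently in the current shell:

```shell
# Append the rke2 tools directory for this shell session only.
PATH="$PATH:/var/lib/rancher/rke2/bin"
# The directory is now the last PATH entry:
echo "$PATH" | tr ':' '\n' | tail -n 1
```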



&lt;p&gt;Also, Rancher has generated a &lt;strong&gt;KUBECONFIG&lt;/strong&gt; file that kubectl can use. By default it points at 127.0.0.1 as the API server, which in our case should be changed to the external address of our load balancer (i.e. 192.168.100.100). Proceed with the following steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; ~/.kube
&lt;span class="nb"&gt;cp&lt;/span&gt; /etc/rancher/rke2/rke2.yaml ~/.kube/config
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/127.0.0.1/192.168.100.100/g'&lt;/span&gt; ~/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you cat &lt;code&gt;~/.kube/config&lt;/code&gt;, you'll have something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasgv4z41iammn30rwuq5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasgv4z41iammn30rwuq5.png" alt="Image description" width="800" height="216"&gt;&lt;/a&gt;&lt;br&gt;
You should now be able to use kubectl to check the status of your cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: If you use kubectl, you'll see that the nodes are in &lt;strong&gt;NotReady&lt;/strong&gt; state; that's because no CNI plugin is installed yet.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Install Cilium
&lt;/h3&gt;

&lt;p&gt;We're ready to install Cilium as the CNI in our cluster. &lt;/p&gt;

&lt;p&gt;The first step is to install the Cilium CLI. To do so, run the following commands to download and install it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;CILIUM_CLI_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;CLI_ARCH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;amd64
curl &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="nt"&gt;--remote-name-all&lt;/span&gt; https://github.com/cilium/cilium-cli/releases/download/&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CILIUM_CLI_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/cilium-linux-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CLI_ARCH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.tar.gz&lt;span class="o"&gt;{&lt;/span&gt;,.sha256sum&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="nb"&gt;sha256sum&lt;/span&gt; &lt;span class="nt"&gt;--check&lt;/span&gt; cilium-linux-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CLI_ARCH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.tar.gz.sha256sum
&lt;span class="nb"&gt;tar &lt;/span&gt;xzvfC cilium-linux-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CLI_ARCH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.tar.gz /usr/local/bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: At the time of writing, the stable Cilium version is 1.15.4 and the Cilium CLI version is 0.16.6.&lt;/p&gt;

&lt;p&gt;Add the following config to a file named &lt;strong&gt;cilium.yaml&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cluster:
  name: cluster-1
  &lt;span class="nb"&gt;id&lt;/span&gt;: 10
prometheus:
  enabled: &lt;span class="nb"&gt;true
  &lt;/span&gt;serviceMonitor:
    enabled: &lt;span class="nb"&gt;false
&lt;/span&gt;dashboards:
  enabled: &lt;span class="nb"&gt;true
&lt;/span&gt;hubble:
  metrics:
    enabled:
    - dns:query&lt;span class="p"&gt;;&lt;/span&gt;ignoreAAAA
    - drop
    - tcp
    - flow
    - icmp
    - http
    dashboards:
      enabled: &lt;span class="nb"&gt;true
  &lt;/span&gt;relay:
    enabled: &lt;span class="nb"&gt;true
    &lt;/span&gt;prometheus:
      enabled: &lt;span class="nb"&gt;true
  &lt;/span&gt;ui:
    enabled: &lt;span class="nb"&gt;true
    &lt;/span&gt;baseUrl: &lt;span class="s2"&gt;"/"&lt;/span&gt;
version: 1.15.4
operator:
  prometheus:
    enabled: &lt;span class="nb"&gt;true
  &lt;/span&gt;dashboards:
    enabled: &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: The Cilium version here is 1.15.4. You can change it if a newer version is available, but be sure to review each version's changelog first.&lt;/p&gt;

&lt;p&gt;Apply the config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cilium &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; cilium.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the status of Cilium:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cilium status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now your node should be in the &lt;strong&gt;Ready&lt;/strong&gt; state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, Cilium uses its own IP CIDR for pods rather than the one configured during cluster bootstrapping. To change this behavior, edit the Cilium config map:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system edit cm cilium-config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is a line like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;ipam&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the value to "kubernetes":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;ipam&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save your changes and exit.&lt;/p&gt;

&lt;p&gt;Pods that are already running still use the default Cilium IPAM. To apply the new setting, either reboot the node or restart the Cilium resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system rollout restart deployment cilium-operator
kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system rollout restart ds cilium
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: Check the other pods too. If any are still using the old IPs, restart them as well.&lt;/p&gt;

&lt;p&gt;Our first Control plane node is ready. &lt;/p&gt;

&lt;p&gt;Next, we'll proceed to join the other nodes to our initialized cluster.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Control Plane 2 &amp;amp; Control Plane 3
&lt;/h2&gt;

&lt;p&gt;The procedure for joining the other server nodes is mostly the same as bootstrapping the first one. The only difference is specifying which server node the new node should join. You can therefore follow this section for server nodes 2, 3, and so on.&lt;/p&gt;

&lt;p&gt;Let's go through the steps.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Set up Rancher configs
&lt;/h3&gt;

&lt;p&gt;We need to change the default options and arguments of rke2.&lt;/p&gt;

&lt;p&gt;Create the Rancher configuration directory, then create and edit the Rancher config file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/rancher/rke2/
vim /etc/rancher/rke2/config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The configuration options are the same as on the first server node. The only difference is the first two lines, &lt;strong&gt;server&lt;/strong&gt; and &lt;strong&gt;token&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;server&lt;/strong&gt;: This is the fixed registration address: a layer 4 load balancer that distributes requests across our server nodes (the nodes running the rke2-server service).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;token&lt;/strong&gt;: The cluster join token, generated when we bootstrapped the first server node. Its value can be read from &lt;code&gt;/var/lib/rancher/rke2/server/node-token&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
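<p>For example, you can read the token on the first server node like this (keep the value secret; it authorizes nodes to join the cluster):<br>
</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cat /var/lib/rancher/rke2/server/node-token
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;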

&lt;p&gt;Add these lines to the config.yaml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://192.168.100.100:9345&lt;/span&gt;
&lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;XXXXXXXXXX&lt;/span&gt;
&lt;span class="na"&gt;node-name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;node-name&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;.&lt;/span&gt;
&lt;span class="s"&gt;.&lt;/span&gt;
&lt;span class="s"&gt;.&lt;/span&gt;
&lt;span class="s"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: We've omitted the remaining lines, as they're the same as on the first node.&lt;br&gt;
&lt;strong&gt;NOTE&lt;/strong&gt;: Replace &lt;code&gt;&amp;lt;node-name&amp;gt;&lt;/code&gt; with the relevant hostname of each server.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  (Optional) Set up offline files
&lt;/h3&gt;

&lt;p&gt;Go to "&lt;a href="https://github.com/rancher/rke2/releases" rel="noopener noreferrer"&gt;https://github.com/rancher/rke2/releases&lt;/a&gt;" and download &lt;strong&gt;&lt;em&gt;rke2-images-core.linux-amd64.tar.gz&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Create the Rancher data directory and move the compressed offline images into it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /var/lib/rancher/rke2/agent/images
&lt;span class="nb"&gt;mv  &lt;/span&gt;rke2-images-core.linux-amd64.tar.gz /var/lib/rancher/rke2/agent/images/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h3&gt;
  
  
  Install rke2
&lt;/h3&gt;

&lt;p&gt;Download the rke2 installer script, make it executable, and install the rke2 server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sfL&lt;/span&gt; https://get.rke2.io &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; install_rke2.sh
&lt;span class="nb"&gt;chmod &lt;/span&gt;ug+x install_rke2.sh
&lt;span class="nv"&gt;INSTALL_RKE2_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"v1.29.4+rke2r1"&lt;/span&gt; &lt;span class="nv"&gt;INSTALL_RKE2_TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"server"&lt;/span&gt; ./install_rke2.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Disable and mask the rke2-agent service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl disable rke2-agent &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; systemctl mask rke2-agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start rke2-server service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; rke2-server.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If no error occurred, verify the Rancher status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl status rke2-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In case a problem has arisen, check the logs to troubleshoot it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;journalctl &lt;span class="nt"&gt;-u&lt;/span&gt; rke2-server &lt;span class="nt"&gt;-f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h2&gt;
  
  
  Worker 1
&lt;/h2&gt;

&lt;p&gt;It's time to join our first agent (worker) node to the cluster. All Kubernetes workloads are handled by the worker nodes.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Set up Rancher configs
&lt;/h3&gt;

&lt;p&gt;Create the Rancher configuration directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/rancher/rke2/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need to change the default options and arguments of the rke2 agent. For that purpose, create the Rancher config file named &lt;strong&gt;config.yaml&lt;/strong&gt; and add the lines below to it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://192.168.100.100:9345&lt;/span&gt;
&lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;XXXXXXXXXX&lt;/span&gt;
&lt;span class="na"&gt;node-name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kuber-worker-1&lt;/span&gt;
&lt;span class="na"&gt;kubelet-arg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--node-status-update-frequency=4s'&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--max-pods=100'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's explain the above options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;server&lt;/strong&gt;: As mentioned in previous sections, this is the fixed registration address: the layer 4 load balancer that distributes requests across our server nodes (the nodes running the rke2-server service).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;token&lt;/strong&gt;: The cluster join token, generated when we bootstrapped the first server node. Its value can be read from &lt;code&gt;/var/lib/rancher/rke2/server/node-token&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kubelet-arg&lt;/strong&gt;: Kubelet-specific arguments. Note that these match the kubelet arguments on the server nodes; they are kept the same across the cluster. Here we specify two: one sets how frequently the node reports its status to the kube-apiserver, and the other caps the number of pods allowed on a single node.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  (Optional) Set up offline files
&lt;/h3&gt;

&lt;p&gt;Like the previous parts, we can provide offline files to install in an air-gapped environment or to make bootstrapping faster.&lt;/p&gt;

&lt;p&gt;Go to "&lt;a href="https://github.com/rancher/rke2/releases" rel="noopener noreferrer"&gt;https://github.com/rancher/rke2/releases&lt;/a&gt;" and select a release. After that you should download &lt;strong&gt;&lt;em&gt;rke2-images-core.linux-amd64.tar.gz&lt;/em&gt;&lt;/strong&gt;.&lt;br&gt;
Create Rancher data directory to put the compressed offline images.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /var/lib/rancher/rke2/agent/images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Put the compressed files in the Rancher data directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mv  &lt;/span&gt;rke2-images-core.linux-amd64.tar.gz /var/lib/rancher/rke2/agent/images/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h3&gt;
  
  
  Install rke2
&lt;/h3&gt;

&lt;p&gt;Download the rke2 installer script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sfL&lt;/span&gt; https://get.rke2.io &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; install_rke2.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make the script executable and run it with your arguments. Note that since we're installing an agent node, you must specify &lt;strong&gt;agent&lt;/strong&gt; mode in the arguments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod &lt;/span&gt;ug+x install_rke2.sh
&lt;span class="nv"&gt;INSTALL_RKE2_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"v1.29.4+rke2r1"&lt;/span&gt; &lt;span class="nv"&gt;INSTALL_RKE2_TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"agent"&lt;/span&gt; ./install_rke2.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Disable and mask the rke2-server service as there is no need for it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl disable rke2-server &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; systemctl mask rke2-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we're ready to join the first agent node to the cluster. Start the rke2-agent service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; rke2-agent.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: This will take some time depending on your network bandwidth.&lt;/p&gt;

&lt;p&gt;If no error occurred, verify the Rancher status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl status rke2-agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In case an error has appeared, check the logs to troubleshoot it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;journalctl &lt;span class="nt"&gt;-u&lt;/span&gt; rke2-agent &lt;span class="nt"&gt;-f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h2&gt;
  
  
  Worker 2 &amp;amp; Worker 3
&lt;/h2&gt;

&lt;p&gt;The instructions for joining the second and subsequent worker nodes are exactly the same as for the first one. The only difference is the node name.&lt;/p&gt;

&lt;p&gt;In the Rancher configuration file located at &lt;code&gt;/etc/rancher/rke2/config.yaml&lt;/code&gt;, the &lt;strong&gt;node-name&lt;/strong&gt; option must be unique across nodes (e.g. kuber-worker-2, kuber-worker-3). For everything else, follow the &lt;strong&gt;Worker 1&lt;/strong&gt; guide.&lt;/p&gt;

&lt;p&gt;Congratulations! We now have a working Rancher Kubernetes cluster with three control plane nodes and three worker nodes. &lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Deploy MetalLB
&lt;/h2&gt;

&lt;p&gt;The last part covers deploying MetalLB as a load balancer and IP pool management service. If you already have a cloud load balancer, you can skip the rest of the tutorial.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Install MetalLB
&lt;/h3&gt;

&lt;p&gt;For this part, we'll use Helm to deploy MetalLB. The first thing to do is add the MetalLB Helm repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add metallb https://metallb.github.io/metallb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now install MetalLB:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt; metallb-system metallb metallb/metallb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the pods are all in the running state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; metallb-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h3&gt;
  
  
  Configure IP Address Pool
&lt;/h3&gt;

&lt;p&gt;Let's assume we have an IP range starting at 192.168.100.50 and ending at 192.168.100.55. To create a pool for this range and have MetalLB assign its IPs to services, create a yaml file (e.g. main-pool.yaml) with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metallb.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IPAddressPool&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main-pool&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metallb-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;avoidBuggyIPs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;192.168.100.50-192.168.100.55&lt;/span&gt;
  &lt;span class="na"&gt;serviceAllocation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;serviceSelectors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;ip-pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main-pool&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: We've added the &lt;strong&gt;serviceSelectors&lt;/strong&gt; section so that IPs are only assigned to services with a matching label.&lt;/p&gt;
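&lt;p&gt;As an illustration, a Service that would receive an address from this pool could look like the following (a hypothetical nginx Service; the label must match the pool's &lt;strong&gt;serviceSelectors&lt;/strong&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
  labels:
    ip-pool: main-pool
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;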

&lt;p&gt;Apply the pool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; main-pool.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;MetalLB will assign an IP to any service of type LoadBalancer that carries the &lt;strong&gt;ip-pool: main-pool&lt;/strong&gt; label.&lt;br&gt;
The only remaining step is to advertise our IP pool across the cluster. Use the &lt;strong&gt;L2Advertisement&lt;/strong&gt; kind provided by MetalLB to do so.&lt;/p&gt;

&lt;p&gt;Create a yaml file (e.g. l2advertisement.yaml) and add the following manifest to it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metallb.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;L2Advertisement&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main-advertisement&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metallb-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ipAddressPools&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main-pool&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: If there were more than one pool, we could advertise them all with this single manifest by listing the pool names under the &lt;strong&gt;ipAddressPools&lt;/strong&gt; section.&lt;/p&gt;

&lt;p&gt;Apply the manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; l2advertisement.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Everything is ready for you to deploy your workload into this cluster.&lt;br&gt;
Good luck:)&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this article, we showed how to use Rancher rke2 to deploy a Kubernetes cluster across six Debian nodes with the firewall enabled. We also covered deploying Cilium as the CNI and having it completely replace kube-proxy, both to increase speed and to gain more observability via Cilium's tools.&lt;br&gt;
This article also showed how to deploy MetalLB to manage IP pools and load balance traffic for them.&lt;br&gt;
Throughout this guide, we assumed that we have an external load balancer that will distribute traffic to our workload and control plane nodes.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;For further information, please visit the official rke2 and MetalLB documentation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.rke2.io/" rel="noopener noreferrer"&gt;https://docs.rke2.io/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://metallb.universe.tf/configuration/" rel="noopener noreferrer"&gt;https://metallb.universe.tf/configuration/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://artifacthub.io/packages/helm/metallb/metallb" rel="noopener noreferrer"&gt;https://artifacthub.io/packages/helm/metallb/metallb&lt;/a&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You may also want to read my other article about deploying an Nginx load balancer for Rancher:&lt;br&gt;
&lt;a href="https://dev.to/arman-shafiei/deploy-nginx-load-balancer-for-rancher-lgk"&gt;https://dev.to/arman-shafiei/deploy-nginx-load-balancer-for-rancher-lgk&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Thank you for reading this post. Please leave a comment if you have any suggestions or notice an issue with this article.&lt;/p&gt;

</description>
      <category>rancher</category>
      <category>metallb</category>
      <category>cilium</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Write Helm chart from scratch</title>
      <dc:creator>Arman Shafiei</dc:creator>
      <pubDate>Mon, 01 Apr 2024 10:05:16 +0000</pubDate>
      <link>https://forem.com/arman-shafiei/deploy-helm-chart-from-scratch-aaa</link>
      <guid>https://forem.com/arman-shafiei/deploy-helm-chart-from-scratch-aaa</guid>
      <description>&lt;p&gt;These days Kubernetes has become so popular and of course more complex. Deploying and managing it has become harder and requires lots of time and effort. One of the most useful tools that can help us to make it easy for us is &lt;strong&gt;Helm&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Helm is known as the package manager for Kubernetes. You can use it to deploy a large Kubernetes application with a single install command. It makes our deployments easier and more maintainable than using plain &lt;strong&gt;kubectl&lt;/strong&gt; commands to deploy everything ourselves.&lt;/p&gt;

&lt;p&gt;We usually use charts provided by other developers that are already available for Helm, but sometimes you need to write your own. That’s what this article is about: we’re going to write a chart from scratch and deploy it on our Kubernetes cluster.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;For this article, we will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Redhat-based or Debian-based OS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;K8s cluster version 1.24&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helm version 3.9&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Nginx Ingress version 4.2&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access to the internet&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; The requirements mentioned above aren’t mandatory; however, this article was tested with those exact tools.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Verify requirements
&lt;/h2&gt;

&lt;p&gt;The first thing we’re going to do is check the versions of our tools.&lt;/p&gt;

&lt;p&gt;Run the commands and check the output.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# cat /etc/*-release&lt;/span&gt;
CentOS Linux release 7.9.2009 &lt;span class="o"&gt;(&lt;/span&gt;Core&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"CentOS Linux"&lt;/span&gt;
&lt;span class="nv"&gt;VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"7 (Core)"&lt;/span&gt;
&lt;span class="nv"&gt;ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"centos"&lt;/span&gt;
&lt;span class="nv"&gt;ID_LIKE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"rhel fedora"&lt;/span&gt;
&lt;span class="nv"&gt;VERSION_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"7"&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt;
CentOS Linux release 7.9.2009 &lt;span class="o"&gt;(&lt;/span&gt;Core&lt;span class="o"&gt;)&lt;/span&gt;
CentOS Linux release 7.9.2009 &lt;span class="o"&gt;(&lt;/span&gt;Core&lt;span class="o"&gt;)&lt;/span&gt;

kubeadm version
kubeadm version: &amp;amp;version.Info&lt;span class="o"&gt;{&lt;/span&gt;Major:&lt;span class="s2"&gt;"1"&lt;/span&gt;, Minor:&lt;span class="s2"&gt;"24"&lt;/span&gt;, GitVersion:&lt;span class="s2"&gt;"v1.24.3"&lt;/span&gt;, GitCommit:&lt;span class="s2"&gt;"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb"&lt;/span&gt;, GitTreeState:&lt;span class="s2"&gt;"clean"&lt;/span&gt;, BuildDate:&lt;span class="s2"&gt;"2022-07-13T14:29:09Z"&lt;/span&gt;, GoVersion:&lt;span class="s2"&gt;"go1.18.3"&lt;/span&gt;, Compiler:&lt;span class="s2"&gt;"gc"&lt;/span&gt;, Platform:&lt;span class="s2"&gt;"linux/amd64"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;#############################################################&lt;/span&gt;

helm version
version.BuildInfo&lt;span class="o"&gt;{&lt;/span&gt;Version:&lt;span class="s2"&gt;"v3.9.2"&lt;/span&gt;, GitCommit:&lt;span class="s2"&gt;"1addefbfe665c350f4daf868a9adc5600cc064fd"&lt;/span&gt;, GitTreeState:&lt;span class="s2"&gt;"clean"&lt;/span&gt;, GoVersion:&lt;span class="s2"&gt;"go1.17.12"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Also, we have two nodes in our cluster.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#------------#----------#-----------------#
|    Host    |   Role   |   IP Address    |
#------------#----------#-----------------#
|   node-1   |  master  |  192.168.24.66  |
|   node-2   |  worker  |  192.168.24.118 |
#------------#----------#-----------------#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create Helm files
&lt;/h2&gt;

&lt;p&gt;First, create a directory and name it test. This is the name of our Helm project.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~
&lt;span class="nb"&gt;mkdir test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Second, create two YAML files named &lt;strong&gt;Chart.yaml&lt;/strong&gt; and &lt;strong&gt;values.yaml&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;touch &lt;/span&gt;values.yaml Chart.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Third, create a &lt;strong&gt;“.helmignore”&lt;/strong&gt; file to exclude files we don’t need in our chart.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;touch&lt;/span&gt; .helmignore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Fourth, inside the test directory, create two other directories: &lt;strong&gt;charts&lt;/strong&gt; and &lt;strong&gt;templates&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;charts templates
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, create YAML files for our app inside the templates directory.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;templates
&lt;span class="nb"&gt;touch &lt;/span&gt;deployment.yaml service.yaml ingress.yaml&lt;span class="se"&gt;\&lt;/span&gt;
      configmap.yaml secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
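&lt;p&gt;After these steps, the chart layout should look roughly like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test/
├── Chart.yaml
├── values.yaml
├── .helmignore
├── charts/
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    ├── configmap.yaml
    └── secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;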



&lt;h2&gt;
  
  
  Write files
&lt;/h2&gt;



&lt;h3&gt;
  
  
  Write deployment.yaml
&lt;/h3&gt;

&lt;p&gt;Open the &lt;strong&gt;deployment.yaml&lt;/strong&gt; file inside the &lt;strong&gt;templates&lt;/strong&gt; directory and add these lines to it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Chart.Name&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Chart.Name&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Chart.AppVersion&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/managed-by&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;helm&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Values.replicas&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Chart.Name&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Chart.Name&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;python&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.io/armanshafiei/python-command-executor:latest&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
           &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Values.config.port&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
             &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
          &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/"&lt;/span&gt;
              &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Values.config.port&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
            &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
            &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
          &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;- include "vol_conf_mount" . | nindent 10&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
      &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;- include "vol_conf_define" . | nindent 6&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Assuming you know how to write Kubernetes manifests, let’s explain some parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Line 4: &lt;strong&gt;“.Chart.Name”&lt;/strong&gt; adds the value of the variable &lt;strong&gt;name&lt;/strong&gt; we’re going to define in &lt;strong&gt;Chart.yaml&lt;/strong&gt; file. It is the name of our Chart or App.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Line 7: &lt;strong&gt;“.Chart.AppVersion”&lt;/strong&gt; adds the value of variable &lt;strong&gt;appVersion&lt;/strong&gt; from &lt;strong&gt;Chart.yaml&lt;/strong&gt;. It is the version of our App.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Line 8: Defines that this deployment is managed and created by Helm.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Line 10: &lt;strong&gt;“.Values.replicas”&lt;/strong&gt; Adds the value of the &lt;strong&gt;replicas&lt;/strong&gt; variable that we’ll define in &lt;strong&gt;values.yaml&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Line 32: This imports the function &lt;strong&gt;vol_conf_mount&lt;/strong&gt; from &lt;strong&gt;_helpers.tpl&lt;/strong&gt; and indents its output by 10 spaces via &lt;strong&gt;nindent&lt;/strong&gt;. The indentation must come from &lt;strong&gt;nindent&lt;/strong&gt; because any whitespace we type before &lt;strong&gt;include&lt;/strong&gt; has no effect on the rendered output. This function enables or disables the &lt;strong&gt;volumeMounts&lt;/strong&gt; section of the deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Line 33: This imports the function &lt;strong&gt;vol_conf_define&lt;/strong&gt; from &lt;strong&gt;_helpers.tpl&lt;/strong&gt; and indents its output by 6 spaces. This function enables or disables the &lt;strong&gt;volumes&lt;/strong&gt; definition of the deployment.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
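&lt;p&gt;To make the two &lt;strong&gt;include&lt;/strong&gt; lines concrete: with &lt;strong&gt;custom_config&lt;/strong&gt; set to &lt;strong&gt;"true"&lt;/strong&gt;, the container and pod specs render roughly like this (a sketch of the expected output, assuming the helper definitions shown later in this post):&lt;/p&gt;

```yaml
# Rendered at the container level by vol_conf_mount (nindent 10):
          volumeMounts:
            - name: configs
              mountPath: "/usr/app/envs"
              readOnly: true
# Rendered at the pod-spec level by vol_conf_define (nindent 6):
      volumes:
        - name: configs
          configMap:
            name: python
```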

&lt;h3&gt;
  
  
  Write service.yaml
&lt;/h3&gt;

&lt;p&gt;Open &lt;strong&gt;service.yaml&lt;/strong&gt; inside the templates directory and add these lines to it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Chart.Name&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Chart.Name&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;data&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Values.config.port&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Values.config.port&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let’s explain some lines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Line 6: This service is of type &lt;strong&gt;LoadBalancer&lt;/strong&gt;, so it will be assigned an IP address, either by a cloud provider or by an external load balancer (e.g. MetalLB).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Line 8: Uses the deployment’s label so the service selects and routes traffic to its pods.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
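&lt;p&gt;With the values we’ll define later in &lt;strong&gt;values.yaml&lt;/strong&gt; (port &lt;strong&gt;5000&lt;/strong&gt;) and the chart name &lt;strong&gt;python&lt;/strong&gt; from &lt;strong&gt;Chart.yaml&lt;/strong&gt;, this template renders roughly to the following Service (a sketch):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: python
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: python
  ports:
    - name: data
      protocol: TCP
      port: 5000
      targetPort: 5000
```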

&lt;h3&gt;
  
  
  Write ingress.yaml
&lt;/h3&gt;

&lt;p&gt;Edit the &lt;strong&gt;ingress.yaml&lt;/strong&gt; file inside the templates directory and add these lines to it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Chart.Name&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Chart.Name&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Chart.AppVersion&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/managed-by&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;helm&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ingressClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my.example.com"&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/"&lt;/span&gt;
            &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
            &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Chart.Name&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
                &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Values.config.port&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In line 10, we define the ingress class. This is necessary because we are using the NGINX Ingress controller; without it, the ingress won’t be picked up as expected.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Write configmap.yaml
&lt;/h3&gt;

&lt;p&gt;Edit &lt;strong&gt;configmap.yaml&lt;/strong&gt; inside the templates directory and add these lines.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.Chart.Name&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;bind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;quote .Values.config.bind&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;quote .Values.config.port&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
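&lt;p&gt;The &lt;strong&gt;quote&lt;/strong&gt; function wraps the value in double quotes, which keeps the numeric port a string in the rendered manifest. With the values we define later, the rendered ConfigMap data would look roughly like this (a sketch):&lt;/p&gt;

```yaml
data:
  bind: "0.0.0.0"
  port: "5000"
```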

&lt;p&gt;The &lt;strong&gt;configmap.yaml&lt;/strong&gt; is not strictly necessary and has no effect on our app, because the Python app doesn’t read any environment variables from it. I added it only to demonstrate how ConfigMaps are used in Helm.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Write _helpers.tpl
&lt;/h3&gt;

&lt;p&gt;This file is where we define named templates (helper functions) that our charts can use. As you saw above, we used two of them in our &lt;strong&gt;deployment.yaml&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Edit &lt;strong&gt;_helpers.tpl&lt;/strong&gt; inside the templates directory and add these lines.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="err"&gt;{{/*&lt;/span&gt;
&lt;span class="err"&gt;Put&lt;/span&gt; &lt;span class="err"&gt;all&lt;/span&gt; &lt;span class="err"&gt;functions&lt;/span&gt; &lt;span class="err"&gt;here&lt;/span&gt; &lt;span class="err"&gt;please!&lt;/span&gt;
&lt;span class="err"&gt;*/}}&lt;/span&gt;
&lt;span class="err"&gt;{{-&lt;/span&gt; &lt;span class="err"&gt;define&lt;/span&gt; &lt;span class="err"&gt;"vol_conf_mount"&lt;/span&gt; &lt;span class="err"&gt;}}&lt;/span&gt;
&lt;span class="err"&gt;{{-&lt;/span&gt; &lt;span class="err"&gt;if&lt;/span&gt; &lt;span class="err"&gt;eq&lt;/span&gt; &lt;span class="err"&gt;.Values.custom_config&lt;/span&gt; &lt;span class="err"&gt;"&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="err"&gt;"&lt;/span&gt; &lt;span class="err"&gt;-}}&lt;/span&gt;
&lt;span class="err"&gt;volumeMounts:&lt;/span&gt;
            &lt;span class="err"&gt;-&lt;/span&gt; &lt;span class="err"&gt;name:&lt;/span&gt; &lt;span class="err"&gt;configs&lt;/span&gt;
              &lt;span class="err"&gt;mountPath:&lt;/span&gt; &lt;span class="err"&gt;"/usr/app/envs"&lt;/span&gt;
              &lt;span class="err"&gt;readOnly:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="err"&gt;{{-&lt;/span&gt; &lt;span class="err"&gt;end&lt;/span&gt; &lt;span class="err"&gt;}}&lt;/span&gt;
&lt;span class="err"&gt;{{-&lt;/span&gt; &lt;span class="err"&gt;end&lt;/span&gt; &lt;span class="err"&gt;}}&lt;/span&gt;
&lt;span class="err"&gt;{{-&lt;/span&gt; &lt;span class="err"&gt;define&lt;/span&gt; &lt;span class="err"&gt;"vol_conf_define"&lt;/span&gt; &lt;span class="err"&gt;}}&lt;/span&gt;
&lt;span class="err"&gt;{{-&lt;/span&gt; &lt;span class="err"&gt;if&lt;/span&gt; &lt;span class="err"&gt;eq&lt;/span&gt; &lt;span class="err"&gt;.Values.custom_config&lt;/span&gt; &lt;span class="err"&gt;"&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="err"&gt;"&lt;/span&gt; &lt;span class="err"&gt;-}}&lt;/span&gt;
&lt;span class="err"&gt;volumes:&lt;/span&gt;
          &lt;span class="err"&gt;-&lt;/span&gt; &lt;span class="err"&gt;name:&lt;/span&gt; &lt;span class="err"&gt;configs&lt;/span&gt;
            &lt;span class="err"&gt;configMap:&lt;/span&gt;
              &lt;span class="err"&gt;name:&lt;/span&gt; &lt;span class="err"&gt;{{&lt;/span&gt; &lt;span class="err"&gt;.Chart.Name&lt;/span&gt; &lt;span class="err"&gt;}}&lt;/span&gt;
&lt;span class="err"&gt;{{-&lt;/span&gt; &lt;span class="err"&gt;end&lt;/span&gt; &lt;span class="err"&gt;}}&lt;/span&gt;
&lt;span class="err"&gt;{{-&lt;/span&gt; &lt;span class="err"&gt;end&lt;/span&gt; &lt;span class="err"&gt;}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you see, there are two functions in our file: &lt;strong&gt;vol_conf_mount&lt;/strong&gt; and &lt;strong&gt;vol_conf_define&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;vol_conf_mount&lt;/strong&gt; function, spanning lines 5 to 12, contains the &lt;strong&gt;volumeMounts&lt;/strong&gt; block. If the value of &lt;strong&gt;custom_config&lt;/strong&gt; in &lt;strong&gt;values.yaml&lt;/strong&gt; equals &lt;strong&gt;true&lt;/strong&gt;, the function returns the &lt;strong&gt;volumeMounts&lt;/strong&gt; block, so we can use it to inject &lt;strong&gt;volumeMounts&lt;/strong&gt; into our manifest.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;vol_conf_define&lt;/strong&gt; function, spanning lines 14 to 21, contains the &lt;strong&gt;volumes&lt;/strong&gt; block. If the value of &lt;strong&gt;custom_config&lt;/strong&gt; in &lt;strong&gt;values.yaml&lt;/strong&gt; equals &lt;strong&gt;true&lt;/strong&gt;, the function returns the &lt;strong&gt;volumes&lt;/strong&gt; block, so we can use it to inject the volumes into our manifest.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; The first block, lines 1 to 3, is a template comment and has no effect on the rendered output. You can write anything there to document your charts or functions.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Write Chart.yaml
&lt;/h3&gt;

&lt;p&gt;This file contains information about your Helm chart itself like the version, name, apiVersion, type, etc. Each of the variables can be referenced in the manifests by using &lt;strong&gt;“.Chart.*”&lt;/strong&gt; like &lt;strong&gt;“.Chart.Name”&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Open &lt;strong&gt;Chart.yaml&lt;/strong&gt; file and add these lines.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v2&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;python"&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;This helm chart is created from scratch&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.0.0&lt;/span&gt;
&lt;span class="na"&gt;appVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.0.0"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let’s see what the values above are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Line 1: Specifies which API version Helm will use. For Helm 3 it must be v2.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Line 2: The name of our Helm chart. It can be anything related to our app functionality.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Line 3: A description of our chart.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Line 4: Defines the type of chart, either &lt;strong&gt;application&lt;/strong&gt; or &lt;strong&gt;library&lt;/strong&gt;. A library chart cannot be deployed on its own; it only provides definitions for other charts to use, whereas an application chart can be deployed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Line 5: Specifies the version of the chart. Each time we change the templates or the packaged app version, we should increase this value.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Line 6: Specifies the version of the app. Each time we change the version of the app or change its code, we should increase this value. It’s recommended to use double quotes for this value.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Write values.yaml
&lt;/h3&gt;

&lt;p&gt;Any variable we want to use inside our templates should be defined here. Because variables and their values live in this one file, we don’t need to touch the templates every time a port or environment variable changes.&lt;/p&gt;

&lt;p&gt;Open the file &lt;strong&gt;values.yaml&lt;/strong&gt; and add these lines.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
&lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;bind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.0.0.0&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5000&lt;/span&gt;
&lt;span class="na"&gt;custom_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you saw earlier in the templates, we used some variables which are defined here.&lt;/p&gt;

&lt;p&gt;Let's say you want to access the &lt;strong&gt;bind&lt;/strong&gt; variable in line 3 from a template. To do so, we reference it like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;.values.config.bind&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;&lt;span class="nv"&gt;&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
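&lt;p&gt;Note that wrapping the reference in &lt;strong&gt;quote&lt;/strong&gt; changes the rendered YAML: without it the value is emitted bare, with it the value becomes a quoted string. For example (a sketch):&lt;/p&gt;

```yaml
bind: {{ .Values.config.bind }}        # renders as: bind: 0.0.0.0
bind: {{ quote .Values.config.bind }}  # renders as: bind: "0.0.0.0"
```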



&lt;h3&gt;
  
  
  Write .helmignore
&lt;/h3&gt;

&lt;p&gt;Let’s say you are using an editor like VS Code; you don’t want files created by your editor to end up in the packaged chart. To exclude them, add the patterns you want ignored to the &lt;strong&gt;".helmignore"&lt;/strong&gt; file we created earlier. For example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;.git/
.gitignore
&lt;span class="k"&gt;*&lt;/span&gt;.swp
&lt;span class="k"&gt;*&lt;/span&gt;.bak
&lt;span class="k"&gt;*&lt;/span&gt;.tmp
.idea/
.vscode/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
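&lt;p&gt;If you prefer to script it, the same &lt;strong&gt;.helmignore&lt;/strong&gt; can be written with a heredoc (a sketch, run from the chart root):&lt;/p&gt;

```shell
# Write the ignore patterns to .helmignore in the current (chart root) directory.
cat > .helmignore <<'EOF'
.git/
.gitignore
*.swp
*.bak
*.tmp
.idea/
.vscode/
EOF
```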

&lt;p&gt;As you can see above, we added patterns for Git, VS Code, and temporary files. This ensures unnecessary files won’t be packaged into our chart.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Run the App
&lt;/h2&gt;

&lt;p&gt;Now that everything is ready, we can deploy our Helm chart. The command below assumes the chart directory is named &lt;strong&gt;python-app/&lt;/strong&gt;; if you kept the &lt;strong&gt;test&lt;/strong&gt; directory name from earlier, pass its path instead. Deploy the chart with the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; python-app python-app&lt;span class="se"&gt;\&lt;/span&gt;
     python-app/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Access the App
&lt;/h2&gt;

&lt;p&gt;We configured our Ingress object with the hostname "my.example.com", so we need to access the app via this domain. To do that, we add this line to our &lt;code&gt;/etc/hosts&lt;/code&gt; file (replace &lt;code&gt;192.168.20.2&lt;/code&gt; with your ingress controller’s external IP):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;192.168.20.2 my.example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now if we curl the domain we should get the result:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s2"&gt;"http://my.example.com/execute?command=touch&amp;amp;argument=/file1"&lt;/span&gt;
The &lt;span class="nb"&gt;command &lt;/span&gt;executed is: &lt;span class="nb"&gt;touch&lt;/span&gt; /file1
The &lt;span class="nb"&gt;exit &lt;/span&gt;status code is: 0
Machine-id: python-5f949b9dc8-9wvhh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If we run the same request again, we see that the machine-id value has changed.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s2"&gt;"http://my.example.com/execute?command=touch&amp;amp;argument=/file1"&lt;/span&gt;
The &lt;span class="nb"&gt;command &lt;/span&gt;executed is: &lt;span class="nb"&gt;touch&lt;/span&gt; /file1
The &lt;span class="nb"&gt;exit &lt;/span&gt;status code is: 0
Machine-id: python-5f949b9dc8-ljpqk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And if we exec into our containers and check the hostnames we get the following:&lt;/p&gt;

&lt;p&gt;Container 1:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4jduxev6qa452jgkate7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4jduxev6qa452jgkate7.png" alt="Container 1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Container 2:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwx5tqa2mq0vt9y5q0h7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwx5tqa2mq0vt9y5q0h7j.png" alt="Container 2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The reason is that we have two replicas of our app, each running in its own pod, so successive requests are routed to different containers. That’s it: our Helm chart is deployed and our application is running as expected.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;So far we have created a chart, deployed it, and verified the result. This chart is deliberately minimal; there is much more we could add, such as dependencies to pull other charts into ours, a NOTES.txt with guidelines about the application, and additional Kubernetes objects. As you learn more, you can build richer, more complex charts and keep improving them. So keep learning!&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;I hope you found this article useful. Please leave a comment or contact me via my page.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;For more information about Helm please visit: &lt;a href="https://helm.sh/docs/" rel="noopener noreferrer"&gt;https://helm.sh/docs/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>helm</category>
    </item>
  </channel>
</rss>
