<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Karim</title>
    <description>The latest articles on Forem by Karim (@deep75).</description>
    <link>https://forem.com/deep75</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F201107%2F3baeda33-adf9-4837-9044-54f2472af904.png</url>
      <title>Forem: Karim</title>
      <link>https://forem.com/deep75</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/deep75"/>
    <language>en</language>
    <item>
      <title>AIOps: AI-driven investigation in Kubernetes with HolmesGPT, Ollama and RunPod …</title>
      <dc:creator>Karim</dc:creator>
      <pubDate>Wed, 15 Jan 2025 00:52:40 +0000</pubDate>
      <link>https://forem.com/deep75/aiops-investigation-par-lia-dans-kubernetes-avec-holmesgpt-ollama-et-runpod--3gk0</link>
      <guid>https://forem.com/deep75/aiops-investigation-par-lia-dans-kubernetes-avec-holmesgpt-ollama-et-runpod--3gk0</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktvh7jmd9vxk1y0v7c0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktvh7jmd9vxk1y0v7c0s.png" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the world of container orchestration, Kubernetes has become the standard for managing containerized workloads. However, managing and troubleshooting Kubernetes clusters can be complex and time-consuming. This article explores how artificial intelligence (AI) can be integrated into Kubernetes to improve incident investigation and management. I already touched on this topic in a previous article:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/deep75/aiops-deboguer-son-cluster-kubernetes-en-utilisant-lintelligence-artificielle-generative-via-52hc"&gt;AIOps : Déboguer son cluster Kubernetes en utilisant l’intelligence artificielle générative via…&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here I will focus on HolmesGPT. HolmesGPT, developed by Robusta, is an open-source troubleshooting agent that uses AI to investigate incidents in Kubernetes clusters. Its key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Integration with incident-management tools&lt;/strong&gt;: HolmesGPT connects to tools such as PagerDuty, OpsGenie and Prometheus to collect data and analyze alerts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated investigation&lt;/strong&gt;: Using AI, HolmesGPT can identify and resolve issues such as expired SSL certificates, insufficient resources, and node-affinity problems, significantly reducing the time and effort needed for troubleshooting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customization&lt;/strong&gt;: HolmesGPT lets you create custom runbooks to handle specific problems, using custom APIs and tools where necessary.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/robusta-dev/holmesgpt" rel="noopener noreferrer"&gt;GitHub - robusta-dev/holmesgpt: On-Call Assistant for Prometheus Alerts - Get a head start on fixing alerts with AI investigation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this exercise, I will start by launching an Ubuntu 24.04 LTS instance, once again with the cloud provider DigitalOcean:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfo9qz3j78pfuyr0i36m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfo9qz3j78pfuyr0i36m.png" width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On it I will install Incus, a fork of LXD that will serve as the foundation for building a Kubernetes cluster from several containers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://linuxcontainers.org/incus/" rel="noopener noreferrer"&gt;Linux Containers - Incus - Introduction&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg01os4x00wqka5ivqa33.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg01os4x00wqka5ivqa33.jpg" width="524" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As with LXD, I will create several profiles. But first, let's install Incus on the instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@k0s-incus:~# curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://pkgs.zabbly.com/key.asc | gpg &lt;span class="nt"&gt;--show-keys&lt;/span&gt; &lt;span class="nt"&gt;--fingerprint&lt;/span&gt;
gpg: directory &lt;span class="s1"&gt;'/root/.gnupg'&lt;/span&gt; created
gpg: keybox &lt;span class="s1"&gt;'/root/.gnupg/pubring.kbx'&lt;/span&gt; created
pub rsa3072 2023-08-23 &lt;span class="o"&gt;[&lt;/span&gt;SC] &lt;span class="o"&gt;[&lt;/span&gt;expires: 2025-08-22]
      4EFC 5906 96CB 15B8 7C73 A3AD 82CC 8797 C838 DCFD
uid Zabbly Kernel Builds &amp;lt;info@zabbly.com&amp;gt;
sub rsa3072 2023-08-23 &lt;span class="o"&gt;[&lt;/span&gt;E] &lt;span class="o"&gt;[&lt;/span&gt;expires: 2025-08-22]

root@k0s-incus:~# &lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/apt/keyrings/

root@k0s-incus:~# curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://pkgs.zabbly.com/key.asc &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/zabbly.asc

root@k0s-incus:~# sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release &amp;amp;&amp;amp; echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc

EOF'&lt;/span&gt;

root@k0s-incus:~# apt-get update

Hit:1 http://security.ubuntu.com/ubuntu noble-security InRelease
Hit:2 http://mirrors.digitalocean.com/ubuntu noble InRelease
Hit:3 https://repos-droplet.digitalocean.com/apt/droplet-agent main InRelease
Hit:4 http://mirrors.digitalocean.com/ubuntu noble-updates InRelease
Hit:5 http://mirrors.digitalocean.com/ubuntu noble-backports InRelease
Get:6 https://pkgs.zabbly.com/incus/stable noble InRelease &lt;span class="o"&gt;[&lt;/span&gt;7358 B]   
Get:7 https://pkgs.zabbly.com/incus/stable noble/main amd64 Packages &lt;span class="o"&gt;[&lt;/span&gt;3542 B]
Fetched 10.9 kB &lt;span class="k"&gt;in &lt;/span&gt;1s &lt;span class="o"&gt;(&lt;/span&gt;13.3 kB/s&lt;span class="o"&gt;)&lt;/span&gt;   
Reading package lists... Done

root@k0s-incus:~# apt-get &lt;span class="nb"&gt;install &lt;/span&gt;incus incus-client incus-ui-canonical &lt;span class="nt"&gt;-y&lt;/span&gt;
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  attr dconf-gsettings-backend dconf-service dns-root-data dnsmasq-base fontconfig genisoimage glib-networking glib-networking-common glib-networking-services gsettings-desktop-schemas
  gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-x incus-base iw libaa1 libasyncns0 libavc1394-0 libboost-iostreams1.83.0 libboost-thread1.83.0 libbtrfs0t64 libcaca0
  libcairo-gobject2 libcairo2 libcdparanoia0 libdatrie1 libdaxctl1 libdconf1 libdv4t64 libflac12t64 libgdk-pixbuf-2.0-0 libgdk-pixbuf2.0-bin libgdk-pixbuf2.0-common libgraphite2-3
  libgstreamer-plugins-base1.0-0 libgstreamer-plugins-good1.0-0 libharfbuzz0b libiec61883-0 libmp3lame0 libmpg123-0t64 libndctl6 libnet1 libogg0 libopus0 liborc-0.4-0t64 libpango-1.0-0
  libpangocairo-1.0-0 libpangoft2-1.0-0 libpixman-1-0 libpmem1 libpmemobj1 libproxy1v5 libpulse0 librados2 libraw1394-11 librbd1 librdmacm1t64 libshout3 libsndfile1 libsoup-3.0-0
  libsoup-3.0-common libspeex1 libspice-server1 libtag1v5 libtag1v5-vanilla libthai-data libthai0 libtheora0 libtwolame0 libusbredirparser1t64 libv4l-0t64 libv4lconvert0t64 libvisual-0.4-0
  libvorbis0a libvorbisenc2 libvpx9 libwavpack1 libx11-xcb1 libxcb-render0 libxcb-shm0 libxdamage1 libxfixes3 libxi6 libxrender1 libxtst6 libxv1 session-migration sshfs wireless-regdb
  x11-common xdelta3

root@k0s-incus:~# incus
Description:
  Command line client &lt;span class="k"&gt;for &lt;/span&gt;Incus

  All of Incus&lt;span class="s1"&gt;'s features can be driven through the various commands below.
  For help with any of those, simply call them with --help.

  Custom commands can be defined through aliases, use "incus alias" to control those.

Usage:
  incus [command]

Available Commands:
  admin       Manage incus daemon
  cluster     Manage cluster members
  config      Manage instance and server configuration options
  console     Attach to instance consoles
  copy        Copy instances within or in between servers
  create      Create instances from images
  delete      Delete instances
  exec        Execute commands in instances
  export      Export instance backups
  file        Manage files in instances
  help        Help about any command
  image       Manage images
  import      Import instance backups
  info        Show instance or server information
  launch      Create and start instances from images
  list        List instances
  move        Move instances within or in between servers
  network     Manage and attach instances to networks
  pause       Pause instances
  profile     Manage profiles
  project     Manage projects
  publish     Publish instances as images
  rebuild     Rebuild instances
  remote      Manage the list of remote servers
  rename      Rename instances
  restart     Restart instances
  resume      Resume instances
  snapshot    Manage instance snapshots
  start       Start instances
  stop        Stop instances
  storage     Manage storage pools and volumes
  top         Display resource usage info per instance
  version     Show local and remote versions
  webui       Open the web interface

Flags:
      --all            Show less common commands
      --debug          Show all debug messages
      --force-local    Force using the local unix socket
  -h, --help           Print help
      --project        Override the source project
  -q, --quiet          Don'&lt;/span&gt;t show progress information
      &lt;span class="nt"&gt;--sub-commands&lt;/span&gt;   Use with &lt;span class="nb"&gt;help &lt;/span&gt;or &lt;span class="nt"&gt;--help&lt;/span&gt; to view sub-commands
  &lt;span class="nt"&gt;-v&lt;/span&gt;, &lt;span class="nt"&gt;--verbose&lt;/span&gt;        Show all information messages
      &lt;span class="nt"&gt;--version&lt;/span&gt;        Print version number

Use &lt;span class="s2"&gt;"incus [command] --help"&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;more information about a command.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Minimal initialization of Incus:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@k0s-incus:~# incus admin init
Would you like to use clustering? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;no]: 
Do you want to configure a new storage pool? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;: 
Name of the new storage pool &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;default]: 
Name of the storage backend to use &lt;span class="o"&gt;(&lt;/span&gt;btrfs, &lt;span class="nb"&gt;dir&lt;/span&gt;, lvm&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;btrfs]: &lt;span class="nb"&gt;dir
&lt;/span&gt;Where should this storage pool store its data? &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/lib/incus/storage-pools/default]: 
Would you like to create a new &lt;span class="nb"&gt;local &lt;/span&gt;network bridge? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;: 
What should the new bridge be called? &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;incusbr0]: 
What IPv4 address should be used? &lt;span class="o"&gt;(&lt;/span&gt;CIDR subnet notation, “auto” or “none”&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;auto]: 
What IPv6 address should be used? &lt;span class="o"&gt;(&lt;/span&gt;CIDR subnet notation, “auto” or “none”&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;auto]: 
Would you like the server to be available over the network? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;no]:    
Would you like stale cached images to be updated automatically? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;: 
Would you like a YAML &lt;span class="s2"&gt;"init"&lt;/span&gt; preseed to be printed? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;no]: 

root@k0s-incus:~# incus list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

root@k0s-incus:~# incus profile list
+---------+-----------------------+---------+
| NAME    | DESCRIPTION           | USED BY |
+---------+-----------------------+---------+
| default | Default Incus profile | 0       |
+---------+-----------------------+---------+

root@k0s-incus:~# incus profile show default
config: &lt;span class="o"&gt;{}&lt;/span&gt;
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    &lt;span class="nb"&gt;type&lt;/span&gt;: nic
  root:
    path: /
    pool: default
    &lt;span class="nb"&gt;type&lt;/span&gt;: disk
name: default
used_by: &lt;span class="o"&gt;[]&lt;/span&gt;
project: default

root@k0s-incus:~# incus profile create k8s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Incus provides a control dashboard that can be started temporarily with incus webui.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://linuxcontainers.org/incus/docs/main/reference/manpages/incus/webui/" rel="noopener noreferrer"&gt;incus webui&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Starting it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@k0s-incus:~# &lt;span class="nb"&gt;nohup &lt;/span&gt;incus webui &amp;amp;
&lt;span class="o"&gt;[&lt;/span&gt;1] 4104
root@k0s-incus:~# &lt;span class="nb"&gt;nohup&lt;/span&gt;: ignoring input and appending output to &lt;span class="s1"&gt;'nohup.out'&lt;/span&gt;

root@k0s-incus:~# &lt;span class="nb"&gt;cat &lt;/span&gt;nohup.out 
Web server running at: http://127.0.0.1:34363/ui?auth_token&lt;span class="o"&gt;=&lt;/span&gt;3c5f5d4b-f9ed-4bf9-a174-d5ea2366cfbf

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using &lt;strong&gt;pinggy.io&lt;/strong&gt; to access it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pinggy.io/" rel="noopener noreferrer"&gt;Pinggy - Simple Localhost Tunnels&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiuelyjmbgcvansbv8v5h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiuelyjmbgcvansbv8v5h.png" width="800" height="382"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@k0s-incus:~# ssh &lt;span class="nt"&gt;-p&lt;/span&gt; 443 &lt;span class="nt"&gt;-R0&lt;/span&gt;:127.0.0.1:34363 a.pinggy.io

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fackh7m9fga92mclzbu87.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fackh7m9fga92mclzbu87.png" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgydlx99dflogw0d4obp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgydlx99dflogw0d4obp.png" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I reuse the same profile that LXD uses for MicroK8s:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://microk8s.io/docs/install-lxd" rel="noopener noreferrer"&gt;MicroK8s - MicroK8s in LXD | MicroK8s&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;roothttps://microk8s.io/docs/install-lxd@k0s-incus:~# wget https://raw.githubusercontent.com/ubuntu/microk8s/master/tests/lxc/microk8s.profile &lt;span class="nt"&gt;-O&lt;/span&gt; k8s.profile
&lt;span class="nt"&gt;--2025-01-14&lt;/span&gt; 20:58:42-- https://raw.githubusercontent.com/ubuntu/microk8s/master/tests/lxc/microk8s.profile
Resolving raw.githubusercontent.com &lt;span class="o"&gt;(&lt;/span&gt;raw.githubusercontent.com&lt;span class="o"&gt;)&lt;/span&gt;... 185.199.108.133, 185.199.110.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com &lt;span class="o"&gt;(&lt;/span&gt;raw.githubusercontent.com&lt;span class="o"&gt;)&lt;/span&gt;|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 816 &lt;span class="o"&gt;[&lt;/span&gt;text/plain]
Saving to: ‘k8s.profile’

k8s.profile 100%[&lt;span class="o"&gt;=====================================================================================================&amp;gt;]&lt;/span&gt; 816 &lt;span class="nt"&gt;--&lt;/span&gt;.-KB/s &lt;span class="k"&gt;in &lt;/span&gt;0s      

2025-01-14 20:58:42 &lt;span class="o"&gt;(&lt;/span&gt;33.4 MB/s&lt;span class="o"&gt;)&lt;/span&gt; - ‘k8s.profile’ saved &lt;span class="o"&gt;[&lt;/span&gt;816/816]

root@k0s-incus:~# &lt;span class="nb"&gt;cat &lt;/span&gt;k8s.profile | incus profile edit k8s
root@k0s-incus:~# &lt;span class="nb"&gt;rm &lt;/span&gt;k8s.profile

root@k0s-incus:~# incus profile show k8s
config:
  boot.autostart: &lt;span class="s2"&gt;"true"&lt;/span&gt;
  linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter
  raw.lxc: |
    lxc.apparmor.profile&lt;span class="o"&gt;=&lt;/span&gt;unconfined
    lxc.mount.auto&lt;span class="o"&gt;=&lt;/span&gt;proc:rw sys:rw cgroup:rw
    lxc.cgroup.devices.allow&lt;span class="o"&gt;=&lt;/span&gt;a
    lxc.cap.drop&lt;span class="o"&gt;=&lt;/span&gt;
  security.nesting: &lt;span class="s2"&gt;"true"&lt;/span&gt;
  security.privileged: &lt;span class="s2"&gt;"true"&lt;/span&gt;
description: &lt;span class="s2"&gt;""&lt;/span&gt;
devices:
  aadisable:
    path: /sys/module/nf_conntrack/parameters/hashsize
    &lt;span class="nb"&gt;source&lt;/span&gt;: /sys/module/nf_conntrack/parameters/hashsize
    &lt;span class="nb"&gt;type&lt;/span&gt;: disk
  aadisable2:
    path: /dev/kmsg
    &lt;span class="nb"&gt;source&lt;/span&gt;: /dev/kmsg
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-char
  aadisable3:
    path: /sys/fs/bpf
    &lt;span class="nb"&gt;source&lt;/span&gt;: /sys/fs/bpf
    &lt;span class="nb"&gt;type&lt;/span&gt;: disk
  aadisable4:
    path: /proc/sys/net/netfilter/nf_conntrack_max
    &lt;span class="nb"&gt;source&lt;/span&gt;: /proc/sys/net/netfilter/nf_conntrack_max
    &lt;span class="nb"&gt;type&lt;/span&gt;: disk
name: k8s
used_by: &lt;span class="o"&gt;[]&lt;/span&gt;
project: default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fok2y4u94bb6i5io6zp8d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fok2y4u94bb6i5io6zp8d.png" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxavr929bwhv7rncqjocj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxavr929bwhv7rncqjocj.png" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since Incus can make use of cloud-init, I create a new profile for that purpose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;root@k0s-incus:~# incus profile show cloud&lt;/span&gt;
&lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;cloud-init.user-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;#cloud-config&lt;/span&gt;
    &lt;span class="s"&gt;package_update: true&lt;/span&gt;
    &lt;span class="s"&gt;package_upgrade: true&lt;/span&gt;
    &lt;span class="s"&gt;package_reboot_if_required: true&lt;/span&gt;
    &lt;span class="s"&gt;packages:&lt;/span&gt;
      &lt;span class="s"&gt;- vim&lt;/span&gt;
      &lt;span class="s"&gt;- wget&lt;/span&gt;
      &lt;span class="s"&gt;- git&lt;/span&gt;
      &lt;span class="s"&gt;- curl&lt;/span&gt;
      &lt;span class="s"&gt;- htop&lt;/span&gt;
      &lt;span class="s"&gt;- openssh-server&lt;/span&gt;
    &lt;span class="s"&gt;bootcmd:&lt;/span&gt;
      &lt;span class="s"&gt;- systemctl enable ssh&lt;/span&gt;
      &lt;span class="s"&gt;- systemctl start ssh&lt;/span&gt;
    &lt;span class="s"&gt;ssh_authorized_keys:&lt;/span&gt;
      &lt;span class="s"&gt;- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCpbsaaVUMa2TM9q8VkeBmbKvJpbreXTcqI5F5N3riGsoZ7Z/IIN7eR6J47UP2bj3IBTdgHmij1uOexm60QBO2PY4abIhsN+xnVS4a0LSyI8v6nYECWbEehL/gFn6uDmSLA4m0hZCF5BSpLxQYzKS28dHIdXsLC4CDd67nAXIhOiVpM0q/AUCuSy+mA0VwFa/JAkFCk8TpQBorgwJIq635imrgxYIpEUA2wHXOhw23mO3zTUlay13LSlA2a1xyTkP8hSDWdRYVxr2DEB/MtmTX2BdWlA5rDRmzXE7R2/csE245WAxG+XfSu4zNqhHzm8Df3zmZn3/UyKLcx4eJF//mVZyrM7RQHRteA/im8I4IavrReGyCUKY+OsSfygYVFyO87rYQ+IOauOnB4LxBohBjSBN3Skk4X7krYFIi8D9R1lmL+VvBfpvy0YMurOahY1VJFzD0dUeK2bDUdeWzfFkcX039d9/RRXRxieNpxwp1BLPi5/DXG8FihzgwVTf6h60J9/fkYzY+BO8CKG2kYTUsy1ykuXLzLY5sTCREiEoEKcJ9IGz8OimZ1AmkgJJCrQnI6mT/KiNDU6YCc75ONKTKX5HKVPhZWT255Aw4f5LBbBrj06cJX3GuunV0I30+BYyHwLbPBoqgd4GUk3YJlr8wS3qre/YUSc2iKNDTOzFCC8Q== root@k0s-incus&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;incus with cloud-init&lt;/span&gt;
&lt;span class="na"&gt;devices&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cloud&lt;/span&gt;
&lt;span class="na"&gt;used_by&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
&lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwurwmdyk16jinnag560n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwurwmdyk16jinnag560n.png" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am now ready to create the three containers that will serve as the basis for a Kubernetes cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@k0s-incus:~# &lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;1..3&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;incus launch &lt;span class="nt"&gt;-p&lt;/span&gt; default &lt;span class="nt"&gt;-p&lt;/span&gt; k8s &lt;span class="nt"&gt;-p&lt;/span&gt; cloud images:ubuntu/24.04/cloud k0s-&lt;span class="nv"&gt;$i&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done
&lt;/span&gt;Launching k0s-1
Launching k0s-2                                    
Launching k0s-3

root@k0s-incus:~# incus list                       
+-------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| NAME  | STATE   | IPV4                  | IPV6                                          | TYPE      | SNAPSHOTS |
+-------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| k0s-1 | RUNNING | 10.224.160.99 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt;  | fd42:4641:b619:c782:216:3eff:fea4:53d3 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt; | CONTAINER | 0         |
+-------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| k0s-2 | RUNNING | 10.224.160.54 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt;  | fd42:4641:b619:c782:216:3eff:feee:7af8 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt; | CONTAINER | 0         |
+-------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| k0s-3 | RUNNING | 10.224.160.215 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt; | fd42:4641:b619:c782:216:3eff:fef3:709b &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt; | CONTAINER | 0         |
+-------+---------+-----------------------+-----------------------------------------------+-----------+-----------+

root@k0s-incus:~# &lt;span class="nb"&gt;cat&lt;/span&gt; .ssh/config 
Host &lt;span class="k"&gt;*&lt;/span&gt;
   StrictHostKeyChecking no
   &lt;span class="nv"&gt;UserKnownHostsFile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/dev/null

root@k0s-incus:~# ssh ubuntu@10.224.160.99

Welcome to Ubuntu 24.04.1 LTS &lt;span class="o"&gt;(&lt;/span&gt;GNU/Linux 6.8.0-51-generic x86_64&lt;span class="o"&gt;)&lt;/span&gt;

 &lt;span class="k"&gt;*&lt;/span&gt; Documentation: https://help.ubuntu.com
 &lt;span class="k"&gt;*&lt;/span&gt; Management: https://landscape.canonical.com
 &lt;span class="k"&gt;*&lt;/span&gt; Support: https://ubuntu.com/pro

The programs included with the Ubuntu system are free software&lt;span class="p"&gt;;&lt;/span&gt;
the exact distribution terms &lt;span class="k"&gt;for &lt;/span&gt;each program are described &lt;span class="k"&gt;in &lt;/span&gt;the
individual files &lt;span class="k"&gt;in&lt;/span&gt; /usr/share/doc/&lt;span class="k"&gt;*&lt;/span&gt;/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a &lt;span class="nb"&gt;command &lt;/span&gt;as administrator &lt;span class="o"&gt;(&lt;/span&gt;user &lt;span class="s2"&gt;"root"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;, use &lt;span class="s2"&gt;"sudo &amp;lt;command&amp;gt;"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
See &lt;span class="s2"&gt;"man sudo_root"&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;details.

ubuntu@k0s-1:~&lt;span class="err"&gt;$&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fetching k0sctl to create a Kubernetes cluster with k0s:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.k0sproject.io/head/k0sctl-install/" rel="noopener noreferrer"&gt;Using k0sctl - Documentation&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
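
&lt;p&gt;Before applying the commands below, it helps to know the shape of the file that k0sctl init produces. A minimal sketch, assuming SSH access as the ubuntu user with the key declared in the cloud-init profile; the addresses are those of the three containers created above, and the k0s version is illustrative:&lt;/p&gt;

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
    # One controller and two workers, matching the three containers above.
    - ssh:
        address: 10.224.160.99
        user: ubuntu
        keyPath: ~/.ssh/id_rsa
      role: controller
    - ssh:
        address: 10.224.160.54
        user: ubuntu
        keyPath: ~/.ssh/id_rsa
      role: worker
    - ssh:
        address: 10.224.160.215
        user: ubuntu
        keyPath: ~/.ssh/id_rsa
      role: worker
  k0s:
    version: v1.31.2+k0s.0  # illustrative; k0sctl defaults to the latest stable release
```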

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@k0s-incus:~# wget &lt;span class="nt"&gt;-c&lt;/span&gt; https://github.com/k0sproject/k0sctl/releases/download/v0.21.0/k0sctl-linux-amd64 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chmod&lt;/span&gt; +x k0sctl-linux-amd64 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;mv &lt;/span&gt;k0sctl-linux-amd64 /usr/local/bin/k0sctl

Saving to: ‘k0sctl-linux-amd64’

k0sctl-linux-amd64 100%[&lt;span class="o"&gt;=====================================================================================================&amp;gt;]&lt;/span&gt; 18.21M &lt;span class="nt"&gt;--&lt;/span&gt;.-KB/s &lt;span class="k"&gt;in &lt;/span&gt;0.1s    

2025-01-14 21:22:23 &lt;span class="o"&gt;(&lt;/span&gt;122 MB/s&lt;span class="o"&gt;)&lt;/span&gt; - ‘k0sctl-linux-amd64’ saved &lt;span class="o"&gt;[&lt;/span&gt;19091608/19091608]

root@k0s-incus:~# k0sctl 
NAME:
   k0sctl - k0s cluster management tool

USAGE:
   k0sctl &lt;span class="o"&gt;[&lt;/span&gt;global options] &lt;span class="nb"&gt;command&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;command &lt;/span&gt;options]

COMMANDS:
   version Output k0sctl version
   apply Apply a k0sctl configuration
   kubeconfig Output the admin kubeconfig of the cluster
   init Create a configuration template
   reset Remove traces of k0s from all of the hosts
   backup Take backup of existing clusters state
   config Configuration related sub-commands
   completion  
   &lt;span class="nb"&gt;help&lt;/span&gt;, h Shows a list of commands or &lt;span class="nb"&gt;help &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;one &lt;span class="nb"&gt;command

&lt;/span&gt;GLOBAL OPTIONS:
   &lt;span class="nt"&gt;--debug&lt;/span&gt;, &lt;span class="nt"&gt;-d&lt;/span&gt; Enable debug logging &lt;span class="o"&gt;(&lt;/span&gt;default: &lt;span class="nb"&gt;false&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;$DEBUG&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
   &lt;span class="nt"&gt;--trace&lt;/span&gt; Enable trace logging &lt;span class="o"&gt;(&lt;/span&gt;default: &lt;span class="nb"&gt;false&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;$TRACE&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
   &lt;span class="nt"&gt;--no-redact&lt;/span&gt; Do not hide sensitive information &lt;span class="k"&gt;in &lt;/span&gt;the output &lt;span class="o"&gt;(&lt;/span&gt;default: &lt;span class="nb"&gt;false&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
   &lt;span class="nt"&gt;--help&lt;/span&gt;, &lt;span class="nt"&gt;-h&lt;/span&gt; show &lt;span class="nb"&gt;help

&lt;/span&gt;root@k0s-incus:~# k0sctl init &lt;span class="nt"&gt;--k0s&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; k0sctl.yaml

root@k0s-incus:~# &lt;span class="nb"&gt;cat &lt;/span&gt;k0sctl.yaml 
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
  user: admin
spec:
  hosts:
  - ssh:
      address: 10.224.160.99 
      user: ubuntu
      port: 22
      keyPath: /root/.ssh/id_rsa
    role: controller
  - ssh:
      address: 10.224.160.54
      user: ubuntu
      port: 22
      keyPath: /root/.ssh/id_rsa
    role: worker
  - ssh:
      address: 10.224.160.215
      user: ubuntu
      port: 22
      keyPath: /root/.ssh/id_rsa
    role: worker
  k0s:
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: Cluster
      metadata:
        name: k0s
      spec:
        api:
          k0sApiPort: 9443
          port: 6443
        installConfig:
          &lt;span class="nb"&gt;users&lt;/span&gt;:
            etcdUser: etcd
            kineUser: kube-apiserver
            konnectivityUser: konnectivity-server
            kubeAPIserverUser: kube-apiserver
            kubeSchedulerUser: kube-scheduler
        konnectivity:
          adminPort: 8133
          agentPort: 8132
        network:
          kubeProxy:
            disabled: &lt;span class="nb"&gt;false
            &lt;/span&gt;mode: iptables
          kuberouter:
            autoMTU: &lt;span class="nb"&gt;true
            &lt;/span&gt;mtu: 0
            peerRouterASNs: &lt;span class="s2"&gt;""&lt;/span&gt;
            peerRouterIPs: &lt;span class="s2"&gt;""&lt;/span&gt;
          podCIDR: 10.244.0.0/16
          provider: kuberouter
          serviceCIDR: 10.96.0.0/12
        podSecurityPolicy:
          defaultPolicy: 00-k0s-privileged
        storage:
          &lt;span class="nb"&gt;type&lt;/span&gt;: etcd
        telemetry:
          enabled: &lt;span class="nb"&gt;true&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Launching the cluster creation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
root@k0s-incus:~# k0sctl apply &lt;span class="nt"&gt;--config&lt;/span&gt; k0sctl.yaml 

⠀⣿⣿⡇⠀⠀⢀⣴⣾⣿⠟⠁⢸⣿⣿⣿⣿⣿⣿⣿⡿⠛⠁⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀█████████ █████████ ███
⠀⣿⣿⡇⣠⣶⣿⡿⠋⠀⠀⠀⢸⣿⡇⠀⠀⠀⣠⠀⠀⢀⣠⡆⢸⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀███ ███ ███
⠀⣿⣿⣿⣿⣟⠋⠀⠀⠀⠀⠀⢸⣿⡇⠀⢰⣾⣿⠀⠀⣿⣿⡇⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀███ ███ ███
⠀⣿⣿⡏⠻⣿⣷⣤⡀⠀⠀⠀⠸⠛⠁⠀⠸⠋⠁⠀⠀⣿⣿⡇⠈⠉⠉⠉⠉⠉⠉⠉⠉⢹⣿⣿⠀███ ███ ███
⠀⣿⣿⡇⠀⠀⠙⢿⣿⣦⣀⠀⠀⠀⣠⣶⣶⣶⣶⣶⣶⣿⣿⡇⢰⣶⣶⣶⣶⣶⣶⣶⣶⣾⣿⣿⠀█████████ ███ ██████████
k0sctl v0.21.0 Copyright 2023, k0sctl authors.
By continuing to use k0sctl you agree to these terms:
https://k0sproject.io/licenses/eula
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Set k0s version  
INFO Looking up latest stable k0s version         
INFO Using k0s version v1.31.3+k0s.0              
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Connect to hosts 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.215:22: connected           
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.99:22: connected            
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.54:22: connected            
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Detect host operating systems 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.215:22: is running Ubuntu 24.04.1 LTS 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.99:22: is running Ubuntu 24.04.1 LTS 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.54:22: is running Ubuntu 24.04.1 LTS 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Acquire exclusive host lock 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Prepare hosts    
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Gather host facts 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.215:22: using k0s-3 as &lt;span class="nb"&gt;hostname 
&lt;/span&gt;INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.54:22: using k0s-2 as &lt;span class="nb"&gt;hostname 
&lt;/span&gt;INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.99:22: using k0s-1 as &lt;span class="nb"&gt;hostname 
&lt;/span&gt;INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.215:22: discovered eth0 as private interface 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.54:22: discovered eth0 as private interface 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.99:22: discovered eth0 as private interface 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Validate hosts   
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Validate facts   
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Download k0s on hosts 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.215:22: downloading k0s v1.31.3+k0s.0 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.54:22: downloading k0s v1.31.3+k0s.0 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.99:22: downloading k0s v1.31.3+k0s.0 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Install k0s binaries on hosts 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.99:22: validating configuration 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Configure k0s    
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.99:22: installing new configuration 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Initialize the k0s cluster 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.99:22: installing k0s controller 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.99:22: waiting &lt;span class="k"&gt;for &lt;/span&gt;the k0s service to start 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.99:22: &lt;span class="nb"&gt;wait &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;kubernetes to reach ready state 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Install workers  
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.99:22: generating a &lt;span class="nb"&gt;join &lt;/span&gt;token &lt;span class="k"&gt;for &lt;/span&gt;worker 1 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.99:22: generating a &lt;span class="nb"&gt;join &lt;/span&gt;token &lt;span class="k"&gt;for &lt;/span&gt;worker 2 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.215:22: validating api connection to https://10.224.160.99:6443 using &lt;span class="nb"&gt;join &lt;/span&gt;token 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.54:22: validating api connection to https://10.224.160.99:6443 using &lt;span class="nb"&gt;join &lt;/span&gt;token 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.215:22: writing &lt;span class="nb"&gt;join &lt;/span&gt;token to /etc/k0s/k0stoken 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.54:22: writing &lt;span class="nb"&gt;join &lt;/span&gt;token to /etc/k0s/k0stoken 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.54:22: installing k0s worker 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.215:22: installing k0s worker 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.215:22: starting service    
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.215:22: waiting &lt;span class="k"&gt;for &lt;/span&gt;node to become ready 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.54:22: starting service     
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 10.224.160.54:22: waiting &lt;span class="k"&gt;for &lt;/span&gt;node to become ready 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Release exclusive host lock 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Disconnect from hosts 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Finished &lt;span class="k"&gt;in &lt;/span&gt;42s                 
INFO k0s cluster version v1.31.3+k0s.0 is now installed 
INFO Tip: To access the cluster you can now fetch the admin kubeconfig using: 
INFO k0sctl kubeconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The cluster is up:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxbhz6ym44x56e7i36os.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxbhz6ym44x56e7i36os.png" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzwauyg2lwj6fu9qpiqc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzwauyg2lwj6fu9qpiqc.png" width="800" height="425"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@k0s-incus:~# curl &lt;span class="nt"&gt;-LO&lt;/span&gt; https://dl.k8s.io/release/v1.31.3/bin/linux/amd64/kubectl &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chmod&lt;/span&gt; +x kubectl &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;mv &lt;/span&gt;kubectl /usr/local/bin/
  % Total % Received % Xferd Average Speed Time Time Time Current
                                 Dload Upload Total Spent Left Speed
100 138 100 138 0 0 923 0 &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- 926
100 53.7M 100 53.7M 0 0 476k 0 0:01:55 0:01:55 &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- 1023k

root@k0s-incus:~# &lt;span class="nb"&gt;mkdir&lt;/span&gt; .kube
root@k0s-incus:~# k0sctl kubeconfig &lt;span class="nt"&gt;--config&lt;/span&gt; k0sctl.yaml &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; .kube/config

root@k0s-incus:~# kubectl cluster-info
Kubernetes control plane is running at https://10.224.160.99:6443
CoreDNS is running at https://10.224.160.99:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use &lt;span class="s1"&gt;'kubectl cluster-info dump'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;

root@k0s-incus:~# kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k0s-2 Ready &amp;lt;none&amp;gt; 5m1s v1.31.3+k0s 10.224.160.54 &amp;lt;none&amp;gt; Ubuntu 24.04.1 LTS 6.8.0-51-generic containerd://1.7.24
k0s-3 Ready &amp;lt;none&amp;gt; 5m1s v1.31.3+k0s 10.224.160.215 &amp;lt;none&amp;gt; Ubuntu 24.04.1 LTS 6.8.0-51-generic containerd://1.7.24

root@k0s-incus:~# kubectl get po,svc &lt;span class="nt"&gt;-A&lt;/span&gt;
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-645c5d6f5b-kgnsf 1/1 Running 0 5m2s
kube-system pod/coredns-645c5d6f5b-n2rbk 1/1 Running 0 5m2s
kube-system pod/konnectivity-agent-2dg8l 1/1 Running 0 5m4s
kube-system pod/konnectivity-agent-5l5dl 1/1 Running 0 5m4s
kube-system pod/kube-proxy-cx47n 1/1 Running 0 5m7s
kube-system pod/kube-proxy-sp5fd 1/1 Running 0 5m7s
kube-system pod/kube-router-6l4qv 1/1 Running 0 5m7s
kube-system pod/kube-router-b9t89 1/1 Running 0 5m7s
kube-system pod/metrics-server-78c4ccbc7f-jxpzz 1/1 Running 0 5m1s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
default service/kubernetes ClusterIP 10.96.0.1 &amp;lt;none&amp;gt; 443/TCP 5m17s
kube-system service/kube-dns ClusterIP 10.96.0.10 &amp;lt;none&amp;gt; 53/UDP,53/TCP,9153/TCP 5m7s
kube-system service/metrics-server ClusterIP 10.109.44.51 &amp;lt;none&amp;gt; 443/TCP 5m1s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;HolmesGPT can then be installed via pipx:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/robusta-dev/holmesgpt?tab=readme-ov-file#installation" rel="noopener noreferrer"&gt;GitHub - robusta-dev/holmesgpt: On-Call Assistant for Prometheus Alerts - Get a head start on fixing alerts with AI investigation&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@k0s-incus:~# apt &lt;span class="nb"&gt;install &lt;/span&gt;pipx &lt;span class="nt"&gt;-y&lt;/span&gt;

root@k0s-incus:~# pipx ensurepath

Success! Added /root/.local/bin to the PATH environment variable.

Consider adding shell completions &lt;span class="k"&gt;for &lt;/span&gt;pipx. Run &lt;span class="s1"&gt;'pipx completions'&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;instructions.

You will need to open a new terminal or re-login &lt;span class="k"&gt;for &lt;/span&gt;the PATH changes to take effect.

Otherwise pipx is ready to go! ✨ 🌟 ✨

root@k0s-incus:~# pipx &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="s2"&gt;"https://github.com/robusta-dev/holmesgpt/archive/refs/heads/master.zip"&lt;/span&gt;
  installed package holmesgpt 0.1.0, installed using Python 3.12.3
  These apps are now globally available
    - holmes
&lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt; ✨ 🌟 ✨
root@k0s-incus:~# holmes version
/root/.local/share/pipx/venvs/holmesgpt/lib/python3.12/site-packages/pydantic/_internal/_config.py:345: UserWarning: Valid config keys have changed &lt;span class="k"&gt;in &lt;/span&gt;V2:
&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="s1"&gt;'fields'&lt;/span&gt; has been removed
  warnings.warn&lt;span class="o"&gt;(&lt;/span&gt;message, UserWarning&lt;span class="o"&gt;)&lt;/span&gt;
HEAD -&amp;gt; master-bfafbde3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
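&lt;p&gt;Before wiring it into K9s, the CLI can be exercised on its own. Below is a minimal sketch of pointing HolmesGPT at a self-hosted Ollama model; the environment variable and flag names follow the HolmesGPT/LiteLLM documentation and should be treated as assumptions to verify against your installed version:&lt;/p&gt;

```shell
# Hedged sketch: wiring HolmesGPT to an Ollama backend.
# OLLAMA_API_BASE and the --model flag are assumptions taken from the
# HolmesGPT/LiteLLM docs; confirm with `holmes ask --help` on your install.
OLLAMA_API_BASE="http://127.0.0.1:11434"          # illustrative endpoint
MODEL="ollama_chat/llama3.2:3b-instruct-q4_K_S"   # LiteLLM-style model name

# An investigation would then be launched as:
#   holmes ask "what pods are unhealthy and why?" --model="$MODEL"
echo "model=$MODEL base=$OLLAMA_API_BASE"
```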



&lt;p&gt;To go with it, I grab K9s, which provides a terminal UI for interacting with your Kubernetes clusters. The aim of the project is to make it easier to navigate, observe, and manage your applications in the wild. K9s continually watches Kubernetes for changes and offers commands to interact with the resources it observes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/derailed/k9s" rel="noopener noreferrer"&gt;GitHub - derailed/k9s: 🐶 Kubernetes CLI To Manage Your Clusters In Style!&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@k0s-incus:~# wget &lt;span class="nt"&gt;-c&lt;/span&gt; https://github.com/derailed/k9s/releases/download/v0.32.7/k9s_linux_amd64.deb
HTTP request sent, awaiting response... 200 OK
Length: 31832132 &lt;span class="o"&gt;(&lt;/span&gt;30M&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;application/octet-stream]
Saving to: ‘k9s_linux_amd64.deb’

k9s_linux_amd64.deb 100%[&lt;span class="o"&gt;=====================================================================================================&amp;gt;]&lt;/span&gt; 30.36M &lt;span class="nt"&gt;--&lt;/span&gt;.-KB/s &lt;span class="k"&gt;in &lt;/span&gt;0.1s    

2025-01-14 21:40:07 &lt;span class="o"&gt;(&lt;/span&gt;291 MB/s&lt;span class="o"&gt;)&lt;/span&gt; - ‘k9s_linux_amd64.deb’ saved &lt;span class="o"&gt;[&lt;/span&gt;31832132/31832132]

root@k0s-incus:~# apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; ./k9s_linux_amd64.deb 
root@k0s-incus:~# k9s &lt;span class="nt"&gt;--help&lt;/span&gt;
K9s is a CLI to view and manage your Kubernetes clusters.

Usage:
  k9s &lt;span class="o"&gt;[&lt;/span&gt;flags]
  k9s &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;command&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;

Available Commands:
  completion Generate the autocompletion script &lt;span class="k"&gt;for &lt;/span&gt;the specified shell
  &lt;span class="nb"&gt;help &lt;/span&gt;Help about any &lt;span class="nb"&gt;command
  &lt;/span&gt;info List K9s configurations info
  version Print version/build info

Flags:
  &lt;span class="nt"&gt;-A&lt;/span&gt;, &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt; Launch K9s &lt;span class="k"&gt;in &lt;/span&gt;all namespaces
      &lt;span class="nt"&gt;--as&lt;/span&gt; string Username to impersonate &lt;span class="k"&gt;for &lt;/span&gt;the operation
      &lt;span class="nt"&gt;--as-group&lt;/span&gt; stringArray Group to impersonate &lt;span class="k"&gt;for &lt;/span&gt;the operation
      &lt;span class="nt"&gt;--certificate-authority&lt;/span&gt; string Path to a cert file &lt;span class="k"&gt;for &lt;/span&gt;the certificate authority
      &lt;span class="nt"&gt;--client-certificate&lt;/span&gt; string Path to a client certificate file &lt;span class="k"&gt;for &lt;/span&gt;TLS
      &lt;span class="nt"&gt;--client-key&lt;/span&gt; string Path to a client key file &lt;span class="k"&gt;for &lt;/span&gt;TLS
      &lt;span class="nt"&gt;--cluster&lt;/span&gt; string The name of the kubeconfig cluster to use
  &lt;span class="nt"&gt;-c&lt;/span&gt;, &lt;span class="nt"&gt;--command&lt;/span&gt; string Overrides the default resource to load when the application launches
      &lt;span class="nt"&gt;--context&lt;/span&gt; string The name of the kubeconfig context to use
      &lt;span class="nt"&gt;--crumbsless&lt;/span&gt; Turn K9s crumbs off
      &lt;span class="nt"&gt;--headless&lt;/span&gt; Turn K9s header off
  &lt;span class="nt"&gt;-h&lt;/span&gt;, &lt;span class="nt"&gt;--help&lt;/span&gt; &lt;span class="nb"&gt;help &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;k9s
      &lt;span class="nt"&gt;--insecure-skip-tls-verify&lt;/span&gt; If &lt;span class="nb"&gt;true&lt;/span&gt;, the server&lt;span class="s1"&gt;'s caCertFile will not be checked for validity
      --kubeconfig string Path to the kubeconfig file to use for CLI requests
      --logFile string Specify the log file (default "/root/.local/state/k9s/k9s.log")
  -l, --logLevel string Specify a log level (error, warn, info, debug, trace) (default "info")
      --logoless Turn K9s logo off
  -n, --namespace string If present, the namespace scope for this CLI request
      --readonly Sets readOnly mode by overriding readOnly configuration setting
  -r, --refresh int Specify the default refresh rate as an integer (sec) (default 2)
      --request-timeout string The length of time to wait before giving up on a single server request
      --screen-dump-dir string Sets a path to a dir for a screen dumps
      --token string Bearer token for authentication to the API server
      --user string The name of the kubeconfig user to use
      --write Sets write mode by overriding the readOnly configuration setting

Use "k9s [command] --help" for more information about a command.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwf222lqzp6o017dzbu2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwf222lqzp6o017dzbu2.png" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ollama, an alternative to ChatGPT, can be deployed to provide natural language processing capabilities directly in your environment, without relying on external cloud services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;By integrating Ollama with your troubleshooting tools, you can generate answers and remediation hints based on the analysis of logs and data from your Kubernetes cluster.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To run it, I turn to RunPod, a platform for executing natural language processing and other AI workloads. RunPod lets you create custom pod environments to run language models such as Ollama, or other AI applications:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.runpod.io/" rel="noopener noreferrer"&gt;RunPod - The Cloud Built for AI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Creating a GPU Pod, which lets me run Ollama…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.runpod.io/tutorials/pods/run-ollama" rel="noopener noreferrer"&gt;Set up Ollama on your GPU Pod | RunPod Documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F923x3tiqrc6pzf4euflj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F923x3tiqrc6pzf4euflj.png" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xkprezlq7a0f1ef0ea5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xkprezlq7a0f1ef0ea5.png" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I can connect to it over SSH:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Welcome to Ubuntu 22.04.3 LTS &lt;span class="o"&gt;(&lt;/span&gt;GNU/Linux 6.5.0-44-generic x86_64&lt;span class="o"&gt;)&lt;/span&gt;

 &lt;span class="k"&gt;*&lt;/span&gt; Documentation: https://help.ubuntu.com
 &lt;span class="k"&gt;*&lt;/span&gt; Management: https://landscape.canonical.com
 &lt;span class="k"&gt;*&lt;/span&gt; Support: https://ubuntu.com/advantage

This system has been minimized by removing packages and content that are
not required on a system that &lt;span class="nb"&gt;users &lt;/span&gt;&lt;span class="k"&gt;do &lt;/span&gt;not log into.

To restore this content, you can run the &lt;span class="s1"&gt;'unminimize'&lt;/span&gt; command.

The programs included with the Ubuntu system are free software&lt;span class="p"&gt;;&lt;/span&gt;
the exact distribution terms &lt;span class="k"&gt;for &lt;/span&gt;each program are described &lt;span class="k"&gt;in &lt;/span&gt;the
individual files &lt;span class="k"&gt;in&lt;/span&gt; /usr/share/doc/&lt;span class="k"&gt;*&lt;/span&gt;/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

 ____________ _
&lt;span class="o"&gt;(&lt;/span&gt;_____&lt;span class="se"&gt;\ &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;_____ &lt;span class="se"&gt;\ &lt;/span&gt;| |
 _____&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;)&lt;/span&gt; _ _____ _____&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;)&lt;/span&gt;___ __| |
| __/ | | | &lt;span class="o"&gt;||&lt;/span&gt; _ &lt;span class="se"&gt;\ &lt;/span&gt;|____ // _ &lt;span class="se"&gt;\ &lt;/span&gt;/ _ |
| | &lt;span class="se"&gt;\ \ &lt;/span&gt;| |_| &lt;span class="o"&gt;||&lt;/span&gt; | | &lt;span class="o"&gt;||&lt;/span&gt; | | |_| |&lt;span class="o"&gt;(&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;_| |
|_| |_|| ____/ |_| |_||_| &lt;span class="se"&gt;\_&lt;/span&gt;__ / &lt;span class="se"&gt;\ &lt;/span&gt;____ |

For detailed documentation and guides, please visit:
https://docs.runpod.io/ and https://blog.runpod.io/

root@5ed8df208cf4:~# nvidia-smi
Tue Jan 14 22:03:15 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.90.07 Driver Version: 550.90.07 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|&lt;span class="o"&gt;=========================================&lt;/span&gt;+&lt;span class="o"&gt;========================&lt;/span&gt;+&lt;span class="o"&gt;======================&lt;/span&gt;|
| 0 NVIDIA GeForce RTX 4070 Ti On | 00000000:81:00.0 Off | N/A |
| 0% 28C P8 11W / 285W | 2MiB / 12282MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|&lt;span class="o"&gt;=========================================================================================&lt;/span&gt;|
| No running processes found |
+-----------------------------------------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running Ollama:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@5ed8df208cf4:~# apt update 2&amp;gt; /dev/null &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-qq&lt;/span&gt; lshw &lt;span class="nt"&gt;-y&lt;/span&gt; 2&amp;gt; /dev/null

root@5ed8df208cf4:~# &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OLLAMA_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.0.0.0:11434
root@5ed8df208cf4:~# &lt;span class="o"&gt;(&lt;/span&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://ollama.com/install.sh | sh &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; ollama serve &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; ollama.log 2&amp;gt;&amp;amp;1&lt;span class="o"&gt;)&lt;/span&gt; &amp;amp;
&lt;span class="o"&gt;[&lt;/span&gt;1] 950
root@5ed8df208cf4:~# &lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; Installing ollama to /usr/local
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; Downloading Linux amd64 bundle
&lt;span class="c"&gt;######################################################################## 100.0%&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; Creating ollama user...
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; Adding ollama user to video group...
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; Adding current user to ollama group...
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; Creating ollama systemd service...
WARNING: systemd is not running
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; NVIDIA GPU installed.
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; The Ollama API is now available at 127.0.0.1:11434.
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; Install complete. Run &lt;span class="s2"&gt;"ollama"&lt;/span&gt; from the &lt;span class="nb"&gt;command &lt;/span&gt;line.

root@5ed8df208cf4:~# netstat &lt;span class="nt"&gt;-tunlp&lt;/span&gt;
Active Internet connections &lt;span class="o"&gt;(&lt;/span&gt;only servers&lt;span class="o"&gt;)&lt;/span&gt;
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name    
tcp 0 0 0.0.0.0:7861 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 52/nginx: master &lt;span class="nb"&gt;pr 
&lt;/span&gt;tcp 0 0 0.0.0.0:8081 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 52/nginx: master &lt;span class="nb"&gt;pr 
&lt;/span&gt;tcp 0 0 0.0.0.0:8001 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 52/nginx: master &lt;span class="nb"&gt;pr 
&lt;/span&gt;tcp 0 0 0.0.0.0:22 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 70/sshd: /usr/sbin/ 
tcp 0 0 0.0.0.0:3001 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 52/nginx: master &lt;span class="nb"&gt;pr 
&lt;/span&gt;tcp 0 0 0.0.0.0:9091 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 52/nginx: master &lt;span class="nb"&gt;pr 
&lt;/span&gt;tcp 0 0 127.0.0.11:39145 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN -                   
tcp6 0 0 :::22 :::&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 70/sshd: /usr/sbin/ 
tcp6 0 0 :::11434 :::&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 1006/ollama         
udp 0 0 127.0.0.11:33663 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pulling an LLM, Llama 3.2:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ollama.com/library/llama3.2" rel="noopener noreferrer"&gt;llama3.2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/" rel="noopener noreferrer"&gt;Llama 3.2: Revolutionizing edge AI and vision with open, customizable models&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@5ed8df208cf4:~# ollama pull llama3.2:3b-instruct-q4_K_S
pulling manifest 
pulling d5e517daeee4... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.9 GB                         
pulling 966de95ca8a6... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.4 KB                         
pulling fcc5a6bec9da... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 7.7 KB                         
pulling a70ff7e570d9... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 6.0 KB                         
pulling 56bb8bd477a5... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 96 B                         
pulling 9c65e8607c0c... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 561 B                         
verifying sha256 digest 
writing manifest 
success 

root@5ed8df208cf4:~# ollama list       
NAME ID SIZE MODIFIED       
llama3.2:3b-instruct-q4_K_S 80f2089878c9 1.9 GB 31 seconds ago
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Ollama endpoint is publicly reachable through the proxy provided by RunPod:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5sm3yhsnur0o6g0756ul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5sm3yhsnur0o6g0756ul.png" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Configuring K9s to integrate this endpoint and HolmesGPT as a plug-in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;root@k0s-incus:~# cat ~/.config/k9s/plugins.yaml&lt;/span&gt;
&lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;holmesgpt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;shortCut&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Shift-H&lt;/span&gt;
    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ask HolmesGPT&lt;/span&gt;
    &lt;span class="na"&gt;scopes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bash&lt;/span&gt;
    &lt;span class="na"&gt;background&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;confirm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;holmes ask "why is $NAME of $RESOURCE_NAME in -n $NAMESPACE not working as expected" --model="openai/llama3.2:3b-instruct-q4_K_S"&lt;/span&gt;
        &lt;span class="s"&gt;echo "Press 'q' to exit"&lt;/span&gt;
        &lt;span class="s"&gt;while : ; do&lt;/span&gt;
        &lt;span class="s"&gt;read -n 1 k &amp;lt;&amp;amp;1&lt;/span&gt;
        &lt;span class="s"&gt;if [[$k = q]] ; then&lt;/span&gt;
        &lt;span class="s"&gt;break&lt;/span&gt;
        &lt;span class="s"&gt;fi&lt;/span&gt;
        &lt;span class="s"&gt;done&lt;/span&gt;

&lt;span class="s"&gt;root@k0s-incus:~# export OPENAI_API_BASE="https://vsr6spvysc6jly-11434.proxy.runpod.net/v1"&lt;/span&gt;
&lt;span class="s"&gt;root@k0s-incus:~# export OPENAI_API_KEY=123&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
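&lt;p&gt;To make the plug-in concrete: K9s substitutes $NAME, $RESOURCE_NAME and $NAMESPACE before invoking bash, so holmes receives a fully expanded question. A minimal sketch with illustrative values (not taken from a live cluster):&lt;/p&gt;

```shell
# Sketch: the question holmes ends up receiving once K9s has expanded its
# $NAME / $RESOURCE_NAME / $NAMESPACE variables. The values are examples.
build_holmes_prompt() {
  local name="$1" resource="$2" namespace="$3"
  echo "why is ${name} of ${resource} in -n ${namespace} not working as expected"
}

build_holmes_prompt payment-processing-worker-747ccfb9db-njgmw pods default
```

&lt;p&gt;holmes then sends this prompt to the model configured through OPENAI_API_BASE, i.e. the Ollama endpoint exposed by RunPod.&lt;/p&gt;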



&lt;p&gt;Deploying a sample broken Pod in the Kubernetes cluster, using the demos provided by Robusta:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@k0s-incus:~# kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/robusta-dev/kubernetes-demos/main/crashpod/broken.yaml
deployment.apps/payment-processing-worker created

root@k0s-incus:~# kubectl get po
NAME READY STATUS RESTARTS AGE
payment-processing-worker-747ccfb9db-njgmw 0/1 CrashLoopBackOff 1 &lt;span class="o"&gt;(&lt;/span&gt;4s ago&lt;span class="o"&gt;)&lt;/span&gt; 9s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I can run the plug-in’s HolmesGPT query with the Shift-H shortcut defined above to get this first answer:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsijr2zwrjorx8lf05l3t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsijr2zwrjorx8lf05l3t.png" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foaezxruimosuykncefuw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foaezxruimosuykncefuw.png" width="800" height="417"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;The payment-processing-container container has crashed and is being restarted &lt;span class="k"&gt;for &lt;/span&gt;the 6th &lt;span class="nb"&gt;time &lt;/span&gt;due to a CrashLoopBackOff. The last state indicates that the container was terminated with an  
&lt;span class="nb"&gt;exit &lt;/span&gt;code of 0, which suggests that the &lt;span class="nb"&gt;command &lt;/span&gt;executed successfully but did not &lt;span class="nb"&gt;complete &lt;/span&gt;as expected.                                                                                       

To investigate further, you can check the logs of the payment-processing-container container to see &lt;span class="k"&gt;if &lt;/span&gt;there are any error messages or clues about what is causing the issue. You can also    
check the Kubernetes events &lt;span class="k"&gt;for &lt;/span&gt;any other errors or warnings that may be related to this issue.                                                                                               

Additionally, you can try to debug the &lt;span class="nb"&gt;command &lt;/span&gt;executed by the payment-processing-container container to see &lt;span class="k"&gt;if &lt;/span&gt;it&lt;span class="s1"&gt;'s correct and working as expected. The command is:                         


 if [[-z "${DEPLOY_ENV}"]]; then echo Environment variable DEPLOY_ENV is undefined ; else while true; do echo hello; sleep 10;done; fi

This command checks if the DEPLOY_ENV environment variable is set, and if it'&lt;/span&gt;s not, it prints a message. If it is &lt;span class="nb"&gt;set&lt;/span&gt;, it enters an infinite loop that prints &lt;span class="s2"&gt;"hello"&lt;/span&gt; every 10 seconds.       

If you&lt;span class="s1"&gt;'re running this container in a Kubernetes pod, you can try to debug the issue by checking the pod'&lt;/span&gt;s logs or using a tool like kubectl to inspect the container&lt;span class="s1"&gt;'s state and logs.       
Press '&lt;/span&gt;q&lt;span class="s1"&gt;' to exit

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
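&lt;p&gt;The root cause is easy to reproduce outside the cluster. A minimal sketch of the startup check quoted in the answer above (the container exits with code 0 when DEPLOY_ENV is unset, which is exactly why Kubernetes keeps restarting it):&lt;/p&gt;

```shell
# Sketch of the demo container's entry logic: with DEPLOY_ENV unset it prints
# a message and returns immediately, producing a CrashLoopBackOff despite the
# clean exit code.
check_deploy_env() {
  if [[ -z "${DEPLOY_ENV}" ]]; then
    echo "Environment variable DEPLOY_ENV is undefined"
  else
    echo "hello"   # the real container loops, echoing hello every 10 seconds
  fi
}

unset DEPLOY_ENV
check_deploy_env
```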



&lt;p&gt;Modifying the query produces a different answer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;root@k0s-incus:~# cat ~/.config/k9s/plugins.yaml&lt;/span&gt;
&lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;holmesgpt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;shortCut&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Shift-H&lt;/span&gt;
    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ask HolmesGPT&lt;/span&gt;
    &lt;span class="na"&gt;scopes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bash&lt;/span&gt;
    &lt;span class="na"&gt;background&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;confirm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;holmes ask "why is $NAME of $RESOURCE_NAME in -n $NAMESPACE not working and why $NAME is crashed?" --model="openai/llama3.2:3b-instruct-q4_K_S"&lt;/span&gt;
        &lt;span class="s"&gt;echo "Press 'q' to exit"&lt;/span&gt;
        &lt;span class="s"&gt;while : ; do&lt;/span&gt;
        &lt;span class="s"&gt;read -n 1 k &amp;lt;&amp;amp;1&lt;/span&gt;
        &lt;span class="s"&gt;if [[$k = q]] ; then&lt;/span&gt;
        &lt;span class="s"&gt;break&lt;/span&gt;
        &lt;span class="s"&gt;fi&lt;/span&gt;
        &lt;span class="s"&gt;done&lt;/span&gt;

&lt;span class="s"&gt;The payment-processing-container container has crashed and is being restarted for the 6th time due to a CrashLoopBackOff. The last state indicates that the container was terminated with an&lt;/span&gt;  
&lt;span class="s"&gt;exit code of 0, which suggests that the command executed successfully but did not produce any output.&lt;/span&gt;                                                                                         

&lt;span class="s"&gt;To investigate further, you can check the logs of the payment-processing-container container to see if there are any error messages or clues about what is causing the crash&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;                 


 &lt;span class="s"&gt;kubectl logs payment-processing-worker-747ccfb9db-njgmw -c payment-processing-container&lt;/span&gt;

&lt;span class="s"&gt;Additionally, you can check the configuration of the payment-processing-container container to ensure that it is running with the correct environment variables and settings.&lt;/span&gt;                 


 &lt;span class="s"&gt;kubectl describe pod payment-processing-worker-747ccfb9db-njgmw -c payment-processing-container&lt;/span&gt;

&lt;span class="s"&gt;This will provide more detailed information about the container's configuration and any errors that may be occurring.&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqn20ne98a66zirzqt1p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqn20ne98a66zirzqt1p.jpg" width="582" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;HolmesGPT can also be integrated more broadly with the Robusta platform, via an installation in the Kubernetes cluster with Helm…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.robusta.dev/master/configuration/ai-analysis.html#ai-analysis" rel="noopener noreferrer"&gt;AI Analysis - Robusta documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To do so, generate a configuration YAML file along these lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;root@k0s-incus:~# cat generated_values.yaml&lt;/span&gt; 
&lt;span class="na"&gt;globalConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;signing_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;568927d5-6e65-4c13-b3fe-fdc50e616fde&lt;/span&gt;
  &lt;span class="na"&gt;account_id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;a4d7cea6-fba3-4ce6-ba3d-941b55ec83db&lt;/span&gt;
&lt;span class="na"&gt;sinksConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;robusta_sink&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;robusta_ui_sink&lt;/span&gt;
      &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;TOKEN&amp;gt;&lt;/span&gt;
&lt;span class="na"&gt;enablePrometheusStack&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;kube-prometheus-stack&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;grafana&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;persistence&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;enablePlatformPlaybooks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;runner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;sendAdditionalTelemetry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;enableHolmesGPT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;holmes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;additionalEnvVars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ROBUSTA_AI&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94g2rh49y7rlmz6dn04z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94g2rh49y7rlmz6dn04z.png" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9sqitxrv1m1n9lm00yu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9sqitxrv1m1n9lm00yu.png" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using the commands and the YAML configuration file provided by the Robusta platform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@k0s-incus:~# helm repo add robusta https://robusta-charts.storage.googleapis.com &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; helm repo update
&lt;span class="s2"&gt;"robusta"&lt;/span&gt; has been added to your repositories
Hang tight &lt;span class="k"&gt;while &lt;/span&gt;we grab the latest from your chart repositories...
...Successfully got an update from the &lt;span class="s2"&gt;"robusta"&lt;/span&gt; chart repository
Update Complete. ⎈Happy Helming!⎈
root@k0s-incus:~# helm &lt;span class="nb"&gt;install &lt;/span&gt;robusta robusta/robusta &lt;span class="nt"&gt;-f&lt;/span&gt; ./generated_values.yaml &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;clusterName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"k0s-cluster"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;isSmallCluster&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--set&lt;/span&gt; holmes.resources.requests.memory&lt;span class="o"&gt;=&lt;/span&gt;512Mi &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--set&lt;/span&gt; kube-prometheus-stack.prometheus.prometheusSpec.retentionSize&lt;span class="o"&gt;=&lt;/span&gt;9GB &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--set&lt;/span&gt; kube-prometheus-stack.prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage&lt;span class="o"&gt;=&lt;/span&gt;10Gi &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--set&lt;/span&gt; kube-prometheus-stack.prometheus.prometheusSpec.resources.requests.memory&lt;span class="o"&gt;=&lt;/span&gt;512Mi
NAME: robusta
LAST DEPLOYED: Tue Jan 14 22:59:09 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
Thank you &lt;span class="k"&gt;for &lt;/span&gt;installing Robusta 0.20.0

As an open &lt;span class="nb"&gt;source &lt;/span&gt;project, we collect general usage statistics.
This data is extremely limited and contains only general metadata to &lt;span class="nb"&gt;help &lt;/span&gt;us understand usage patterns.
If you are willing to share additional data, please &lt;span class="k"&gt;do &lt;/span&gt;so! It really &lt;span class="nb"&gt;help &lt;/span&gt;us improve Robusta.

You can &lt;span class="nb"&gt;set &lt;/span&gt;sendAdditionalTelemetry: &lt;span class="nb"&gt;true &lt;/span&gt;as a Helm value to send exception reports and additional data.
This is disabled by default.

To opt-out of telemetry entirely, &lt;span class="nb"&gt;set &lt;/span&gt;a &lt;span class="nv"&gt;ENABLE_TELEMETRY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false &lt;/span&gt;environment variable on the robusta-runner deployment.
Note that &lt;span class="k"&gt;if &lt;/span&gt;the Robusta UI is enabled, telemetry cannot be disabled even &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="nv"&gt;ENABLE_TELEMETRY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false &lt;/span&gt;is set.

Visit the web UI at: https://platform.robusta.dev/

root@k0s-incus:~# helm &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-A&lt;/span&gt;
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
robusta default 2 2025-01-14 23:10:13.935906491 +0000 UTC deployed robusta-0.20.0 0.20.0

root@k0s-incus:~# kubectl get po,svc
NAME READY STATUS RESTARTS AGE
pod/alertmanager-robusta-kube-prometheus-st-alertmanager-0 0/2 Pending 0 2m
pod/payment-processing-worker-747ccfb9db-njgmw 0/1 CrashLoopBackOff 10 &lt;span class="o"&gt;(&lt;/span&gt;2m33s ago&lt;span class="o"&gt;)&lt;/span&gt; 28m
pod/prometheus-robusta-kube-prometheus-st-prometheus-0 0/2 Pending 0 2m
pod/robusta-forwarder-cd847ccc-wxc6d 1/1 Running 0 2m5s
pod/robusta-grafana-8588b8fb85-fv5vj 3/3 Running 0 2m5s
pod/robusta-holmes-55dd58ff6d-m4zth 1/1 Running 0 2m5s
pod/robusta-kube-prometheus-st-operator-6885c8f675-szncg 1/1 Running 0 2m5s
pod/robusta-kube-state-metrics-8667fd9775-s49z4 1/1 Running 0 2m5s
pod/robusta-prometheus-node-exporter-c6jvb 1/1 Running 0 2m5s
pod/robusta-prometheus-node-exporter-j6zp5 1/1 Running 0 2m5s
pod/robusta-runner-5d667b7d9c-dm2z7 1/1 Running 0 2m5s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
service/alertmanager-operated ClusterIP None &amp;lt;none&amp;gt; 9093/TCP,9094/TCP,9094/UDP 2m1s
service/kubernetes ClusterIP 10.96.0.1 &amp;lt;none&amp;gt; 443/TCP 94m
service/prometheus-operated ClusterIP None &amp;lt;none&amp;gt; 9090/TCP 2m1s
service/robusta-forwarder ClusterIP 10.102.7.41 &amp;lt;none&amp;gt; 80/TCP 2m5s
service/robusta-grafana ClusterIP 10.106.69.72 &amp;lt;none&amp;gt; 80/TCP 2m5s
service/robusta-holmes ClusterIP 10.110.124.241 &amp;lt;none&amp;gt; 80/TCP 2m5s
service/robusta-kube-prometheus-st-alertmanager ClusterIP 10.105.101.210 &amp;lt;none&amp;gt; 9093/TCP,8080/TCP 2m5s
service/robusta-kube-prometheus-st-operator ClusterIP 10.103.213.208 &amp;lt;none&amp;gt; 443/TCP 2m5s
service/robusta-kube-prometheus-st-prometheus ClusterIP 10.107.13.104 &amp;lt;none&amp;gt; 9090/TCP,8080/TCP 2m5s
service/robusta-kube-state-metrics ClusterIP 10.103.53.30 &amp;lt;none&amp;gt; 8080/TCP 2m5s
service/robusta-prometheus-node-exporter ClusterIP 10.102.243.65 &amp;lt;none&amp;gt; 9104/TCP 2m5s
service/robusta-runner ClusterIP 10.97.82.15 &amp;lt;none&amp;gt; 80/TCP 2m5s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I can then complete the installation with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@k0s-incus:~# helm upgrade robusta robusta/robusta &lt;span class="nt"&gt;-f&lt;/span&gt; ./generated_values.yaml &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;clusterName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"k0s-cluster"&lt;/span&gt;
Release &lt;span class="s2"&gt;"robusta"&lt;/span&gt; has been upgraded. Happy Helming!
NAME: robusta
LAST DEPLOYED: Tue Jan 14 23:14:02 2025
NAMESPACE: default
STATUS: deployed
REVISION: 5
NOTES:
Thank you &lt;span class="k"&gt;for &lt;/span&gt;installing Robusta 0.20.0

As an open &lt;span class="nb"&gt;source &lt;/span&gt;project, we collect general usage statistics.
This data is extremely limited and contains only general metadata to &lt;span class="nb"&gt;help &lt;/span&gt;us understand usage patterns.
If you are willing to share additional data, please &lt;span class="k"&gt;do &lt;/span&gt;so! It really &lt;span class="nb"&gt;help &lt;/span&gt;us improve Robusta.

You can &lt;span class="nb"&gt;set &lt;/span&gt;sendAdditionalTelemetry: &lt;span class="nb"&gt;true &lt;/span&gt;as a Helm value to send exception reports and additional data.
This is disabled by default.

To opt-out of telemetry entirely, &lt;span class="nb"&gt;set &lt;/span&gt;a &lt;span class="nv"&gt;ENABLE_TELEMETRY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false &lt;/span&gt;environment variable on the robusta-runner deployment.
Note that &lt;span class="k"&gt;if &lt;/span&gt;the Robusta UI is enabled, telemetry cannot be disabled even &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="nv"&gt;ENABLE_TELEMETRY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false &lt;/span&gt;is set.

Visit the web UI at: https://platform.robusta.dev/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The cluster shows up in Robusta:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ficw4j7jzoe4ehrbrs2xy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ficw4j7jzoe4ehrbrs2xy.png" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here too, via HolmesGPT, the platform can be queried about any issues encountered in the Kubernetes cluster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8in3sgby36e2fykc3msi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8in3sgby36e2fykc3msi.png" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3i7fyoioogyj5ghl6fq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3i7fyoioogyj5ghl6fq.png" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqezsha0632tfvou0kuj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqezsha0632tfvou0kuj.png" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhi4ei1otp229eztccxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhi4ei1otp229eztccxg.png" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All with a modest resource footprint in the cluster…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0238q9d64qdp2kwnh4i4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0238q9d64qdp2kwnh4i4.png" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using AI for troubleshooting and incident analysis reduces the time and human effort required, letting teams focus on more strategic tasks.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Tools like HolmesGPT and Ollama can be scaled with demand, which is particularly useful in production environments where the workload can vary significantly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We can therefore conclude that integrating AI into Kubernetes clusters with tools like HolmesGPT and Ollama, backed by a GPU instance provider such as RunPod, offers significant benefits in terms of efficiency, scalability, and fault tolerance.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;These technologies streamline the application lifecycle, simplify troubleshooting, and improve resource management, making Kubernetes operations more robust and more performant…&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqamrcp4ch25cpk7tsp68.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqamrcp4ch25cpk7tsp68.jpg" width="500" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To be continued!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>ai</category>
      <category>containers</category>
      <category>devops</category>
    </item>
    <item>
      <title>Implementing Retrieval Augmented Generation (RAG) privately on Kubernetes with KubeAI and…</title>
      <dc:creator>Karim</dc:creator>
      <pubDate>Mon, 30 Dec 2024 18:01:16 +0000</pubDate>
      <link>https://forem.com/deep75/implementer-le-retrieval-augmented-generation-rag-de-maniere-privee-sur-kubernetes-avec-kubeai-et-3dni</link>
      <guid>https://forem.com/deep75/implementer-le-retrieval-augmented-generation-rag-de-maniere-privee-sur-kubernetes-avec-kubeai-et-3dni</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2o6priydsa65wat4t8fv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2o6priydsa65wat4t8fv.png" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The combination of KubeAI and Weaviate offers a powerful solution for implementing Retrieval Augmented Generation (RAG) privately and scalably on Kubernetes. In this article, we will explore how to set up this architecture to build advanced AI applications.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;KubeAI is a platform for deploying open-source AI models on Kubernetes, offering a private alternative to cloud AI services&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://www.kubeai.org/" rel="noopener noreferrer"&gt;KubeAI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;KubeAI exposes an OpenAI-compatible HTTP API and can be thought of as a model operator that manages vLLM and Ollama servers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fziakh9ydpwdjg2so9t2x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fziakh9ydpwdjg2so9t2x.png" width="800" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To get started, KubeAI must be deployed on a Kubernetes cluster. Here are the main steps: I start from an Ubuntu 24.04 LTS instance on DigitalOcean running on premium dedicated Intel CPUs…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.digitalocean.com/blog/introducing-premium-cpu-optimized-droplets" rel="noopener noreferrer"&gt;Introducing Premium CPU-Optimized Droplets | DigitalOcean&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkn018uzj3yeof7yy0x5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkn018uzj3yeof7yy0x5.png" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On it, I install k3s once again to form my local Kubernetes cluster, along with the required clients (kubectl and helm):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://k3s.io/" rel="noopener noreferrer"&gt;K3s&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@kubeai:~# snap &lt;span class="nb"&gt;install &lt;/span&gt;kubectl &lt;span class="nt"&gt;--classic&lt;/span&gt;
2024-12-30T13:37:55Z INFO Waiting &lt;span class="k"&gt;for &lt;/span&gt;automatic snapd restart...
kubectl 1.31.4 from Canonical✓ installed
root@kubeai:~# snap &lt;span class="nb"&gt;install &lt;/span&gt;helm &lt;span class="nt"&gt;--classic&lt;/span&gt;
helm 3.16.4 from Snapcrafters✪ installed
root@kubeai:~# &lt;span class="nb"&gt;type &lt;/span&gt;kubectl &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;type &lt;/span&gt;helm
kubectl is /snap/bin/kubectl
helm is /snap/bin/helm

root@kubeai:~# curl &lt;span class="nt"&gt;-sfL&lt;/span&gt; https://get.k3s.io | sh -
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Finding release &lt;span class="k"&gt;for &lt;/span&gt;channel stable
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Using v1.31.4+k3s1 as release
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Downloading &lt;span class="nb"&gt;hash &lt;/span&gt;https://github.com/k3s-io/k3s/releases/download/v1.31.4+k3s1/sha256sum-amd64.txt
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.31.4+k3s1/k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Verifying binary download
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Installing k3s to /usr/local/bin/k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Skipping installation of SELinux RPM
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Skipping /usr/local/bin/kubectl symlink to k3s, &lt;span class="nb"&gt;command &lt;/span&gt;exists &lt;span class="k"&gt;in &lt;/span&gt;PATH at /snap/bin/kubectl
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating /usr/local/bin/crictl symlink to k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating /usr/local/bin/ctr symlink to k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating killall script /usr/local/bin/k3s-killall.sh
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
&lt;span class="o"&gt;[&lt;/span&gt;INFO] &lt;span class="nb"&gt;env&lt;/span&gt;: Creating environment file /etc/systemd/system/k3s.service.env
&lt;span class="o"&gt;[&lt;/span&gt;INFO] systemd: Creating service file /etc/systemd/system/k3s.service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
&lt;span class="o"&gt;[&lt;/span&gt;INFO] systemd: Starting k3s

root@kubeai:~# &lt;span class="nb"&gt;mkdir&lt;/span&gt; .kube &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cp&lt;/span&gt; /etc/rancher/k3s/k3s.yaml ~/.kube/config &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chmod &lt;/span&gt;600 ~/.kube/config
root@kubeai:~# helm &lt;span class="nb"&gt;ls
&lt;/span&gt;NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
root@kubeai:~# kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

To further debug and diagnose cluster problems, use &lt;span class="s1"&gt;'kubectl cluster-info dump'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
root@kubeai:~# kubectl get po,svc &lt;span class="nt"&gt;-A&lt;/span&gt;
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-ccb96694c-l6mzp 1/1 Running 0 3m51s
kube-system pod/helm-install-traefik-crd-462cg 0/1 Completed 0 3m51s
kube-system pod/helm-install-traefik-jbx28 0/1 Completed 1 3m51s
kube-system pod/local-path-provisioner-5cf85fd84d-hkdkl 1/1 Running 0 3m51s
kube-system pod/metrics-server-5985cbc9d7-vjmrj 1/1 Running 0 3m51s
kube-system pod/svclb-traefik-75ae73e0-5s2fq 2/2 Running 0 3m43s
kube-system pod/traefik-57b79cf995-b662r 1/1 Running 0 3m43s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
default service/kubernetes ClusterIP 10.43.0.1 &amp;lt;none&amp;gt; 443/TCP 3m58s
kube-system service/kube-dns ClusterIP 10.43.0.10 &amp;lt;none&amp;gt; 53/UDP,53/TCP,9153/TCP 3m53s
kube-system service/metrics-server ClusterIP 10.43.49.80 &amp;lt;none&amp;gt; 443/TCP 3m53s
kube-system service/traefik LoadBalancer 10.43.140.130 164.92.248.129 80:32667/TCP,443:32508/TCP 3m43s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4x06410zbxnjqjkw9zs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4x06410zbxnjqjkw9zs.jpg" width="535" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;KubeAI implements a subset of the OpenAI API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Implemented #
/v1/chat/completions
/v1/completions
/v1/embeddings
/v1/models
/v1/audio/transcriptions

# Planned #
# /v1/assistants/*
# /v1/batches/*
# /v1/fine_tuning/*
# /v1/images/*
# /v1/vector_stores/*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
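&lt;p&gt;Because these endpoints follow the OpenAI wire format, any OpenAI-compatible client can talk to KubeAI directly. Below is a minimal sketch using only the Python standard library; it assumes the kubeai service is forwarded locally (e.g. kubectl port-forward svc/kubeai 8000:80) and that the API lives under an /openai/v1 path prefix, so adjust the base URL to your setup.&lt;/p&gt;

```python
import json
import urllib.request

# Assumed base URL: the kubeai service forwarded locally
# (kubectl port-forward svc/kubeai 8000:80); the /openai/v1
# path prefix is an assumption about KubeAI's API layout.
BASE_URL = "http://localhost:8000/openai/v1"

def build_chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """POST the prompt to KubeAI's /chat/completions endpoint."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires the port-forward to be active):
#   print(chat("Say hello in one short sentence."))
```

&lt;p&gt;The model name is simply whatever alias is declared in the KubeAI catalog; here, the gpt-3.5-turbo alias will map to gemma2:2b.&lt;/p&gt;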



&lt;p&gt;I deploy KubeAI via Helm, declaring the models I need: an embedding model and a generative model (the Nomic embedding model stands in for text-embedding-ada-002, and Google Gemma 2 2B stands in for GPT-3.5-turbo):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/google/gemma-2-2b" rel="noopener noreferrer"&gt;google/gemma-2-2b · Hugging Face&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ollama.com/library/gemma2:2b" rel="noopener noreferrer"&gt;gemma2:2b&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;root@kubeai:~# cat kubeai-model-values.yaml&lt;/span&gt; 
&lt;span class="na"&gt;catalog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;text-embedding-ada-002&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;minReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;features&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TextEmbedding"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nomic&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ollama://nomic-embed-text"&lt;/span&gt;
    &lt;span class="na"&gt;engine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;OLlama&lt;/span&gt;
    &lt;span class="na"&gt;resourceProfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cpu:1&lt;/span&gt;
  &lt;span class="na"&gt;gpt-3.5-turbo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;minReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;features&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TextGeneration"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;google&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ollama://gemma2:2b"&lt;/span&gt;
    &lt;span class="na"&gt;engine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;OLlama&lt;/span&gt;
    &lt;span class="na"&gt;resourceProfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cpu:4&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@kubeai:~# helm repo add kubeai https://www.kubeai.org &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; helm repo update
&lt;span class="s2"&gt;"kubeai"&lt;/span&gt; has been added to your repositories
Hang tight &lt;span class="k"&gt;while &lt;/span&gt;we grab the latest from your chart repositories...
...Successfully got an update from the &lt;span class="s2"&gt;"kubeai"&lt;/span&gt; chart repository
Update Complete. ⎈Happy Helming!⎈
root@kubeai:~# helm &lt;span class="nb"&gt;install &lt;/span&gt;kubeai kubeai/kubeai
NAME: kubeai
LAST DEPLOYED: Mon Dec 30 13:47:00 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

root@kubeai:~# helm &lt;span class="nb"&gt;install &lt;/span&gt;kubeai-models kubeai/models &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-f&lt;/span&gt; ./kubeai-model-values.yaml
NAME: kubeai-models
LAST DEPLOYED: Mon Dec 30 13:47:11 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

root@kubeai:~# kubectl get po,svc
NAME READY STATUS RESTARTS AGE
pod/kubeai-6b7b6866fb-26qwq 1/1 Running 0 3m1s
pod/model-gpt-3.5-turbo-545cf68d8d-v97gs 1/1 Running 0 27s
pod/model-text-embedding-ada-002-64dc467cf4-8m64j 1/1 Running 0 30s
pod/openwebui-55d54bd69-k4ttm 1/1 Running 0 3m1s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
service/kubeai ClusterIP 10.43.24.35 &amp;lt;none&amp;gt; 80/TCP,8080/TCP 3m1s
service/kubernetes ClusterIP 10.43.0.1 &amp;lt;none&amp;gt; 443/TCP 9m12s
service/openwebui ClusterIP 10.43.107.76 &amp;lt;none&amp;gt; 80/TCP 3m1s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ollama is indeed running under KubeAI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;weaviate&lt;span class="o"&gt;)&lt;/span&gt; root@kubeai:~# kubectl logs pod/model-gpt-3.5-turbo-545cf68d8d-v97gs
Couldn&lt;span class="s1"&gt;'t find '&lt;/span&gt;/root/.ollama/id_ed25519&lt;span class="s1"&gt;'. Generating new private key.
Your new public key is: 

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAGHk39koguimeZFYBSjv9LDxjj5vZRjFmwznLdXSWUV

2024/12/30 13:49:36 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:8000 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:999999h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-12-30T13:49:36.768Z level=INFO source=images.go:757 msg="total blobs: 0"
time=2024-12-30T13:49:36.768Z level=INFO source=images.go:764 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env: export GIN_MODE=release
 - using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST /api/pull --&amp;gt; github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --&amp;gt; github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --&amp;gt; github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --&amp;gt; github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --&amp;gt; github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --&amp;gt; github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --&amp;gt; github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --&amp;gt; github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --&amp;gt; github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --&amp;gt; github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --&amp;gt; github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --&amp;gt; github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --&amp;gt; github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --&amp;gt; github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
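&lt;p&gt;Before wiring Weaviate in, you can confirm that both catalog entries are visible through the OpenAI-compatible model listing. A small sketch, under the same assumptions as before (kubeai service forwarded locally, /openai/v1 path prefix):&lt;/p&gt;

```python
import json
import urllib.request

def extract_model_ids(listing: dict) -> list:
    """Pull the ids out of an OpenAI-style model-list response."""
    return sorted(item["id"] for item in listing.get("data", []))

def list_models(base_url: str = "http://localhost:8000/openai/v1") -> list:
    """Fetch /v1/models from KubeAI and return the model ids."""
    with urllib.request.urlopen(f"{base_url}/models", timeout=30) as resp:
        return extract_model_ids(json.load(resp))

# Example (with the kubeai port-forward active):
#   print(list_models())  # should list both catalog aliases
```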



&lt;p&gt;Weaviate is a vector search engine that integrates seamlessly with KubeAI’s embedding and generative models. I will use KubeAI as the OpenAI endpoint for Weaviate:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://weaviate.io/developers/weaviate/model-providers/kubeai" rel="noopener noreferrer"&gt;KubeAI + Weaviate | Weaviate&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a file named &lt;strong&gt;&lt;em&gt;weaviate-values.yaml&lt;/em&gt;&lt;/strong&gt; with the following content for the Helm installation of Weaviate (enabling the &lt;em&gt;text2vec-openai&lt;/em&gt; and &lt;em&gt;generative-openai&lt;/em&gt; modules; apiKey is ignored here because KubeAI serves as the OpenAI endpoint):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;root@kubeai:~# cat weaviate-values.yaml&lt;/span&gt; 
&lt;span class="na"&gt;modules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;text2vec-openai&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;thisIsIgnored&lt;/span&gt;
  &lt;span class="na"&gt;generative-openai&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;thisIsIgnored&lt;/span&gt;
  &lt;span class="na"&gt;default_vectorizer_module&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;text2vec-openai&lt;/span&gt;
&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# To prevent Weaviate being exposed publicly&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;

&lt;span class="s"&gt;root@kubeai:~# helm repo add weaviate https://weaviate.github.io/weaviate-helm &amp;amp;&amp;amp; helm repo update&lt;/span&gt;
&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;weaviate"&lt;/span&gt; &lt;span class="s"&gt;has been added to your repositories&lt;/span&gt;
&lt;span class="s"&gt;Hang tight while we grab the latest from your chart repositories...&lt;/span&gt;
&lt;span class="s"&gt;...Successfully got an update from the "weaviate" chart repository&lt;/span&gt;
&lt;span class="s"&gt;...Successfully got an update from the "kubeai" chart repository&lt;/span&gt;
&lt;span class="s"&gt;Update Complete. ⎈Happy Helming!⎈&lt;/span&gt;

&lt;span class="s"&gt;root@kubeai:~# helm install \&lt;/span&gt;
  &lt;span class="s"&gt;"weaviate" \&lt;/span&gt;
  &lt;span class="s"&gt;weaviate/weaviate \&lt;/span&gt;
  &lt;span class="s"&gt;-f weaviate-values.yaml&lt;/span&gt;
&lt;span class="na"&gt;NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;weaviate&lt;/span&gt;
&lt;span class="na"&gt;LAST DEPLOYED&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Mon Dec 30 13:51:48 &lt;/span&gt;&lt;span class="m"&gt;2024&lt;/span&gt;
&lt;span class="na"&gt;NAMESPACE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;STATUS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deployed&lt;/span&gt;
&lt;span class="na"&gt;REVISION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="na"&gt;TEST SUITE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;None&lt;/span&gt;

&lt;span class="s"&gt;root@kubeai:~# kubectl get po,svc&lt;/span&gt;
&lt;span class="s"&gt;NAME READY STATUS RESTARTS AGE&lt;/span&gt;
&lt;span class="s"&gt;pod/kubeai-6b7b6866fb-26qwq 1/1 Running 0 5m18s&lt;/span&gt;
&lt;span class="s"&gt;pod/model-gpt-3.5-turbo-545cf68d8d-v97gs 1/1 Running 0 2m44s&lt;/span&gt;
&lt;span class="s"&gt;pod/model-text-embedding-ada-002-64dc467cf4-8m64j 1/1 Running 0 2m47s&lt;/span&gt;
&lt;span class="s"&gt;pod/openwebui-55d54bd69-k4ttm 1/1 Running 0 5m18s&lt;/span&gt;
&lt;span class="s"&gt;pod/weaviate-0 1/1 Running 0 31s&lt;/span&gt;

&lt;span class="s"&gt;NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE&lt;/span&gt;
&lt;span class="s"&gt;service/kubeai ClusterIP 10.43.24.35 &amp;lt;none&amp;gt; 80/TCP,8080/TCP 5m18s&lt;/span&gt;
&lt;span class="s"&gt;service/kubernetes ClusterIP 10.43.0.1 &amp;lt;none&amp;gt; 443/TCP 11m&lt;/span&gt;
&lt;span class="s"&gt;service/openwebui ClusterIP 10.43.107.76 &amp;lt;none&amp;gt; 80/TCP 5m18s&lt;/span&gt;
&lt;span class="s"&gt;service/weaviate ClusterIP 10.43.211.85 &amp;lt;none&amp;gt; 80/TCP 31s&lt;/span&gt;
&lt;span class="s"&gt;service/weaviate-grpc LoadBalancer 10.43.37.63 164.92.248.129 50051:31360/TCP 31s&lt;/span&gt;
&lt;span class="s"&gt;service/weaviate-headless ClusterIP None &amp;lt;none&amp;gt; 80/TCP 31s&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
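&lt;p&gt;Since the apiKey in the values file is ignored, Weaviate still needs to know where its “OpenAI” endpoint actually lives. One way, following Weaviate’s model-provider integration, is to pass per-request headers when opening the client connection; the header names and the in-cluster URL http://kubeai/openai below are assumptions based on the KubeAI service name, so adapt them to your cluster.&lt;/p&gt;

```python
def kubeai_openai_headers(base_url: str = "http://kubeai/openai") -> dict:
    """Headers that point Weaviate's OpenAI modules at KubeAI instead."""
    return {
        "X-OpenAI-Api-Key": "thisIsIgnored",  # required but never checked here
        "X-OpenAI-BaseURL": base_url,         # assumed in-cluster KubeAI URL
    }

def connect_weaviate(port: int = 8080, grpc_port: int = 50051):
    """Open a local Weaviate connection that routes OpenAI calls to KubeAI."""
    import weaviate  # weaviate-client; imported lazily to keep the helper pure
    return weaviate.connect_to_local(
        port=port,
        grpc_port=grpc_port,
        headers=kubeai_openai_headers(),
    )
```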



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7mw0or4p8v17j2fhwwt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7mw0or4p8v17j2fhwwt.png" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will run queries against Weaviate from Python. I set up the required environment with &lt;strong&gt;uv&lt;/strong&gt; (an extremely fast Python package and project manager, written in Rust):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/astral-sh/uv" rel="noopener noreferrer"&gt;GitHub - astral-sh/uv: An extremely fast Python package and project manager, written in Rust.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.astral.sh/uv/" rel="noopener noreferrer"&gt;uv&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@kubeai:~# curl &lt;span class="nt"&gt;-LsSf&lt;/span&gt; https://astral.sh/uv/install.sh | sh
downloading uv 0.5.13 x86_64-unknown-linux-gnu
no checksums to verify
installing to /root/.local/bin
  uv
  uvx
everything&lt;span class="s1"&gt;'s installed!

To add $HOME/.local/bin to your PATH, either restart your shell or run:

    source $HOME/.local/bin/env (sh, bash, zsh)
    source $HOME/.local/bin/env.fish (fish)

root@kubeai:~# source $HOME/.local/bin/env

root@kubeai:~# uv
An extremely fast Python package manager.

Usage: uv [OPTIONS] &amp;lt;COMMAND&amp;gt;

Commands:
  run Run a command or script
  init Create a new project
  add Add dependencies to the project
  remove Remove dependencies from the project
  sync Update the project'&lt;/span&gt;s environment
  lock Update the project&lt;span class="s1"&gt;'s lockfile
  export Export the project'&lt;/span&gt;s lockfile to an alternate format
  tree Display the project&lt;span class="s1"&gt;'s dependency tree
  tool Run and install commands provided by Python packages
  python Manage Python versions and installations
  pip Manage Python packages with a pip-compatible interface
  venv Create a virtual environment
  build Build Python packages into source distributions and wheels
  publish Upload distributions to an index
  cache Manage uv'&lt;/span&gt;s cache
  self Manage the uv executable
  version Display uv&lt;span class="s1"&gt;'s version
  help Display documentation for a command

Cache options:
  -n, --no-cache Avoid reading from or writing to the cache, instead using a temporary directory for the duration of the operation [env: UV_NO_CACHE=]
      --cache-dir &amp;lt;CACHE_DIR&amp;gt; Path to the cache directory [env: UV_CACHE_DIR=]

Python options:
      --python-preference &amp;lt;PYTHON_PREFERENCE&amp;gt; Whether to prefer uv-managed or system Python installations [env: UV_PYTHON_PREFERENCE=] [possible values: only-managed, managed, system,
                                               only-system]
      --no-python-downloads Disable automatic downloads of Python. [env: "UV_PYTHON_DOWNLOADS=never"]

Global options:
  -q, --quiet Do not print any output
  -v, --verbose... Use verbose output
      --color &amp;lt;COLOR_CHOICE&amp;gt; Control colors in output [default: auto] [possible values: auto, always, never]
      --native-tls Whether to load TLS certificates from the platform'&lt;/span&gt;s native certificate store &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;: &lt;span class="nv"&gt;UV_NATIVE_TLS&lt;/span&gt;&lt;span class="o"&gt;=]&lt;/span&gt;
      &lt;span class="nt"&gt;--offline&lt;/span&gt; Disable network access &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;: &lt;span class="nv"&gt;UV_OFFLINE&lt;/span&gt;&lt;span class="o"&gt;=]&lt;/span&gt;
      &lt;span class="nt"&gt;--allow-insecure-host&lt;/span&gt; &amp;lt;ALLOW_INSECURE_HOST&amp;gt; Allow insecure connections to a host &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;: &lt;span class="nv"&gt;UV_INSECURE_HOST&lt;/span&gt;&lt;span class="o"&gt;=]&lt;/span&gt;
      &lt;span class="nt"&gt;--no-progress&lt;/span&gt; Hide all progress outputs &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;: &lt;span class="nv"&gt;UV_NO_PROGRESS&lt;/span&gt;&lt;span class="o"&gt;=]&lt;/span&gt;
      &lt;span class="nt"&gt;--directory&lt;/span&gt; &amp;lt;DIRECTORY&amp;gt; Change to the given directory prior to running the &lt;span class="nb"&gt;command&lt;/span&gt;
      &lt;span class="nt"&gt;--project&lt;/span&gt; &amp;lt;PROJECT&amp;gt; Run the &lt;span class="nb"&gt;command &lt;/span&gt;within the given project directory
      &lt;span class="nt"&gt;--config-file&lt;/span&gt; &amp;lt;CONFIG_FILE&amp;gt; The path to a &lt;span class="sb"&gt;`&lt;/span&gt;uv.toml&lt;span class="sb"&gt;`&lt;/span&gt; file to use &lt;span class="k"&gt;for &lt;/span&gt;configuration &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;: &lt;span class="nv"&gt;UV_CONFIG_FILE&lt;/span&gt;&lt;span class="o"&gt;=]&lt;/span&gt;
      &lt;span class="nt"&gt;--no-config&lt;/span&gt; Avoid discovering configuration files &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;pyproject.toml&lt;span class="sb"&gt;`&lt;/span&gt;, &lt;span class="sb"&gt;`&lt;/span&gt;uv.toml&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;: &lt;span class="nv"&gt;UV_NO_CONFIG&lt;/span&gt;&lt;span class="o"&gt;=]&lt;/span&gt;
  &lt;span class="nt"&gt;-h&lt;/span&gt;, &lt;span class="nt"&gt;--help&lt;/span&gt; Display the concise &lt;span class="nb"&gt;help &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;this &lt;span class="nb"&gt;command&lt;/span&gt;
  &lt;span class="nt"&gt;-V&lt;/span&gt;, &lt;span class="nt"&gt;--version&lt;/span&gt; Display the uv version

Use &lt;span class="sb"&gt;`&lt;/span&gt;uv &lt;span class="nb"&gt;help&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;more details.

root@kubeai:~# uv self update
info: Checking &lt;span class="k"&gt;for &lt;/span&gt;updates...
success: You&lt;span class="s1"&gt;'re on the latest version of uv (v0.5.13)
root@kubeai:~# uv venv weaviate
Using CPython 3.12.3 interpreter at: /usr/bin/python3
Creating virtual environment at: weaviate
Activate with: source weaviate/bin/activate

root@kubeai:~# source weaviate/bin/activate
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this stage, install the Python client for Weaviate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;weaviate&lt;span class="o"&gt;)&lt;/span&gt; root@kubeai:~# uv pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-U&lt;/span&gt; weaviate-client requests
Using Python 3.12.3 environment at: weaviate
Resolved 25 packages &lt;span class="k"&gt;in &lt;/span&gt;382ms
Prepared 25 packages &lt;span class="k"&gt;in &lt;/span&gt;252ms
Installed 25 packages &lt;span class="k"&gt;in &lt;/span&gt;11ms
 + annotated-types&lt;span class="o"&gt;==&lt;/span&gt;0.7.0
 + &lt;span class="nv"&gt;anyio&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;4.7.0
 + &lt;span class="nv"&gt;authlib&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;1.3.1
 + &lt;span class="nv"&gt;certifi&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;2024.12.14
 + &lt;span class="nv"&gt;cffi&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;1.17.1
 + charset-normalizer&lt;span class="o"&gt;==&lt;/span&gt;3.4.1
 + &lt;span class="nv"&gt;cryptography&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;44.0.0
 + &lt;span class="nv"&gt;grpcio&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;1.68.1
 + grpcio-health-checking&lt;span class="o"&gt;==&lt;/span&gt;1.68.1
 + grpcio-tools&lt;span class="o"&gt;==&lt;/span&gt;1.68.1
 + &lt;span class="nv"&gt;h11&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;0.14.0
 + &lt;span class="nv"&gt;httpcore&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;1.0.7
 + &lt;span class="nv"&gt;httpx&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;0.28.1
 + &lt;span class="nv"&gt;idna&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;3.10
 + &lt;span class="nv"&gt;protobuf&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;5.29.2
 + &lt;span class="nv"&gt;pycparser&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;2.22
 + &lt;span class="nv"&gt;pydantic&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;2.10.4
 + pydantic-core&lt;span class="o"&gt;==&lt;/span&gt;2.27.2
 + &lt;span class="nv"&gt;requests&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;2.32.3
 + &lt;span class="nv"&gt;setuptools&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;75.6.0
 + &lt;span class="nv"&gt;sniffio&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;1.3.1
 + typing-extensions&lt;span class="o"&gt;==&lt;/span&gt;4.12.2
 + &lt;span class="nv"&gt;urllib3&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;2.3.0
 + &lt;span class="nv"&gt;validators&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;0.34.0
 + weaviate-client&lt;span class="o"&gt;==&lt;/span&gt;4.10.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Weaviate is not publicly exposed in this setup, so I configure local port forwards to reach the Weaviate services:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;weaviate&lt;span class="o"&gt;)&lt;/span&gt; root@kubeai:~# screen &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="nt"&gt;-S&lt;/span&gt; weaviate-http
&lt;span class="o"&gt;[&lt;/span&gt;detached from 11693.weaviate-http]
&lt;span class="o"&gt;(&lt;/span&gt;weaviate&lt;span class="o"&gt;)&lt;/span&gt; root@kubeai:~# screen &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="nt"&gt;-S&lt;/span&gt; weaviate-grpc
&lt;span class="o"&gt;[&lt;/span&gt;detached from 11996.weaviate-grpc]
&lt;span class="o"&gt;(&lt;/span&gt;weaviate&lt;span class="o"&gt;)&lt;/span&gt; root@kubeai:~# &lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; screenlog.0
root@kubeai:~# kubectl port-forward svc/weaviate 8080:80 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; kubectl port-forward svc/weaviate-grpc 50051:50051
Forwarding from 127.0.0.1:8080 -&amp;gt; 8080
Forwarding from &lt;span class="o"&gt;[&lt;/span&gt;::1]:8080 -&amp;gt; 8080
root@kubeai:~# kubectl port-forward svc/weaviate-grpc 50051:50051
Forwarding from 127.0.0.1:50051 -&amp;gt; 50051
Forwarding from &lt;span class="o"&gt;[&lt;/span&gt;::1]:50051 -&amp;gt; 50051

&lt;span class="o"&gt;(&lt;/span&gt;weaviate&lt;span class="o"&gt;)&lt;/span&gt; root@kubeai:~# netstat &lt;span class="nt"&gt;-tunlp&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;kubectl
tcp 0 0 127.0.0.1:8080 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 11710/kubectl       
tcp 0 0 127.0.0.1:50051 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 12018/kubectl       
tcp6 0 0 ::1:8080 :::&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 11710/kubectl       
tcp6 0 0 ::1:50051 :::&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 12018/kubectl  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
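&lt;p&gt;With both forwards up, a quick sanity check from Python confirms that the client can reach Weaviate over HTTP and gRPC before any data is imported. This is a minimal sketch; the helper simply mirrors the LOCAL:REMOTE specs passed to kubectl port-forward above.&lt;/p&gt;

```python
def parse_port_forward(spec: str) -> tuple:
    """Split a kubectl 'LOCAL:REMOTE' port-forward spec into two ints."""
    local_port, remote_port = spec.split(":")
    return int(local_port), int(remote_port)

def check_weaviate(http_spec: str = "8080:80", grpc_spec: str = "50051:50051") -> bool:
    """Return True when the forwarded Weaviate answers on both local ports."""
    import weaviate  # weaviate-client, installed above with uv
    http_local, _ = parse_port_forward(http_spec)
    grpc_local, _ = parse_port_forward(grpc_spec)
    with weaviate.connect_to_local(port=http_local, grpc_port=grpc_local) as client:
        return client.is_ready()

# Example (with the port-forwards active):
#   print(check_weaviate())
```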



&lt;p&gt;Let’s now walk through a semantic search with an embedding model, including collecting and importing data. To do so, create and run the following Python file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;weaviate&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="nd"&gt;@kubeai&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;~&lt;/span&gt;&lt;span class="c1"&gt;# cat create-collection.py 
&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;weaviate&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;weaviate.classes.config&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Configure&lt;/span&gt;

&lt;span class="c1"&gt;# This works due to port forward in previous step
&lt;/span&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;weaviate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect_to_local&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;grpc_port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50051&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

    &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;collections&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Question&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;vectorizer_config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;Configure&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Vectorizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text2vec_openai&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text-embedding-ada-002&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://kubeai/openai&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;generative_config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;Configure&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Generative&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-3.5-turbo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://kubeai/openai&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# import data
&lt;/span&gt;    &lt;span class="n"&gt;resp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://raw.githubusercontent.com/weaviate-tutorials/quickstart/main/data/jeopardy_tiny.json&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Load data
&lt;/span&gt;
    &lt;span class="n"&gt;question_objs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;list&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;question_objs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;answer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Answer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;question&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Question&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;category&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Category&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="n"&gt;questions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;collections&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Question&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;questions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insert_many&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;question_objs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Data imported successfully&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;weaviate&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="nd"&gt;@kubeai&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;~&lt;/span&gt;&lt;span class="c1"&gt;# python create-collection.py
&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Data imported successfully&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The import relies on this small JSON file of questions and answers from the game show Jeopardy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[{"Category":"SCIENCE","Question":"This organ removes excess glucose from the blood &amp;amp; stores it as glycogen","Answer":"Liver"},{"Category":"ANIMALS","Question":"It's the only living mammal in the order Proboseidea","Answer":"Elephant"},{"Category":"ANIMALS","Question":"The gavial looks very much like a crocodile except for this bodily feature","Answer":"the nose or snout"},{"Category":"ANIMALS","Question":"Weighing around a ton, the eland is the largest species of this animal in Africa","Answer":"Antelope"},{"Category":"ANIMALS","Question":"Heaviest of all poisonous snakes is this North American rattlesnake","Answer":"the diamondback rattler"},{"Category":"SCIENCE","Question":"2000 news: the Gunnison sage grouse isn't just another northern sage grouse, but a new one of this classification","Answer":"species"},{"Category":"SCIENCE","Question":"A metal that is ductile can be pulled into this while cold &amp;amp; under pressure","Answer":"wire"},{"Category":"SCIENCE","Question":"In 1953 Watson &amp;amp; Crick built a model of the molecular structure of this, the gene-carrying substance","Answer":"DNA"},{"Category":"SCIENCE","Question":"Changes in the tropospheric layer of this are what gives us weather","Answer":"the atmosphere"},{"Category":"SCIENCE","Question":"In 70-degree air, a plane traveling at about 1,130 feet per second breaks it","Answer":"Sound barrier"}]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
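&lt;p&gt;The import loop in create-collection.py simply maps the capitalized source fields onto lowercase property names before insertion. A minimal offline sketch of that reshaping, using two records adapted from the file above:&lt;/p&gt;

```python
import json

# Two records adapted from the jeopardy_tiny.json excerpt above
raw = json.loads("""[
  {"Category": "SCIENCE", "Question": "This organ removes excess glucose from the blood and stores it as glycogen", "Answer": "Liver"},
  {"Category": "ANIMALS", "Question": "It's the only living mammal in the order Proboseidea", "Answer": "Elephant"}
]""")

# Same reshaping as the import loop: capitalized source keys
# become the lowercase property names of the collection.
question_objs = [
    {"answer": d["Answer"], "question": d["Question"], "category": d["Category"]}
    for d in raw
]

print(question_objs[0]["answer"])
```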



&lt;p&gt;The collection is now created and the data is imported: the vectors were generated by KubeAI and stored in Weaviate. I can run a semantic search on the term “biology”, which uses those embeddings, by creating this Python file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;weaviate&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="nd"&gt;@kubeai&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;~&lt;/span&gt;&lt;span class="c1"&gt;# cat search.py 
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;weaviate&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;weaviate.classes.config&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Configure&lt;/span&gt;

&lt;span class="c1"&gt;# This works due to port forward in previous step
&lt;/span&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;weaviate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect_to_local&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;grpc_port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50051&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;questions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;collections&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Question&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;questions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;near_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;biology&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;objects&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;properties&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Inspect the first object
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which returns the following response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;weaviate&lt;span class="o"&gt;)&lt;/span&gt; root@kubeai:~# python search.py
&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s1"&gt;'answer'&lt;/span&gt;: &lt;span class="s1"&gt;'DNA'&lt;/span&gt;, &lt;span class="s1"&gt;'question'&lt;/span&gt;: &lt;span class="s1"&gt;'In 1953 Watson &amp;amp; Crick built a model of the molecular structure of this, the gene-carrying substance'&lt;/span&gt;, &lt;span class="s1"&gt;'category'&lt;/span&gt;: &lt;span class="s1"&gt;'SCIENCE'&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I can change the semantic search to another term, “poison” for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;weaviate&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="nd"&gt;@kubeai&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;~&lt;/span&gt;&lt;span class="c1"&gt;# cat search.py 
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;weaviate&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;weaviate.classes.config&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Configure&lt;/span&gt;

&lt;span class="c1"&gt;# This works due to port forward in previous step
&lt;/span&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;weaviate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect_to_local&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;grpc_port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50051&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;questions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;collections&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Question&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;questions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;near_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;poison&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;objects&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;properties&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Inspect the first object
&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;weaviate&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="nd"&gt;@kubeai&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;~&lt;/span&gt;&lt;span class="c1"&gt;# python search.py
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;answer&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;the diamondback rattler&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;question&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Heaviest of all poisonous snakes is this North American rattlesnake&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;category&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ANIMALS&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
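&lt;p&gt;Under the hood, near_text embeds the query text with the configured vectorizer and ranks stored objects by vector distance. A toy sketch of that ranking, with hand-made 3-dimensional vectors standing in for the real embeddings (the vectors below are made up for illustration):&lt;/p&gt;

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings; in reality these come from text-embedding-ada-002 via KubeAI
objects = {
    "the diamondback rattler": [0.9, 0.1, 0.0],
    "DNA": [0.1, 0.9, 0.2],
}
query_vector = [0.8, 0.2, 0.1]  # pretend embedding of "poison"

# Rank objects by similarity to the query, most similar first
ranked = sorted(objects, key=lambda k: cosine_similarity(query_vector, objects[k]), reverse=True)
print(ranked[0])
```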



&lt;p&gt;Now let’s move on to generative search, which uses the generative model (LLM text generation). That model runs locally and is managed by KubeAI. Create a file named generate.py with the following content, which generates a tweet with emojis based on the previous semantic search (still going through the local port forwards):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;weaviate&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="nd"&gt;@kubeai&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;~&lt;/span&gt;&lt;span class="c1"&gt;# cat generate.py 
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;weaviate&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;weaviate.classes.config&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Configure&lt;/span&gt;

&lt;span class="c1"&gt;# This works due to port forward in previous step
&lt;/span&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;weaviate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect_to_local&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;grpc_port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50051&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;questions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;collections&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Question&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;questions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;near_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;biology&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;grouped_task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Write a tweet with emojis about these facts.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;generated&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Inspect the generated text
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which produces this response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;weaviate&lt;span class="o"&gt;)&lt;/span&gt; root@kubeai:~# python generate.py
🧬 &lt;span class="k"&gt;**&lt;/span&gt;Watson &amp;amp; Crick&lt;span class="k"&gt;**&lt;/span&gt; cracked the code &lt;span class="k"&gt;in &lt;/span&gt;1953! 🤯 They built a model of DNA, the blueprint of life. 🧬

🧠 &lt;span class="k"&gt;**&lt;/span&gt;Liver power!&lt;span class="k"&gt;**&lt;/span&gt; 💪 This organ keeps your blood sugar balanced by storing glucose as glycogen. 🩸 &lt;span class="c"&gt;#ScienceFacts #Biology&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
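&lt;p&gt;Conceptually, grouped_task is retrieval-augmented generation: Weaviate first retrieves the matching objects, then sends their properties together with the task to the generative model as a single prompt. A rough offline sketch of that prompt assembly (the exact template Weaviate uses internally may differ):&lt;/p&gt;

```python
# Objects as retrieved by the "biology" query earlier
retrieved = [
    {"answer": "DNA", "question": "In 1953 Watson and Crick built a model of the molecular structure of this, the gene-carrying substance"},
    {"answer": "Liver", "question": "This organ removes excess glucose from the blood and stores it as glycogen"},
]
grouped_task = "Write a tweet with emojis about these facts."

# Concatenate the retrieved properties, then append the task:
# a combined string like this is what the generative model receives.
facts = "\n".join(f"- {obj['question']} ({obj['answer']})" for obj in retrieved)
prompt = f"{facts}\n\n{grouped_task}"
print(prompt)
```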



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrzx8wa8cqfcqb344owv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrzx8wa8cqfcqb344owv.jpg" width="688" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Importing the data made it possible to run searches and generate content with models managed by KubeAI. KubeAI can also be used with LangChain, which simplifies building LLM-powered applications:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://python.langchain.com/docs/introduction/" rel="noopener noreferrer"&gt;Introduction | 🦜️🔗 LangChain&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LangChain can indeed talk to KubeAI’s OpenAI-compatible API. Install the LangChain OpenAI Python client with &lt;strong&gt;uv&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;weaviate&lt;span class="o"&gt;)&lt;/span&gt; root@kubeai:~# uv pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-U&lt;/span&gt; langchain_openai
Using Python 3.12.3 environment at: weaviate
Resolved 30 packages &lt;span class="k"&gt;in &lt;/span&gt;1.32s
Prepared 16 packages &lt;span class="k"&gt;in &lt;/span&gt;93ms
Installed 16 packages &lt;span class="k"&gt;in &lt;/span&gt;7ms
 + &lt;span class="nv"&gt;distro&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;1.9.0
 + &lt;span class="nv"&gt;jiter&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;0.8.2
 + &lt;span class="nv"&gt;jsonpatch&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;1.33
 + &lt;span class="nv"&gt;jsonpointer&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;3.0.0
 + langchain-core&lt;span class="o"&gt;==&lt;/span&gt;0.3.28
 + langchain-openai&lt;span class="o"&gt;==&lt;/span&gt;0.2.14
 + &lt;span class="nv"&gt;langsmith&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;0.2.6
 + &lt;span class="nv"&gt;openai&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;1.58.1
 + &lt;span class="nv"&gt;orjson&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;3.10.13
 + &lt;span class="nv"&gt;packaging&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;24.2
 + &lt;span class="nv"&gt;pyyaml&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;6.0.2
 + &lt;span class="nv"&gt;regex&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;2024.11.6
 + requests-toolbelt&lt;span class="o"&gt;==&lt;/span&gt;1.0.0
 + &lt;span class="nv"&gt;tenacity&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;9.0.0
 + &lt;span class="nv"&gt;tiktoken&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;0.8.0
 + &lt;span class="nv"&gt;tqdm&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;4.67.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To make this easier, access KubeAI’s OpenAI-compatible API locally through a port forward to the KubeAI service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;weaviate&lt;span class="o"&gt;)&lt;/span&gt; root@kubeai:~# screen &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="nt"&gt;-S&lt;/span&gt; kubeai
&lt;span class="o"&gt;[&lt;/span&gt;detached from 15501.kubeai]
&lt;span class="o"&gt;(&lt;/span&gt;weaviate&lt;span class="o"&gt;)&lt;/span&gt; root@kubeai:~# &lt;span class="nb"&gt;cat &lt;/span&gt;screenlog.0 
root@kubeai:~# kubectl port-forward svc/kubeai 8000:80
Forwarding from 127.0.0.1:8000 -&amp;gt; 8000
Forwarding from &lt;span class="o"&gt;[&lt;/span&gt;::1]:8000 -&amp;gt; 8000
&lt;span class="o"&gt;(&lt;/span&gt;weaviate&lt;span class="o"&gt;)&lt;/span&gt; root@kubeai:~# netstat &lt;span class="nt"&gt;-tunlp&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;kubectl
tcp 0 0 127.0.0.1:8000 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 15511/kubectl       
tcp6 0 0 ::1:8000 :::&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 15511/kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a simple Python file that uses LangChain connected to KubeAI (it is this OpenAI-compatible interface that is consumed, instead of the default public OpenAI API):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;weaviate&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="nd"&gt;@kubeai&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;~&lt;/span&gt;&lt;span class="c1"&gt;# cat test-langchain.py 
&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;

&lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-3.5-turbo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;max_tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;max_retries&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;thisIsIgnored&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:8000/openai/v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;messages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a helpful assistant in cloud native technologies&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;How can i create a kubernetes cluster with k3s ?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;ai_msg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ai_msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
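&lt;p&gt;Under the hood, ChatOpenAI posts a JSON body to base_url/chat/completions; KubeAI serves that same OpenAI chat-completions schema, which is why only base_url needs to change. A minimal offline sketch of the request body the script above produces (field names follow the OpenAI chat API; values mirror the script):&lt;/p&gt;

```python
import json

payload = {
    "model": "gpt-3.5-turbo",
    "temperature": 0,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant in cloud native technologies"},
        {"role": "user", "content": "How can i create a kubernetes cluster with k3s ?"},
    ],
}

# A body like this would be POSTed to http://localhost:8000/openai/v1/chat/completions
body = json.dumps(payload)
print(body[:40])
```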



&lt;p&gt;I ask the LLM, served locally through Ollama, how to create a k3s cluster, and get this answer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;weaviate&lt;span class="o"&gt;)&lt;/span&gt; root@kubeai:~# python test-langchain.py

Let&lt;span class="s1"&gt;'s get you started with creating a Kubernetes cluster using K3s! 

**What is K3s?**

K3s is a lightweight, production-ready distribution of Kubernetes designed for simplicity and efficiency. It'&lt;/span&gt;s ideal &lt;span class="k"&gt;for&lt;/span&gt;:

&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;**&lt;/span&gt;Small to medium-sized deployments:&lt;span class="k"&gt;**&lt;/span&gt; It excels &lt;span class="k"&gt;in &lt;/span&gt;environments where resource constraints are a factor.
&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;**&lt;/span&gt;Edge computing:&lt;span class="k"&gt;**&lt;/span&gt; Deploying applications on devices with limited resources.
&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;**&lt;/span&gt;DevOps teams:&lt;span class="k"&gt;**&lt;/span&gt; Its ease of use makes it perfect &lt;span class="k"&gt;for &lt;/span&gt;rapid prototyping and deployment. 

&lt;span class="k"&gt;**&lt;/span&gt;Steps to Create a K3s Cluster&lt;span class="k"&gt;**&lt;/span&gt;

Here&lt;span class="s1"&gt;'s a breakdown of the process, along with explanations:

1. **Prerequisites:**
   * **Hardware:** You'&lt;/span&gt;ll need a machine &lt;span class="o"&gt;(&lt;/span&gt;physical or virtual&lt;span class="o"&gt;)&lt;/span&gt; capable of running Kubernetes. K3s is designed &lt;span class="k"&gt;for &lt;/span&gt;low-resource environments. 
   &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;**&lt;/span&gt;Networking:&lt;span class="k"&gt;**&lt;/span&gt; Ensure your machine has network connectivity and can reach the internet.
   &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;**&lt;/span&gt;Basic Linux knowledge:&lt;span class="k"&gt;**&lt;/span&gt; Familiarity with command-line tools like &lt;span class="sb"&gt;`&lt;/span&gt;curl&lt;span class="sb"&gt;`&lt;/span&gt;, &lt;span class="sb"&gt;`&lt;/span&gt;wget&lt;span class="sb"&gt;`&lt;/span&gt;, and basic terminal navigation will be helpful.

2. &lt;span class="k"&gt;**&lt;/span&gt;Installation:&lt;span class="k"&gt;**&lt;/span&gt;
   &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;**&lt;/span&gt;Download K3s:&lt;span class="k"&gt;**&lt;/span&gt; Visit the official K3s website &lt;span class="o"&gt;([&lt;/span&gt;https://k3s.io/]&lt;span class="o"&gt;(&lt;/span&gt;https://k3s.io/&lt;span class="o"&gt;))&lt;/span&gt; to download the latest version of K3s &lt;span class="k"&gt;for &lt;/span&gt;your operating system &lt;span class="o"&gt;(&lt;/span&gt;e.g., Linux, macOS&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; 
   &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;**&lt;/span&gt;Installation:&lt;span class="k"&gt;**&lt;/span&gt; Follow the instructions provided &lt;span class="k"&gt;in &lt;/span&gt;the K3s documentation to &lt;span class="nb"&gt;install &lt;/span&gt;K3s on your machine. You&lt;span class="s1"&gt;'ll likely need to use a terminal or command-line interface.

3. **Cluster Configuration:**
   * **Networking:** Configure your network settings for your K3s cluster (e.g., IP addresses, subnet masks). 
   * **Storage:** Decide how you want to store data within your cluster (e.g., local disk, persistent volumes).
   * **Security:** Implement security measures like TLS certificates and firewall rules if needed.

4. **Initialization:**
   * **Start the K3s Cluster:** Use the `k3s` command-line tool to start the cluster. 
   * **Access the Dashboard (Optional):** If you'&lt;/span&gt;re using a dashboard, follow the instructions provided by your chosen dashboard provider.

5. &lt;span class="k"&gt;**&lt;/span&gt;Deploying Applications:&lt;span class="k"&gt;**&lt;/span&gt;
   &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;**&lt;/span&gt;kubectl:&lt;span class="k"&gt;**&lt;/span&gt; Use the &lt;span class="sb"&gt;`&lt;/span&gt;kubectl&lt;span class="sb"&gt;`&lt;/span&gt; command-line tool to interact with your K3s cluster and deploy applications. 
   &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;**&lt;/span&gt;YAML Configuration:&lt;span class="k"&gt;**&lt;/span&gt; Create Kubernetes YAML files &lt;span class="o"&gt;(&lt;/span&gt;configuration files&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;your desired application deployments. 

&lt;span class="k"&gt;**&lt;/span&gt;Example Commands:&lt;span class="k"&gt;**&lt;/span&gt;


&lt;span class="c"&gt;# Install K3s on Ubuntu&lt;/span&gt;
curl &lt;span class="nt"&gt;-LO&lt;/span&gt; https://get.k3s.io/install.sh
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x install.sh
./install.sh

&lt;span class="c"&gt;# Start the cluster&lt;/span&gt;
k3s start


&lt;span class="k"&gt;**&lt;/span&gt;Key Advantages of K3s:&lt;span class="k"&gt;**&lt;/span&gt;

&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;**&lt;/span&gt;Simplicity:&lt;span class="k"&gt;**&lt;/span&gt; K3s is designed &lt;span class="k"&gt;for &lt;/span&gt;ease of use, making it a great choice &lt;span class="k"&gt;for &lt;/span&gt;beginners and experienced &lt;span class="nb"&gt;users &lt;/span&gt;alike. 
&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;**&lt;/span&gt;Lightweight:&lt;span class="k"&gt;**&lt;/span&gt; It&lt;span class="s1"&gt;'s incredibly efficient, requiring minimal resources to run. This makes it ideal for edge deployments or environments with limited hardware.
* **Fast Deployment:** K3s offers quick cluster setup times, allowing you to get your applications running faster.

**Additional Resources:**

* **K3s Website:** [https://k3s.io/](https://k3s.io/) 
* **K3s Documentation:** [https://docs.k3s.io/](https://docs.k3s.io/)

Let me know if you have any specific questions or want to dive deeper into a particular aspect of K3s!
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkg4i73fdzq055d8rat21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkg4i73fdzq055d8rat21.png" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For better visibility, I install Rancher Server on this k3s cluster with the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add rancher-latest https://releases.rancher.com/server-charts/latest

kubectl create namespace cattle-system

kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://github.com/cert-manager/cert-manager/releases/download/v1.16.2/cert-manager.crds.yaml

helm repo add jetstack https://charts.jetstack.io

helm repo update

helm &lt;span class="nb"&gt;install &lt;/span&gt;cert-manager jetstack/cert-manager &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; cert-manager &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--create-namespace&lt;/span&gt;

&lt;span class="s2"&gt;"rancher-latest"&lt;/span&gt; has been added to your repositories
namespace/cattle-system created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
&lt;span class="s2"&gt;"jetstack"&lt;/span&gt; has been added to your repositories
Hang tight &lt;span class="k"&gt;while &lt;/span&gt;we grab the latest from your chart repositories...
...Successfully got an update from the &lt;span class="s2"&gt;"weaviate"&lt;/span&gt; chart repository
...Successfully got an update from the &lt;span class="s2"&gt;"kubeai"&lt;/span&gt; chart repository
...Successfully got an update from the &lt;span class="s2"&gt;"jetstack"&lt;/span&gt; chart repository
...Successfully got an update from the &lt;span class="s2"&gt;"rancher-latest"&lt;/span&gt; chart repository
Update Complete. ⎈Happy Helming!⎈
NAME: cert-manager
LAST DEPLOYED: Mon Dec 30 15:00:45 2024
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager v1.16.2 has been deployed successfully!

In order to begin issuing certificates, you will need to &lt;span class="nb"&gt;set &lt;/span&gt;up a ClusterIssuer
or Issuer resource &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;example, by creating a &lt;span class="s1"&gt;'letsencrypt-staging'&lt;/span&gt; issuer&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;

More information on the different types of issuers and how to configure them
can be found &lt;span class="k"&gt;in &lt;/span&gt;our documentation:

https://cert-manager.io/docs/configuration/

For information on how to configure cert-manager to automatically provision
Certificates &lt;span class="k"&gt;for &lt;/span&gt;Ingress resources, take a look at the &lt;span class="sb"&gt;`&lt;/span&gt;ingress-shim&lt;span class="sb"&gt;`&lt;/span&gt;
documentation:

https://cert-manager.io/docs/usage/ingress/


&lt;span class="o"&gt;(&lt;/span&gt;weaviate&lt;span class="o"&gt;)&lt;/span&gt; root@kubeai:~# helm &lt;span class="nb"&gt;install &lt;/span&gt;rancher rancher-latest/rancher &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; cattle-system &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nb"&gt;hostname&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rancher.164.92.248.129.sslip.io &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;bootstrapPassword&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nochangeme
NAME: rancher
LAST DEPLOYED: Mon Dec 30 15:01:59 2024
NAMESPACE: cattle-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Rancher Server has been installed.

NOTE: Rancher may take several minutes to fully initialize. Please standby &lt;span class="k"&gt;while &lt;/span&gt;Certificates are being issued, Containers are started and the Ingress rule comes up.

Check out our docs at https://rancher.com/docs/

If you provided your own bootstrap password during installation, browse to https://rancher.164.92.248.129.sslip.io to get started.

If this is the first &lt;span class="nb"&gt;time &lt;/span&gt;you installed Rancher, get started by running this &lt;span class="nb"&gt;command &lt;/span&gt;and clicking the URL it generates:


&lt;span class="nb"&gt;echo &lt;/span&gt;https://rancher.164.92.248.129.sslip.io/dashboard/?setup&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get secret &lt;span class="nt"&gt;--namespace&lt;/span&gt; cattle-system bootstrap-secret &lt;span class="nt"&gt;-o&lt;/span&gt; go-template&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{{.data.bootstrapPassword|base64decode}}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;


To get just the bootstrap password on its own, run:


kubectl get secret &lt;span class="nt"&gt;--namespace&lt;/span&gt; cattle-system bootstrap-secret &lt;span class="nt"&gt;-o&lt;/span&gt; go-template&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'&lt;/span&gt;


Happy Containering!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
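&lt;p&gt;The go-template in the commands above does nothing more than base64-decode the &lt;code&gt;bootstrapPassword&lt;/code&gt; field of the secret; the same decoding can be reproduced with a few lines of Python (the encoded value below is illustrative, matching the &lt;code&gt;nochangeme&lt;/code&gt; bootstrap password set above):&lt;/p&gt;

```python
import base64

# Equivalent of kubectl's go-template '{{.data.bootstrapPassword|base64decode}}':
# Kubernetes stores Secret data base64-encoded, so decoding it yields the password.
# Illustrative value: the base64 encoding of "nochangeme" used in the install above.
encoded = "bm9jaGFuZ2VtZQ=="
password = base64.b64decode(encoded).decode()
print(password)  # nochangeme
```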



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8sm3x9rm8xv55zq9beue.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8sm3x9rm8xv55zq9beue.png" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I edit the KubeAI YAML values file to get access to a model served through Ollama:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;root@kubeai:~# cat &amp;lt;&amp;lt;EOF &amp;gt; models-helm-values.yaml&lt;/span&gt;
&lt;span class="na"&gt;catalog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;gemma2-2b-cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;minReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="s"&gt;EOF&lt;/span&gt;

&lt;span class="s"&gt;root@kubeai:~# helm upgrade kubeai-models kubeai/models \&lt;/span&gt;
    &lt;span class="s"&gt;-f ./models-helm-values.yaml&lt;/span&gt;

&lt;span class="s"&gt;Release "kubeai-models" has been upgraded. Happy Helming!&lt;/span&gt;
&lt;span class="na"&gt;NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubeai-models&lt;/span&gt;
&lt;span class="na"&gt;LAST DEPLOYED&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Mon Dec 30 15:14:49 &lt;/span&gt;&lt;span class="m"&gt;2024&lt;/span&gt;
&lt;span class="na"&gt;NAMESPACE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;STATUS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deployed&lt;/span&gt;
&lt;span class="na"&gt;REVISION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
&lt;span class="na"&gt;TEST SUITE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;None&lt;/span&gt;
&lt;span class="s"&gt;root@kubeai:~# helm ls&lt;/span&gt;
&lt;span class="s"&gt;NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION&lt;/span&gt;
&lt;span class="s"&gt;kubeai default 1 2024-12-30 13:47:00.713372779 +0000 UTC deployed kubeai-0.10.0 v0.12.0&lt;/span&gt;    
&lt;span class="s"&gt;kubeai-models default 2 2024-12-30 15:14:49.154697606 +0000 UTC deployed models-0.10.0 1.16.0&lt;/span&gt;     
&lt;span class="s"&gt;weaviate default 1 2024-12-30 13:51:48.357464607 +0000 UTC deployed weaviate-17.3.3 1.27.8&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With another local port forward, I access Open WebUI (bundled with KubeAI), an extensible, feature-rich, and user-friendly interface designed to run fully offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;weaviate&lt;span class="o"&gt;)&lt;/span&gt; root@kubeai:~# kubectl port-forward service/openwebui 10000:80 &lt;span class="nt"&gt;--address&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'0.0.0.0'&lt;/span&gt;
Forwarding from 0.0.0.0:10000 -&amp;gt; 8080
Handling connection &lt;span class="k"&gt;for &lt;/span&gt;10000
Handling connection &lt;span class="k"&gt;for &lt;/span&gt;10000
Handling connection &lt;span class="k"&gt;for &lt;/span&gt;10000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
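&lt;p&gt;Open WebUI is only one client of these models: since KubeAI exposes an OpenAI-compatible API, the same &lt;code&gt;gemma2-2b-cpu&lt;/code&gt; model can also be queried programmatically. A minimal stdlib-only sketch that builds (without sending) such a request; the base URL, port, and proxy path are assumptions tied to this setup, not values taken from the article:&lt;/p&gt;

```python
import json
from urllib import request

def build_chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build (without sending) an OpenAI-compatible chat-completions request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# Hypothetical base URL: the KubeAI proxy reached through a local port-forward.
req = build_chat_request("http://localhost:8000/openai", "gemma2-2b-cpu",
                         "How do I create a k3s cluster?")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` would return the usual OpenAI-style JSON with a `choices` list.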



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fattiyx4uciibdl19vjow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fattiyx4uciibdl19vjow.png" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4m1arrwafi8hxfhts00s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4m1arrwafi8hxfhts00s.png" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Accessing the Open WebUI user interface:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wf5gifjmbpms4lidpi4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wf5gifjmbpms4lidpi4.png" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn90dhpi638y0wjtg48i4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn90dhpi638y0wjtg48i4.png" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is the same small LLM used earlier that is put to work again here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwiujszyk0artcaz8pkt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwiujszyk0artcaz8pkt.png" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then run a query against a PDF document taken from the official k3s documentation through this interface …&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzw2vrf8h3jtxbt51d0fp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzw2vrf8h3jtxbt51d0fp.png" width="800" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklxp3f4y3odwtj74685w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklxp3f4y3odwtj74685w.png" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyp218jvkukxst4mfveg0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyp218jvkukxst4mfveg0.png" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article I used only CPU models (so the setup should even run on your laptop), which means KubeAI can run on your existing hardware, reducing the need to pay for hosted embeddings and generative models.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;KubeAI runs locally in your Kubernetes cluster, so your data never leaves your infrastructure. You can easily switch or update the models in use without changing your application code, and KubeAI autoscales models based on load.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffamolmuic9b22p2hck98.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffamolmuic9b22p2hck98.jpg" width="500" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, we have seen that combining KubeAI and Weaviate provides a robust, flexible solution for implementing RAG in a private, scalable way. This approach lets organizations benefit from advances in AI while keeping control over their data and infrastructure …&lt;/p&gt;

&lt;p&gt;To be continued!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>generativeaitools</category>
      <category>ai</category>
    </item>
    <item>
      <title>AIOps : Déboguer son cluster Kubernetes en utilisant l’intelligence artificielle générative via…</title>
      <dc:creator>Karim</dc:creator>
      <pubDate>Sat, 28 Dec 2024 14:57:15 +0000</pubDate>
      <link>https://forem.com/deep75/aiops-deboguer-son-cluster-kubernetes-en-utilisant-lintelligence-artificielle-generative-via-52hc</link>
      <guid>https://forem.com/deep75/aiops-deboguer-son-cluster-kubernetes-en-utilisant-lintelligence-artificielle-generative-via-52hc</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkh7c5xhq3ppoc9vd6sc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkh7c5xhq3ppoc9vd6sc.png" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Advances in generative artificial intelligence, and its simplified deployment through certain tools, are profoundly transforming the management of Kubernetes clusters via the concept of &lt;strong&gt;AIOps&lt;/strong&gt; (&lt;em&gt;Artificial Intelligence for IT Operations&lt;/em&gt;), coined by Gartner: a process in which artificial intelligence (AI) techniques are used, for example, to maintain an infrastructure. One of the areas where DevOps engineers and novice cluster operators often struggle is identifying, understanding, and resolving problems within a Kubernetes cluster …&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://fr.wikipedia.org/wiki/AIOps" rel="noopener noreferrer"&gt;AIOps - Wikipédia&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/fr/what-is/aiops/" rel="noopener noreferrer"&gt;Qu'est-ce que l'intelligence artificielle pour les opérations informatiques (AIOps) ? - Explication de l'intelligence artificielle pour les opérations informatiques - AWS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fr.wikipedia.org/wiki/Intelligence_artificielle_g%C3%A9n%C3%A9rative" rel="noopener noreferrer"&gt;Intelligence artificielle générative - Wikipédia&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, we will therefore explore how to configure and use K8sGPT, an open-source tool powered by generative AI, together with Ollama and the Falcon3 model, to identify and resolve problems in a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Farqjpdvdpmto52nxapf9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Farqjpdvdpmto52nxapf9.jpg" width="702" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To do this, I start from an Ubuntu 24.04 LTS instance on DigitalOcean:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zrqymvyql1hvx6kvipc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zrqymvyql1hvx6kvipc.png" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Installing the Docker engine locally on it …&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://get.docker.com | sh -
&lt;span class="c"&gt;# Executing docker install script, commit: 4c94a56999e10efcf48c5b8e3f6afea464f9108e&lt;/span&gt;
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; apt-get &lt;span class="nt"&gt;-qq&lt;/span&gt; update &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nv"&gt;DEBIAN_FRONTEND&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;noninteractive apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;-qq&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;ca-certificates curl &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null
Scanning processes...                                                                                                                                                                         
Scanning candidates...                                                                                                                                                                        
Scanning linux images...                                                                                                                                                                      
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 0755 &lt;span class="nt"&gt;-d&lt;/span&gt; /etc/apt/keyrings
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; &lt;span class="s2"&gt;"https://download.docker.com/linux/ubuntu/gpg"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/docker.asc
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nb"&gt;chmod &lt;/span&gt;a+r /etc/apt/keyrings/docker.asc
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu noble stable"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/apt/sources.list.d/docker.list
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; apt-get &lt;span class="nt"&gt;-qq&lt;/span&gt; update &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nv"&gt;DEBIAN_FRONTEND&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;noninteractive apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;-qq&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-ce-rootless-extras docker-buildx-plugin &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null
Scanning processes...                                                                                                                                                                         
Scanning candidates...                                                                                                                                                                        
Scanning linux images...                                                                                                                                                                      
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; docker version
Client: Docker Engine - Community
 Version: 27.4.1
 API version: 1.47
 Go version: go1.22.10
 Git commit: b9d17ea
 Built: Tue Dec 17 15:45:46 2024
 OS/Arch: linux/amd64
 Context: default

Server: Docker Engine - Community
 Engine:
  Version: 27.4.1
  API version: 1.47 &lt;span class="o"&gt;(&lt;/span&gt;minimum version 1.24&lt;span class="o"&gt;)&lt;/span&gt;
  Go version: go1.22.10
  Git commit: c710b88
  Built: Tue Dec 17 15:45:46 2024
  OS/Arch: linux/amd64
  Experimental: &lt;span class="nb"&gt;false
 &lt;/span&gt;containerd:
  Version: 1.7.24
  GitCommit: 88bf19b2105c8b17560993bee28a01ddc2f97182
 runc:
  Version: 1.2.2
  GitCommit: v1.2.2-0-g7cb3632
 docker-init:
  Version: 0.19.0
  GitCommit: de40ad0

&lt;span class="o"&gt;================================================================================&lt;/span&gt;

To run Docker as a non-privileged user, consider setting up the
Docker daemon &lt;span class="k"&gt;in &lt;/span&gt;rootless mode &lt;span class="k"&gt;for &lt;/span&gt;your user:

    dockerd-rootless-setuptool.sh &lt;span class="nb"&gt;install

&lt;/span&gt;Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.

To run the Docker daemon as a fully privileged service, but granting non-root
&lt;span class="nb"&gt;users &lt;/span&gt;access, refer to https://docs.docker.com/go/daemon-access/

WARNING: Access to the remote API on a privileged Docker daemon is equivalent
         to root access on the host. Refer to the &lt;span class="s1"&gt;'Docker daemon attack surface'&lt;/span&gt;
         documentation &lt;span class="k"&gt;for &lt;/span&gt;details: https://docs.docker.com/go/attack-surface/

&lt;span class="o"&gt;================================================================================&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And I can directly launch Ollama there via its official image to run large language models (LLMs) locally, using the premium Intel CPUs available on the Ubuntu instance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image" rel="noopener noreferrer"&gt;Ollama is now available as an official Docker image · Ollama Blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63otespo1eim2bc9f7sn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63otespo1eim2bc9f7sn.png" width="800" height="407"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; ollama:/root/.ollama &lt;span class="nt"&gt;-p&lt;/span&gt; 11434:11434 &lt;span class="nt"&gt;--name&lt;/span&gt; ollama ollama/ollama:latest
Unable to find image &lt;span class="s1"&gt;'ollama/ollama:latest'&lt;/span&gt; locally
latest: Pulling from ollama/ollama
6414378b6477: Pull &lt;span class="nb"&gt;complete 
&lt;/span&gt;9423a26b200c: Pull &lt;span class="nb"&gt;complete 
&lt;/span&gt;629da9618c4f: Pull &lt;span class="nb"&gt;complete 
&lt;/span&gt;00b71e3f044c: Pull &lt;span class="nb"&gt;complete 
&lt;/span&gt;Digest: sha256:18bfb1d605604fd53dcad20d0556df4c781e560ebebcd923454d627c994a0e37
Status: Downloaded newer image &lt;span class="k"&gt;for &lt;/span&gt;ollama/ollama:latest
7b09d9fcdacff4319e553c41f741a15266eb5a5ec745959363e7754c53a203ef

&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# docker ps &lt;span class="nt"&gt;-a&lt;/span&gt;
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b09d9fcdacf ollama/ollama:latest &lt;span class="s2"&gt;"/bin/ollama serve"&lt;/span&gt; About a minute ago Up About a minute 0.0.0.0:11434-&amp;gt;11434/tcp, :::11434-&amp;gt;11434/tcp ollama


&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# netstat &lt;span class="nt"&gt;-tunlp&lt;/span&gt;
Active Internet connections &lt;span class="o"&gt;(&lt;/span&gt;only servers&lt;span class="o"&gt;)&lt;/span&gt;
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name    
tcp 0 0 0.0.0.0:11434 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 46227/docker-proxy  
tcp 0 0 127.0.0.54:53 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 744/systemd-resolve 
tcp 0 0 127.0.0.53:53 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 744/systemd-resolve 
tcp6 0 0 :::22 :::&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 1/init              
tcp6 0 0 :::11434 :::&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 46235/docker-proxy  
udp 0 0 127.0.0.54:53 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; 744/systemd-resolve 
udp 0 0 127.0.0.53:53 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; 744/systemd-resolve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
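With the container publishing port 11434 (as the netstat output above shows), the Ollama HTTP API can be probed directly. A minimal sketch, assuming the `/api/version` route from the public Ollama HTTP API:

```shell
# Probe the local Ollama API published by the container above.
# /api/version is part of the public Ollama HTTP API; port 11434
# matches the -p mapping used in the docker run command.
if curl -sf http://localhost:11434/api/version >/dev/null 2>&1; then
  echo "ollama API: reachable"
else
  echo "ollama API: not reachable"
fi
```

If the check fails, `docker logs ollama` is the first place to look.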



&lt;p&gt;Next, I load the &lt;strong&gt;Falcon 3&lt;/strong&gt; model, developed by the Technology Innovation Institute (TII) in Abu Dhabi and available in Ollama’s library of large models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/blog/falcon3" rel="noopener noreferrer"&gt;Welcome to the Falcon 3 Family of Open Models!&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ollama.com/library/falcon3" rel="noopener noreferrer"&gt;falcon3&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; ollama ollama pull falcon3
pulling manifest 
pulling 3717a52b7aea... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 4.6 GB                         
pulling 803b5adc3448... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 218 B                         
pulling 58f83c52a4e3... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 13 KB                         
pulling 35e31ed4c388... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 101 B                         
pulling acb75345e14b... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 487 B                         
verifying sha256 digest 
writing manifest 
success 
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; ollama ollama list
NAME ID SIZE MODIFIED       
falcon3:latest 472ea1c89f64 4.6 GB 11 seconds ago
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdj090473n9bzx20drlo9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdj090473n9bzx20drlo9.jpg" width="735" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this stage I can install K8sGPT, a tool for scanning your Kubernetes clusters and for diagnosing and triaging issues in plain English. It has SRE experience codified into its analyzers and helps extract the most relevant information, enriching it with generative AI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;K8sGPT works in three steps&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extraction: retrieve the configuration details of all workloads deployed in the cluster.&lt;/li&gt;
&lt;li&gt;Filtering: a component called an "analyzer" filters out the relevant data.&lt;/li&gt;
&lt;li&gt;Generation: the filtered data is processed to generate insights and reports in plain English.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://k8sgpt.ai/" rel="noopener noreferrer"&gt;K8sGPT&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# curl &lt;span class="nt"&gt;-LO&lt;/span&gt; https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.48/k8sgpt_amd64.deb
  % Total % Received % Xferd Average Speed Time Time Time Current
                                 Dload Upload Total Spent Left Speed
  0 0 0 0 0 0 0 0 &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- 0
100 34.5M 100 34.5M 0 0 30.5M 0 0:00:01 0:00:01 &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- 30.5M
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# dpkg &lt;span class="nt"&gt;-i&lt;/span&gt; k8sgpt_amd64.deb 
&lt;span class="o"&gt;(&lt;/span&gt;Reading database ... 74447 files and directories currently installed.&lt;span class="o"&gt;)&lt;/span&gt;
Preparing to unpack k8sgpt_amd64.deb ...
Unpacking k8sgpt &lt;span class="o"&gt;(&lt;/span&gt;0.3.48&lt;span class="o"&gt;)&lt;/span&gt; over &lt;span class="o"&gt;(&lt;/span&gt;0.3.48&lt;span class="o"&gt;)&lt;/span&gt; ...
Setting up k8sgpt &lt;span class="o"&gt;(&lt;/span&gt;0.3.48&lt;span class="o"&gt;)&lt;/span&gt; ...

&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# k8sgpt
Kubernetes debugging powered by AI

Usage:
  k8sgpt &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;command&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;

Available Commands:
  analyze This &lt;span class="nb"&gt;command &lt;/span&gt;will find problems within your Kubernetes cluster
  auth Authenticate with your chosen backend
  cache For working with the cache the results of an analysis
  completion Generate the autocompletion script &lt;span class="k"&gt;for &lt;/span&gt;the specified shell
  custom-analyzer Manage a custom analyzer
  dump Creates a dumpfile &lt;span class="k"&gt;for &lt;/span&gt;debugging issues with K8sGPT
  filters Manage filters &lt;span class="k"&gt;for &lt;/span&gt;analyzing Kubernetes resources
  generate Generate Key &lt;span class="k"&gt;for &lt;/span&gt;your chosen backend &lt;span class="o"&gt;(&lt;/span&gt;opens browser&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;help &lt;/span&gt;Help about any &lt;span class="nb"&gt;command
  &lt;/span&gt;integration Integrate another tool into K8sGPT
  serve Runs k8sgpt as a server
  version Print the version number of k8sgpt

Flags:
      &lt;span class="nt"&gt;--config&lt;/span&gt; string Default config file &lt;span class="o"&gt;(&lt;/span&gt;/root/.config/k8sgpt/k8sgpt.yaml&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="nt"&gt;-h&lt;/span&gt;, &lt;span class="nt"&gt;--help&lt;/span&gt; &lt;span class="nb"&gt;help &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;k8sgpt
      &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt; string Path to a kubeconfig. Only required &lt;span class="k"&gt;if &lt;/span&gt;out-of-cluster.
      &lt;span class="nt"&gt;--kubecontext&lt;/span&gt; string Kubernetes context to use. Only required &lt;span class="k"&gt;if &lt;/span&gt;out-of-cluster.

Use &lt;span class="s2"&gt;"k8sgpt [command] --help"&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;more information about a command.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command line is then available to check which AI providers can be used locally on this instance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.k8sgpt.ai/getting-started/installation/" rel="noopener noreferrer"&gt;Installation&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# k8sgpt auth list
Default: 
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; openai
Active: 
Unused: 
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; openai
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; localai
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; ollama
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; azureopenai
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; cohere
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; amazonbedrock
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; amazonsagemaker
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; google
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; noopai
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; huggingface
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; googlevertexai
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; oci
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; ibmwatsonxai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
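The "analyzer" filtering step described earlier can also be inspected from the same CLI. A small sketch, assuming the `k8sgpt` binary installed above is on the PATH (the `filters` subcommand appears in the help output above; `list` is its listing action):

```shell
# List K8sGPT's active and available analyzers ("filters").
# The fallback message only covers machines where k8sgpt is absent.
if command -v k8sgpt >/dev/null 2>&1; then
  k8sgpt filters list
else
  echo "k8sgpt not installed"
fi
```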



&lt;p&gt;Ollama will be used as the AI backend provider for K8sGPT through LocalAI (which acts as a drop-in REST API, compatible with the OpenAI API specification, for local inference).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.k8sgpt.ai/reference/providers/backend/" rel="noopener noreferrer"&gt;Overview - k8sgpt&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://localai.io/" rel="noopener noreferrer"&gt;Overview&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
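That OpenAI-compatible surface can be checked directly against the running container. A minimal sketch, assuming Ollama's `/v1/chat/completions` compatibility route and the falcon3 model pulled above:

```shell
# Send one chat completion to Ollama's OpenAI-compatible endpoint
# (the same base URL later handed to k8sgpt as --baseurl).
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "falcon3",
        "messages": [{"role": "user", "content": "Say hello in one word."}]
      }' || echo "ollama endpoint not reachable"
```

A JSON response with a `choices` array indicates the compatibility layer works end to end.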

&lt;p&gt;Here is the command to configure K8sGPT with Ollama and the Falcon 3 model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# k8sgpt auth add &lt;span class="nt"&gt;--backend&lt;/span&gt; localai &lt;span class="nt"&gt;--model&lt;/span&gt; falcon3 &lt;span class="nt"&gt;--baseurl&lt;/span&gt; http://localhost:11434/v1
localai added to the AI backend provider list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
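After `k8sgpt auth add`, the backend settings are persisted in the config file mentioned in the help output (`/root/.config/k8sgpt/k8sgpt.yaml`). An illustrative sketch of the relevant section — key names may differ between k8sgpt versions, so check your own file:

```yaml
# Illustrative sketch only; exact keys vary by k8sgpt version.
ai:
  providers:
    - name: localai
      model: falcon3
      baseurl: http://localhost:11434/v1
```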



&lt;p&gt;I spin up a managed Kubernetes cluster on DigitalOcean via DigitalOcean Kubernetes (DOKS):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.digitalocean.com/products/kubernetes" rel="noopener noreferrer"&gt;DigitalOcean Managed Kubernetes | Starting at $12/mo.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faos1tnx61pmo5peitq3c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faos1tnx61pmo5peitq3c.png" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2p475kg5u09w817llvsw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2p475kg5u09w817llvsw.png" width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I retrieve the &lt;strong&gt;&lt;em&gt;Kubeconfig&lt;/em&gt;&lt;/strong&gt; file from this cluster and place it locally on the Ubuntu instance to use it with the &lt;strong&gt;&lt;em&gt;Kubectl&lt;/em&gt;&lt;/strong&gt; client:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak00u03odvqku9bwvhtr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak00u03odvqku9bwvhtr.png" width="800" height="528"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# curl &lt;span class="nt"&gt;-LO&lt;/span&gt; https://dl.k8s.io/release/v1.32.0/bin/linux/amd64/kubectl &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chmod&lt;/span&gt; +x ./kubectl &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;mv &lt;/span&gt;kubectl /usr/local/bin/ &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; kubectl
  % Total % Received % Xferd Average Speed Time Time Time Current
                                 Dload Upload Total Spent Left Speed
100 138 100 138 0 0 1000 0 &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- 1007
100 54.6M 100 54.6M 0 0 120M 0 &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- 120M
kubectl controls the Kubernetes cluster manager.

 Find more information at: https://kubernetes.io/docs/reference/kubectl/

Basic Commands &lt;span class="o"&gt;(&lt;/span&gt;Beginner&lt;span class="o"&gt;)&lt;/span&gt;:
  create Create a resource from a file or from stdin
  expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes service
  run Run a particular image on the cluster
  &lt;span class="nb"&gt;set &lt;/span&gt;Set specific features on objects

Basic Commands &lt;span class="o"&gt;(&lt;/span&gt;Intermediate&lt;span class="o"&gt;)&lt;/span&gt;:
  explain Get documentation &lt;span class="k"&gt;for &lt;/span&gt;a resource
  get Display one or many resources
  edit Edit a resource on the server
  delete Delete resources by file names, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout Manage the rollout of a resource
  scale Set a new size &lt;span class="k"&gt;for &lt;/span&gt;a deployment, replica &lt;span class="nb"&gt;set&lt;/span&gt;, or replication controller
  autoscale Auto-scale a deployment, replica &lt;span class="nb"&gt;set&lt;/span&gt;, stateful &lt;span class="nb"&gt;set&lt;/span&gt;, or replication controller

Cluster Management Commands:
  certificate Modify certificate resources
  cluster-info Display cluster information
  top Display resource &lt;span class="o"&gt;(&lt;/span&gt;CPU/memory&lt;span class="o"&gt;)&lt;/span&gt; usage
  cordon Mark node as unschedulable
  uncordon Mark node as schedulable
  drain Drain node &lt;span class="k"&gt;in &lt;/span&gt;preparation &lt;span class="k"&gt;for &lt;/span&gt;maintenance
  taint Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe Show details of a specific resource or group of resources
  logs Print the logs &lt;span class="k"&gt;for &lt;/span&gt;a container &lt;span class="k"&gt;in &lt;/span&gt;a pod
  attach Attach to a running container
  &lt;span class="nb"&gt;exec &lt;/span&gt;Execute a &lt;span class="nb"&gt;command &lt;/span&gt;&lt;span class="k"&gt;in &lt;/span&gt;a container
  port-forward Forward one or more &lt;span class="nb"&gt;local &lt;/span&gt;ports to a pod
  proxy Run a proxy to the Kubernetes API server
  &lt;span class="nb"&gt;cp &lt;/span&gt;Copy files and directories to and from containers
  auth Inspect authorization
  debug Create debugging sessions &lt;span class="k"&gt;for &lt;/span&gt;troubleshooting workloads and nodes
  events List events

Advanced Commands:
  diff Diff the live version against a would-be applied version
  apply Apply a configuration to a resource by file name or stdin
  patch Update fields of a resource
  replace Replace a resource by file name or stdin
  &lt;span class="nb"&gt;wait &lt;/span&gt;Experimental: Wait &lt;span class="k"&gt;for &lt;/span&gt;a specific condition on one or many resources
  kustomize Build a kustomization target from a directory or URL

Settings Commands:
  label Update the labels on a resource
  annotate Update the annotations on a resource
  completion Output shell completion code &lt;span class="k"&gt;for &lt;/span&gt;the specified shell &lt;span class="o"&gt;(&lt;/span&gt;bash, zsh, fish, or powershell&lt;span class="o"&gt;)&lt;/span&gt;

Subcommands provided by plugins:

Other Commands:
  api-resources Print the supported API resources on the server
  api-versions Print the supported API versions on the server, &lt;span class="k"&gt;in &lt;/span&gt;the form of &lt;span class="s2"&gt;"group/version"&lt;/span&gt;
  config Modify kubeconfig files
  plugin Provides utilities &lt;span class="k"&gt;for &lt;/span&gt;interacting with plugins
  version Print the client and server version information

Usage:
  kubectl &lt;span class="o"&gt;[&lt;/span&gt;flags] &lt;span class="o"&gt;[&lt;/span&gt;options]

Use &lt;span class="s2"&gt;"kubectl &amp;lt;command&amp;gt; --help"&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;more information about a given command.
Use &lt;span class="s2"&gt;"kubectl options"&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;a list of global command-line options &lt;span class="o"&gt;(&lt;/span&gt;applies to all commands&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;

&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# kubectl cluster-info
Kubernetes control plane is running at https://738af175-32d4-43e9-9e31-b7ae3058be3e.k8s.ondigitalocean.com
CoreDNS is running at https://738af175-32d4-43e9-9e31-b7ae3058be3e.k8s.ondigitalocean.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use &lt;span class="s1"&gt;'kubectl cluster-info dump'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
pool-kuaxj3k47-ejvl3 Ready &amp;lt;none&amp;gt; 6m40s v1.31.1 10.110.0.3 159.65.200.52 Debian GNU/Linux 12 &lt;span class="o"&gt;(&lt;/span&gt;bookworm&lt;span class="o"&gt;)&lt;/span&gt; 6.1.0-27-amd64 containerd://1.6.31
pool-kuaxj3k47-ejvl8 Ready &amp;lt;none&amp;gt; 6m38s v1.31.1 10.110.0.2 164.92.147.176 Debian GNU/Linux 12 &lt;span class="o"&gt;(&lt;/span&gt;bookworm&lt;span class="o"&gt;)&lt;/span&gt; 6.1.0-27-amd64 containerd://1.6.31
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# kubectl get po,svc &lt;span class="nt"&gt;-A&lt;/span&gt;
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/cilium-4gxht 1/1 Running 0 6m46s
kube-system pod/cilium-lw8gd 1/1 Running 0 6m48s
kube-system pod/coredns-c5c6457c-bnzfc 0/1 Running 0 36s
kube-system pod/coredns-c5c6457c-nz6gr 0/1 Running 0 36s
kube-system pod/cpc-bridge-proxy-ebpf-7ncbq 1/1 Running 0 55s
kube-system pod/cpc-bridge-proxy-ebpf-qth8w 1/1 Running 0 55s
kube-system pod/hubble-relay-67597fb8-kmlw5 1/1 Running 1 &lt;span class="o"&gt;(&lt;/span&gt;51s ago&lt;span class="o"&gt;)&lt;/span&gt; 8m40s
kube-system pod/hubble-ui-79957d9f7b-4n9kj 2/2 Running 0 74s
kube-system pod/konnectivity-agent-7ml7p 1/1 Running 0 61s
kube-system pod/konnectivity-agent-tnf8j 1/1 Running 0 61s
kube-system pod/kube-proxy-ebpf-4gt2z 1/1 Running 0 6m48s
kube-system pod/kube-proxy-ebpf-ztjql 1/1 Running 0 6m46s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
default service/kubernetes ClusterIP 10.108.32.1 &amp;lt;none&amp;gt; 443/TCP 9m54s
kube-system service/hubble-peer ClusterIP 10.108.54.62 &amp;lt;none&amp;gt; 443/TCP 8m40s
kube-system service/hubble-relay ClusterIP 10.108.41.20 &amp;lt;none&amp;gt; 80/TCP 8m40s
kube-system service/hubble-ui ClusterIP 10.108.52.164 &amp;lt;none&amp;gt; 80/TCP 8m40s
kube-system service/kube-dns ClusterIP 10.108.32.10 &amp;lt;none&amp;gt; 53/UDP,53/TCP,9153/TCP 36s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next comes the installation of Headlamp, an easy-to-use and extensible web interface that replaces the traditional Kubernetes dashboard.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://headlamp.dev/" rel="noopener noreferrer"&gt;Headlamp&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/headlamp-k8s/headlamp" rel="noopener noreferrer"&gt;GitHub - headlamp-k8s/headlamp: A Kubernetes web UI that is fully-featured, user-friendly and extensible&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Headlamp was created to combine the traditional functionality of other web UIs and dashboards (i.e. listing and viewing resources) with additional features.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kinvolk/headlamp/main/kubernetes-headlamp.yaml
service/headlamp created
deployment.apps/headlamp created
secret/headlamp-admin created
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# kubectl get po,svc &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system
NAME READY STATUS RESTARTS AGE
pod/cilium-4gxht 1/1 Running 0 11m
pod/cilium-lw8gd 1/1 Running 0 11m
pod/coredns-c5c6457c-bnzfc 1/1 Running 0 5m7s
pod/coredns-c5c6457c-nz6gr 1/1 Running 0 5m7s
pod/cpc-bridge-proxy-ebpf-7ncbq 1/1 Running 0 5m26s
pod/cpc-bridge-proxy-ebpf-qth8w 1/1 Running 0 5m26s
pod/csi-do-node-gxlmb 2/2 Running 0 4m24s
pod/csi-do-node-swqfv 2/2 Running 0 4m24s
pod/do-node-agent-7bgsh 1/1 Running 0 4m11s
pod/do-node-agent-hwt6l 1/1 Running 0 4m11s
pod/headlamp-7dfd97b98b-wmn66 1/1 Running 0 48s
pod/hubble-relay-67597fb8-kmlw5 1/1 Running 1 &lt;span class="o"&gt;(&lt;/span&gt;5m22s ago&lt;span class="o"&gt;)&lt;/span&gt; 13m
pod/hubble-ui-79957d9f7b-4n9kj 2/2 Running 0 5m45s
pod/konnectivity-agent-7ml7p 1/1 Running 0 5m32s
pod/konnectivity-agent-tnf8j 1/1 Running 0 5m32s
pod/kube-proxy-ebpf-4gt2z 1/1 Running 0 11m
pod/kube-proxy-ebpf-ztjql 1/1 Running 0 11m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
service/headlamp ClusterIP 10.108.38.247 &amp;lt;none&amp;gt; 80/TCP 48s
service/hubble-peer ClusterIP 10.108.54.62 &amp;lt;none&amp;gt; 443/TCP 13m
service/hubble-relay ClusterIP 10.108.41.20 &amp;lt;none&amp;gt; 80/TCP 13m
service/hubble-ui ClusterIP 10.108.52.164 &amp;lt;none&amp;gt; 80/TCP 13m
service/kube-dns ClusterIP 10.108.32.10 &amp;lt;none&amp;gt; 53/UDP,53/TCP,9153/TCP 5m7s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I expose it locally and retrieve the token needed to access its web interface:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# &lt;span class="nb"&gt;nohup &lt;/span&gt;kubectl port-forward &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system service/headlamp 8080:80 &amp;amp;
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# &lt;span class="nb"&gt;cat &lt;/span&gt;nohup.out 
Forwarding from 127.0.0.1:8080 -&amp;gt; 4466
Forwarding from &lt;span class="o"&gt;[&lt;/span&gt;::1]:8080 -&amp;gt; 4466

&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system create serviceaccount headlamp-admin
serviceaccount/headlamp-admin created

&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# kubectl create clusterrolebinding headlamp-admin &lt;span class="nt"&gt;--serviceaccount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kube-system:headlamp-admin &lt;span class="nt"&gt;--clusterrole&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cluster-admin
clusterrolebinding.rbac.authorization.k8s.io/headlamp-admin created

&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# kubectl create token headlamp-admin &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system
eyJhbGciOiJSUzI1NiIsImtpZCI6Ikd0UHltNHV6c1liSkY0VkhNRWFZMXJKYklqY1R6ckZrMDZQS281dEg3dUUifQ.eyJhdWQiOlsic3lzdGVtOmtvbm5lY3Rpdml0eS1zZXJ2ZXIiXSwiZXhwIjoxNzM1MzkyNDEwLCJpYXQiOjE3MzUzODg4MTAsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiNTVkNjI1ZTItNjA2Yi00MTNhLTk2OTgtODFmYjdjZDU4MWY4Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJoZWFkbGFtcC1hZG1pbiIsInVpZCI6IjdmOGQwMTU0LWRiYWQtNGU2MS04NTUzLWU1NWI3ZWU0ZjhlOSJ9fSwibmJmIjoxNzM1Mzg4ODEwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06aGVhZGxhbXAtYWRtaW4ifQ.jbrtuXS7uMP6HfwR3CbIbnpRTq4CDaacq0okwm_4tvmJNNcExi9-Dti3cGj1J3tteszpxVzurWPhrWgFlL4UkEacY9fD1TRH4GAZDCFldJ_jvyeaclzGeymrjEGAZ9TbBdoyuXtLeIVhApdICF1KNM-s8mfr1oOREDwlR9HzzrhoECozYxVS9uM1WIEZpum4FwMEl6cKPqOyNx1Rn5MtKPcc87JyK0FxuXzg9WC-cPSNOxu_rUFrZYHyrVapCDpl_XLymD3pFUUuB8XPVidVXcVOthH1Djwm8TRE6aAD4XlkHTcyTYchvN_CpOI2JQ6DVY60unSU8nq2pxfqLC6G2Q
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7l2s2c29z8ubvf7m8a5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7l2s2c29z8ubvf7m8a5.png" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before running an analysis with K8sGPT, let’s introduce a problem into the Kubernetes cluster to simulate a real-world situation. You can use the sample broken deployments available in repositories such as Robusta’s:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/robusta-dev/kubernetes-demos" rel="noopener noreferrer"&gt;GitHub - robusta-dev/kubernetes-demos: YAMLs for creating Kubernetes errors and other scenarios&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is an example, deploying a broken Pod via:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;payment-processing-worker&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;payment-processing-worker&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;payment-processing-worker&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;payment-processing-container&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bash&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;if&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;[[-z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;${DEPLOY_ENV}&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;]];&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;then&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Environment&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;variable&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;DEPLOY_ENV&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;is&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;undefined&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;else&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;while&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;do&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;hello;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;10;done;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;fi"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="s"&gt;base) root@k8sgpt:~# kubectl apply -f https://raw.githubusercontent.com/robusta-dev/kubernetes-demos/main/crashpod/broken.yaml&lt;/span&gt;
&lt;span class="s"&gt;deployment.apps/payment-processing-worker created&lt;/span&gt;

&lt;span class="s"&gt;(base) root@k8sgpt:~# kubectl get po&lt;/span&gt;
&lt;span class="s"&gt;NAME READY STATUS RESTARTS AGE&lt;/span&gt;
&lt;span class="s"&gt;payment-processing-worker-747ccfb9db-dzjqx 0/1 CrashLoopBackOff 1 (11s ago) 17s&lt;/span&gt;

&lt;span class="s"&gt;(base) root@k8sgpt:~# kubectl logs po/payment-processing-worker-747ccfb9db-dzjqx&lt;/span&gt;
&lt;span class="s"&gt;Environment variable DEPLOY_ENV is undefined&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It did indeed crash …&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadj78ma4diwfdqrt0wx2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadj78ma4diwfdqrt0wx2.png" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Running the analysis with JSON output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;(base)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;root@k&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="err"&gt;sgpt:~#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;k&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="err"&gt;sgpt&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;analyze&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-o&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;json&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;--explain&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;--filter=Pod&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;--backend&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;localai&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;jq&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"provider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"localai"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"errors"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ProblemDetected"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"problems"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"results"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"kind"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Pod"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"default/payment-processing-worker-747ccfb9db-dzjqx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Text"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"the last termination reason is Completed container=payment-processing-container pod=payment-processing-worker-747ccfb9db-dzjqx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"KubernetesDoc"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Sensitive"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"details"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Error: The pod &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;payment-processing-worker-747ccfb9db-dzjqx&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; has completed its execution with a &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Completed&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; termination reason, indicating the container &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;payment-processing-container&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; has finished successfully.&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;Solution: Verify the logs for the container to ensure data integrity, then check related services for expected outcomes; if successful, mark the pod as ready in the cluster."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"parentObject"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Deployment/payment-processing-worker"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or in plain-text mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# k8sgpt analyze &lt;span class="nt"&gt;--explain&lt;/span&gt; &lt;span class="nt"&gt;--backend&lt;/span&gt; localai &lt;span class="nt"&gt;--with-doc&lt;/span&gt;
 100% |████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| &lt;span class="o"&gt;(&lt;/span&gt;1/1, 7960 it/s&lt;span class="o"&gt;)&lt;/span&gt;        
AI Provider: localai

0: Pod default/payment-processing-worker-747ccfb9db-dzjqx&lt;span class="o"&gt;(&lt;/span&gt;Deployment/payment-processing-worker&lt;span class="o"&gt;)&lt;/span&gt;
- Error: the last termination reason is Completed &lt;span class="nv"&gt;container&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;payment-processing-container &lt;span class="nv"&gt;pod&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;payment-processing-worker-747ccfb9db-dzjqx
Error: The pod &lt;span class="s2"&gt;"payment-processing-worker-747ccfb9db-dzjqx"&lt;/span&gt; has completed its execution with a &lt;span class="s2"&gt;"Completed"&lt;/span&gt; termination reason, indicating the container &lt;span class="s2"&gt;"payment-processing-container"&lt;/span&gt; has finished successfully.

Solution: Verify the logs &lt;span class="k"&gt;for &lt;/span&gt;the container to ensure data integrity, &lt;span class="k"&gt;then &lt;/span&gt;check related services &lt;span class="k"&gt;for &lt;/span&gt;expected outcomes&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;if &lt;/span&gt;successful, mark the pod as ready &lt;span class="k"&gt;in &lt;/span&gt;the cluster.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As this output shows, K8sGPT identified and flagged the problematic pod, and also provided guidance on steps you could take to understand and resolve the issue.&lt;/p&gt;
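
&lt;p&gt;In a CI pipeline or a cron job, this JSON output lends itself to simple automation. A minimal sketch, in which the report file path and the gating logic are hypothetical and not part of k8sgpt itself:&lt;/p&gt;

```shell
# Abridged stand-in for the JSON that "k8sgpt analyze -o json" emits,
# written to a file as if it were the output of a real run (illustration only)
printf '%s\n' '{ "status": "ProblemDetected", "problems": 1 }' > /tmp/k8sgpt-report.json

# Gate on the status field: any other status is treated as healthy.
# A naive grep check; jq would be more robust for real reports.
if grep -q '"status": "ProblemDetected"' /tmp/k8sgpt-report.json; then
  echo "k8sgpt found issues in the cluster"
else
  echo "cluster looks healthy"
fi
```

&lt;p&gt;The same check can fail a pipeline stage instead of just printing, turning the analysis into an automated health gate.&lt;/p&gt;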

&lt;p&gt;Another example, with this problematic Nginx Pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;inventory-management-api&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;wge&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-O"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/work-dir/index.html"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://home.robusta.dev&lt;/span&gt;

&lt;span class="s"&gt;(base) root@k8sgpt:~# kubectl apply -f https://raw.githubusercontent.com/robusta-dev/kubernetes-demos/refs/heads/main/crashloop_backoff/create_crashloop_backoff.yaml&lt;/span&gt;
&lt;span class="s"&gt;pod/inventory-management-api created&lt;/span&gt;

&lt;span class="s"&gt;(base) root@k8sgpt:~# kubectl get po&lt;/span&gt;
&lt;span class="s"&gt;NAME READY STATUS RESTARTS AGE&lt;/span&gt;
&lt;span class="s"&gt;inventory-management-api 0/1 ContainerCreating 0 5s&lt;/span&gt;

&lt;span class="s"&gt;(base) root@k8sgpt:~# kubectl get po&lt;/span&gt;
&lt;span class="s"&gt;NAME READY STATUS RESTARTS AGE&lt;/span&gt;
&lt;span class="s"&gt;inventory-management-api 0/1 RunContainerError 1 (1s ago) 10s&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5oufcpajkqntnz390nxk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5oufcpajkqntnz390nxk.png" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ufq9vgoeypy1br2064x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ufq9vgoeypy1br2064x.png" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsatiwyy30j9r8a9gk0xo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsatiwyy30j9r8a9gk0xo.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35mg92yvgooxdejiebtj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35mg92yvgooxdejiebtj.png" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And a new analysis pinpoints the problematic Pod …&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# k8sgpt analyze &lt;span class="nt"&gt;--explain&lt;/span&gt; &lt;span class="nt"&gt;--backend&lt;/span&gt; localai &lt;span class="nt"&gt;--with-doc&lt;/span&gt;
 100% |████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| &lt;span class="o"&gt;(&lt;/span&gt;1/1, 2 it/min&lt;span class="o"&gt;)&lt;/span&gt;         
AI Provider: localai

0: Pod default/inventory-management-api&lt;span class="o"&gt;()&lt;/span&gt;
- Error: the last termination reason is StartError &lt;span class="nv"&gt;container&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nv"&gt;pod&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;inventory-management-api
Error: The Kubernetes error indicates that there was a StartError issue with the nginx container &lt;span class="k"&gt;for &lt;/span&gt;the pod named inventory-management-api.

Solution: 
1. Check the nginx configuration file &lt;span class="k"&gt;for &lt;/span&gt;syntax errors.
2. Ensure all required resources and permissions are correctly set.
3. Verify network accessibility within the pod.
4. Confirm proper image pull secrets &lt;span class="k"&gt;if &lt;/span&gt;using Docker images.
5. Review any recent changes to the deployment or service configurations.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this other example, it may detect no problem at all, as with this false-positive simulation using busybox:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;batch/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Job&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;java-api-checker&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;java-beans&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'Java&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Network&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Exception:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;All&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;host(s)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;tried&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;db&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;query&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;failed&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(tried:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;prod-db:3333)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;no&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;available&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;connection&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;and&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;the&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;queue&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;has&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;reached&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;its&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span 
class="s"&gt;max&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;size&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;256&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;All&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;host(s)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;tried&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;db&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;query&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;failed&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(tried:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;prod-db:3333)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;no&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;available&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;connection&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;and&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;the&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;queue&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;has&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;reached&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;its&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;max&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;size&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;256&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span 
class="s"&gt;All&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;host(s)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;tried&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;db&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;query&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;failed&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(tried:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;prod-db:3333)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;no&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;available&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;connection&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;and&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;the&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;queue&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;has&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;reached&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;its&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;max&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;size&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;256&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;All&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;host(s)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;tried&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span 
class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;db&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;query&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;failed&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(tried:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;prod-db:3333)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;no&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;available&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;connection&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;and&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;the&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;queue&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;has&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;reached&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;its&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;max&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;size&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;256';&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;60;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;exit&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Never&lt;/span&gt;
  &lt;span class="na"&gt;backoffLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;

&lt;span class="s"&gt;(base) root@k8sgpt:~# kubectl delete -f https://raw.githubusercontent.com/robusta-dev/kubernetes-demos/refs/heads/main/crashloop_backoff/create_crashloop_backoff.yaml&lt;/span&gt;
&lt;span class="s"&gt;pod "inventory-management-api" deleted&lt;/span&gt;
&lt;span class="s"&gt;(base) root@k8sgpt:~# kubectl apply -f https://raw.githubusercontent.com/robusta-dev/kubernetes-demos/refs/heads/main/job_failure/job_crash.yaml&lt;/span&gt;
&lt;span class="s"&gt;job.batch/java-api-checker created&lt;/span&gt;

&lt;span class="s"&gt;(base) root@k8sgpt:~# kubectl get po&lt;/span&gt;
&lt;span class="s"&gt;NAME READY STATUS RESTARTS AGE&lt;/span&gt;
&lt;span class="s"&gt;java-api-checker-5s6dc 1/1 Running 0 7s&lt;/span&gt;
&lt;span class="s"&gt;(base) root@k8sgpt:~# kubectl logs po/java-api-checker-5s6dc&lt;/span&gt;
&lt;span class="na"&gt;Java Network Exception&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
&lt;span class="na"&gt;All host(s) tried for db query failed (tried&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prod-db:3333) - no available connection and the queue has reached its max size 256&lt;/span&gt; 
&lt;span class="na"&gt;All host(s) tried for db query failed (tried&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prod-db:3333) - no available connection and the queue has reached its max size 256&lt;/span&gt; 
&lt;span class="na"&gt;All host(s) tried for db query failed (tried&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prod-db:3333) - no available connection and the queue has reached its max size 256&lt;/span&gt; 
&lt;span class="na"&gt;All host(s) tried for db query failed (tried&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prod-db:3333) - no available connection and the queue has reached its max size &lt;/span&gt;&lt;span class="m"&gt;256&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkt0ahcbrtd8qckhqx6qj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkt0ahcbrtd8qckhqx6qj.png" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfftj6cxupvsrzcc3es5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfftj6cxupvsrzcc3es5.png" width="800" height="397"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; root@k8sgpt:~# k8sgpt analyze &lt;span class="nt"&gt;--explain&lt;/span&gt; &lt;span class="nt"&gt;--backend&lt;/span&gt; localai &lt;span class="nt"&gt;--with-doc&lt;/span&gt;
AI Provider: localai

No problems detected
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a more complete integration, you can install the K8sGPT operator in your Kubernetes cluster. The operator continuously watches for problems in the cluster and generates insights that you can consult by querying the operator's custom resource (CR).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/k8sgpt-ai/k8sgpt-operator" rel="noopener noreferrer"&gt;GitHub - k8sgpt-ai/k8sgpt-operator: Automatic SRE Superpowers within your Kubernetes cluster&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm repo add k8sgpt https://charts.k8sgpt.ai/
$ helm repo update
$ helm install release k8sgpt/k8sgpt-operator -n k8sgpt-operator-system --create-namespace

# Create the K8sGPT custom resource (the operator itself was installed by Helm above)
$ kubectl apply -n k8sgpt-operator-system -f - &amp;lt;&amp;lt; EOF
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-ollama
spec:
  ai:
    enabled: true
    model: falcon3
    backend: localai
    baseUrl: http://localhost:11434/v1
  noCache: false
  filters: ["Pod"]
  repository: ghcr.io/k8sgpt-ai/k8sgpt
  version: v0.3.48
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
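
&lt;p&gt;Once the operator is running, its findings are exposed as Result custom resources that you can list with kubectl. A guarded sketch, so it degrades gracefully when no cluster is reachable; the namespace matches the Helm install above, but resource names will differ in your cluster:&lt;/p&gt;

```shell
# Query the operator's results only if its CRD is actually registered;
# otherwise report that no cluster is available (guarded sketch)
if kubectl get crd results.core.k8sgpt.ai >/dev/null 2>/dev/null; then
  result_status="cluster available"
  # List the Result custom resources produced by the operator
  kubectl get results -n k8sgpt-operator-system
else
  result_status="no cluster"
  echo "no cluster or k8sgpt-operator CRD available"
fi
```

&lt;p&gt;Each Result carries the same error text and suggested solution seen in the CLI output, which makes it easy to forward them to dashboards or alerting.&lt;/p&gt;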



&lt;p&gt;&lt;a href="https://www.suse.com/c/rancher_blog/debugging-your-rancher-kubernetes-cluster-the-genai-way-with-k8sgpt-ollama-rancher-desktop/" rel="noopener noreferrer"&gt;Debugging your Rancher Kubernetes Cluster the GenAI Way w...&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is also possible to analyze several Kubernetes clusters by specifying the path to the relevant kubeconfig file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;k8sgpt analyze &lt;span class="nt"&gt;--explain&lt;/span&gt; &lt;span class="nt"&gt;--backend&lt;/span&gt; localai &lt;span class="nt"&gt;--with-doc&lt;/span&gt; &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt; &amp;lt;chemin vers le fichier Kubeconfig&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
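
&lt;p&gt;To scan a whole fleet this way, you can loop over a directory of kubeconfig files. In the sketch below, a stub function stands in for the real k8sgpt invocation so that only the loop logic runs; the directory, file names, and helper name are hypothetical:&lt;/p&gt;

```shell
# Stub standing in for the real k8sgpt call, so the loop itself is runnable;
# in practice, replace the echo with the actual command
analyze_cluster() {
  echo "would run: k8sgpt analyze --explain --backend localai --with-doc --kubeconfig $1"
}

# Hypothetical directory holding one kubeconfig per cluster
mkdir -p /tmp/kubeconfigs
touch /tmp/kubeconfigs/dev.yaml /tmp/kubeconfigs/prod.yaml

# Analyze each cluster in turn
for cfg in /tmp/kubeconfigs/*.yaml; do
  analyze_cluster "$cfg"
done
```

&lt;p&gt;Running the loop sequentially keeps the load on the Ollama backend predictable, which matters when a single GPU serves all the analyses.&lt;/p&gt;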



&lt;p&gt;The operator will look for problems in the cluster and generate analysis results. Depending on the power of your machine (GPU resources are needed to speed up Ollama's response times), it takes the operator a while to call the LLM and generate the insights …&lt;/p&gt;

&lt;p&gt;To conclude, K8sGPT combined with Ollama offers a powerful solution for debugging and managing Kubernetes clusters efficiently. This integration uses artificial intelligence to provide clear insights and recommendations for resolving problems, making life easier for cluster operators. By following these steps, you can set up an automated, AI-based diagnostic solution for your Kubernetes environment …&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bzmniacbwvd6cke17rt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bzmniacbwvd6cke17rt.jpg" width="750" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To be continued!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>genai</category>
      <category>docker</category>
    </item>
    <item>
      <title>OpenStack on Kubernetes made simple with Canonical MicroStack and Pulumi …</title>
      <dc:creator>Karim</dc:creator>
      <pubDate>Tue, 24 Dec 2024 12:48:32 +0000</pubDate>
      <link>https://forem.com/deep75/openstack-sur-kubernetes-en-toute-simplicite-avec-canonical-microstack-et-pulumi--1mi6</link>
      <guid>https://forem.com/deep75/openstack-sur-kubernetes-en-toute-simplicite-avec-canonical-microstack-et-pulumi--1mi6</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3onpws5qm0xj3130udyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3onpws5qm0xj3130udyd.png" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;MicroStack is a solution developed by Canonical that uses Snap, Juju and Kubernetes to deploy and manage OpenStack. It considerably reduces the complexity traditionally associated with managing OpenStack clouds. As I described earlier in several articles, it has since evolved to embed Canonical Kubernetes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/deep75/k8s-snap-canonical-kubernetes-pour-un-deploiement-simple-et-rapide-dun-cluster-k8s--4hi5"&gt;k8s-snap (Canonical Kubernetes) pour un déploiement simple et rapide d’un cluster k8s …&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://deep75.medium.com/un-cluster-openstack-sur-sa-station-en-une-minute-avec-microstack-8da478196299" rel="noopener noreferrer"&gt;Un cluster OpenStack sur sa station en une minute avec Microstack …&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://deep75.medium.com/mise-en-oeuvre-dun-service-mesh-avec-microstack-linkerd-2-x-et-k3s-23242310246c" rel="noopener noreferrer"&gt;Mise en oeuvre d’un Service Mesh avec MicroStack, Linkerd 2.x et K3S …&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  Advantages of MicroStack
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://canonical.com/microstack" rel="noopener noreferrer"&gt;OpenStack on Kubernetes | Ubuntu | Canonical&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simplified deployment&lt;/strong&gt;: MicroStack offers a streamlined installation process, able to bring up a cloud deployment in fewer than 6 commands, with an average deployment time of 40 minutes. This makes it particularly well suited to organizations looking to quickly set up or extend a cloud environment without deep in-house expertise.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility and customization&lt;/strong&gt;: MicroStack is highly flexible, supporting a wide range of plug-ins and extensions, which lets companies build a cloud environment precisely aligned with their operational goals.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hands-on deployment of MicroStack on a single node, following these technical recommendations:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://canonical.com/microstack/docs/enterprise-reqs" rel="noopener noreferrer"&gt;Enterprise requirements | Canonical&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnfyhzwr7q7q7ix69ekw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnfyhzwr7q7q7ix69ekw.png" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I start from a dedicated DigitalOcean instance that allows nested virtualization:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5ywi3g3f2l3kvo5oed4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5ywi3g3f2l3kvo5oed4.png" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To install MicroStack on a single node, you can follow these steps. First, create a non-root user with sudo enabled:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@microstack:~# useradd &lt;span class="nt"&gt;-s&lt;/span&gt; /bin/bash &lt;span class="nt"&gt;-d&lt;/span&gt; /home/ubuntu &lt;span class="nt"&gt;-m&lt;/span&gt; ubuntu
root@microstack:~# &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu ALL=(ALL) NOPASSWD: ALL"&lt;/span&gt; | &lt;span class="nb"&gt;tee&lt;/span&gt; /etc/sudoers.d/ubuntu
ubuntu &lt;span class="nv"&gt;ALL&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;ALL&lt;span class="o"&gt;)&lt;/span&gt; NOPASSWD: ALL
root@microstack:~# &lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; .ssh/ /home/ubuntu/
root@microstack:~# &lt;span class="nb"&gt;chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; ubuntu:ubuntu /home/ubuntu/.ssh/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then install snapd:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;snapd
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Suggested packages:
  zenity | kdialog
The following packages will be upgraded:
  snapd
1 upgraded, 0 newly installed, 0 to remove and 195 not upgraded.
Need to get 30.0 MB of archives.
After this operation, 5513 kB of additional disk space will be used.
Get:1 http://mirrors.digitalocean.com/ubuntu noble-updates/main amd64 snapd amd64 2.66.1+24.04 &lt;span class="o"&gt;[&lt;/span&gt;30.0 MB]
Fetched 30.0 MB &lt;span class="k"&gt;in &lt;/span&gt;0s &lt;span class="o"&gt;(&lt;/span&gt;93.2 MB/s&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;(&lt;/span&gt;Reading database ... 71895 files and directories currently installed.&lt;span class="o"&gt;)&lt;/span&gt;
Preparing to unpack .../snapd_2.66.1+24.04_amd64.deb ...
Unpacking snapd &lt;span class="o"&gt;(&lt;/span&gt;2.66.1+24.04&lt;span class="o"&gt;)&lt;/span&gt; over &lt;span class="o"&gt;(&lt;/span&gt;2.63+24.04&lt;span class="o"&gt;)&lt;/span&gt; ...
Setting up snapd &lt;span class="o"&gt;(&lt;/span&gt;2.66.1+24.04&lt;span class="o"&gt;)&lt;/span&gt; ...
Installing new version of config file /etc/apparmor.d/usr.lib.snapd.snap-confine.real ...
snapd.failure.service is a disabled or a static unit not running, not starting it.
snapd.snap-repair.service is a disabled or a static unit not running, not starting it.
Processing triggers &lt;span class="k"&gt;for &lt;/span&gt;dbus &lt;span class="o"&gt;(&lt;/span&gt;1.14.10-4ubuntu4&lt;span class="o"&gt;)&lt;/span&gt; ...
Processing triggers &lt;span class="k"&gt;for &lt;/span&gt;man-db &lt;span class="o"&gt;(&lt;/span&gt;2.12.0-4build2&lt;span class="o"&gt;)&lt;/span&gt; ...
Scanning processes...                                                                                                                                                                         
Scanning linux images...                                                                                                                                                                      

Running kernel seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor &lt;span class="o"&gt;(&lt;/span&gt;qemu&lt;span class="o"&gt;)&lt;/span&gt; binaries on this host.
ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; snapd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install the latest version of MicroStack with snapd:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://snapcraft.io/openstack" rel="noopener noreferrer"&gt;Install Canonical MicroStack on Linux | Snap Store&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;snap &lt;span class="nb"&gt;install &lt;/span&gt;openstack &lt;span class="nt"&gt;--channel&lt;/span&gt; 2024.1/beta
2024-12-24T08:58:22Z INFO Waiting &lt;span class="k"&gt;for &lt;/span&gt;automatic snapd restart...
openstack &lt;span class="o"&gt;(&lt;/span&gt;2024.1/beta&lt;span class="o"&gt;)&lt;/span&gt; 2024.1 from Canonical✓ installed
ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;snap list
Name       Version   Rev    Tracking       Publisher   Notes
core24     20240920  609    latest/stable  canonical✓  base
openstack  2024.1    637    2024.1/beta    canonical✓  -
snapd      2.66.1    23258  latest/stable  canonical✓  snapd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;MicroStack uses Sunbeam to generate a script that makes sure the machine has all the required dependencies and is correctly configured for OpenStack. Run it directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;sunbeam prepare-node-script | bash &lt;span class="nt"&gt;-x&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; newgrp snap_daemon
++ lsb_release &lt;span class="nt"&gt;-sc&lt;/span&gt;
+ &lt;span class="s1"&gt;'['&lt;/span&gt; noble &lt;span class="s1"&gt;'!='&lt;/span&gt; noble &lt;span class="s1"&gt;']'&lt;/span&gt;
++ &lt;span class="nb"&gt;whoami&lt;/span&gt;
+ &lt;span class="nv"&gt;USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ubuntu
++ &lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;
+ &lt;span class="s1"&gt;'['&lt;/span&gt; 1000 &lt;span class="nt"&gt;-eq&lt;/span&gt; 0 &lt;span class="nt"&gt;-o&lt;/span&gt; ubuntu &lt;span class="o"&gt;=&lt;/span&gt; root &lt;span class="s1"&gt;']'&lt;/span&gt;
+ &lt;span class="nv"&gt;SUDO_ASKPASS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/bin/false
+ &lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-A&lt;/span&gt; &lt;span class="nb"&gt;whoami&lt;/span&gt;
+ &lt;span class="nb"&gt;sudo grep&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; ubuntu /etc/sudoers /etc/sudoers.d
+ &lt;span class="nb"&gt;grep &lt;/span&gt;NOPASSWD:ALL
+ &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'ubuntu ALL=(ALL) NOPASSWD:ALL'&lt;/span&gt;
+ &lt;span class="nb"&gt;sudo install&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 440 /tmp/90-ubuntu-sudo-access /etc/sudoers.d/90-ubuntu-sudo-access
+ &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; /tmp/90-ubuntu-sudo-access
+ dpkg &lt;span class="nt"&gt;-s&lt;/span&gt; openssh-server
+ dpkg &lt;span class="nt"&gt;-s&lt;/span&gt; curl
+ &lt;span class="nb"&gt;sudo &lt;/span&gt;usermod &lt;span class="nt"&gt;--append&lt;/span&gt; &lt;span class="nt"&gt;--groups&lt;/span&gt; snap_daemon ubuntu
+ &lt;span class="s1"&gt;'['&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; /home/ubuntu/.ssh/id_rsa &lt;span class="s1"&gt;']'&lt;/span&gt;
+ ssh-keygen &lt;span class="nt"&gt;-b&lt;/span&gt; 4096 &lt;span class="nt"&gt;-f&lt;/span&gt; /home/ubuntu/.ssh/id_rsa &lt;span class="nt"&gt;-t&lt;/span&gt; rsa &lt;span class="nt"&gt;-N&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt;
Generating public/private rsa key pair.
Your identification has been saved &lt;span class="k"&gt;in&lt;/span&gt; /home/ubuntu/.ssh/id_rsa
Your public key has been saved &lt;span class="k"&gt;in&lt;/span&gt; /home/ubuntu/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:NTnupee3yat23zAuoYy5U6VoXqK+JaU5R3L36nwrauM ubuntu@microstack
The key&lt;span class="s1"&gt;'s randomart image is:
+---[RSA 4096]----+
| |
| . |
| = |
| o o. |
| . S.oo. |
| B+o++. |
| =+oBo.o.o |
| .=B.+++oo+.|
| .o+E*+++**+o|
+----[SHA256]-----+
+ cat /home/ubuntu/.ssh/id_rsa.pub
++ hostname --all-ip-addresses
+ ssh-keyscan -H 134.209.225.128 10.19.0.5 10.114.0.2
# 134.209.225.128:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 134.209.225.128:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 134.209.225.128:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 134.209.225.128:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 134.209.225.128:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.19.0.5:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.19.0.5:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.19.0.5:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.19.0.5:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.19.0.5:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.114.0.2:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.114.0.2:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.114.0.2:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
# 10.114.0.2:22 SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.4
10.114.0.2: Connection closed by remote host
+ grep -E '&lt;/span&gt;HTTPS?_PROXY&lt;span class="s1"&gt;' /etc/environment
+ curl -s -m 10 -x '' api.charmhub.io
+ sudo snap connect openstack:ssh-keys
+ sudo snap install --channel 3.6/stable juju
juju (3.6/stable) 3.6.1 from Canonical✓ installed
+ mkdir -p /home/ubuntu/.local/share
+ mkdir -p /home/ubuntu/.config/openstack
++ snap list openstack --unicode=never --color=never
++ grep openstack
+ snap_output='&lt;/span&gt;openstack 2024.1 637 2024.1/beta canonical&lt;span class="k"&gt;**&lt;/span&gt; -&lt;span class="s1"&gt;'
++ awk -v col=4 '&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;print &lt;span class="nv"&gt;$col&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="s1"&gt;'
+ track=2024.1/beta
+ [[ 2024.1/beta =~ edge ]]
+ [[ 2024.1/beta == \- ]]
+ [[ 2024.1/beta =~ beta ]]
+ risk=beta
+ [[beta != \s\t\a\b\l\e]]
+ sudo snap set openstack deployment.risk=beta
+ echo '&lt;/span&gt;Snap has been automatically configured to deploy from&lt;span class="s1"&gt;' '&lt;/span&gt;beta channel.&lt;span class="s1"&gt;'
Snap has been automatically configured to deploy from beta channel.
+ echo '&lt;/span&gt;Override by passing a custom manifest with &lt;span class="nt"&gt;-m&lt;/span&gt;/--manifest.&lt;span class="s1"&gt;'
Override by passing a custom manifest with -m/--manifest.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this stage, you can deploy the OpenStack cloud with the following command, accepting the default values (it takes about 30 minutes depending on connection speed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;sunbeam cluster bootstrap &lt;span class="nt"&gt;--accept-defaults&lt;/span&gt;
Node has been bootstrapped with roles: compute, control

ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;sunbeam cluster list
                           controller                           
┏━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┓
┃ Node       ┃ Cluster ┃ Machine ┃ Compute ┃ Control ┃ Storage ┃
┡━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━┩
│ microstack │ ONLINE  │ running │ active  │ active  │         │
└────────────┴─────────┴─────────┴─────────┴─────────┴─────────┘

ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status snap.openstack.clusterd.service
● snap.openstack.clusterd.service - Service &lt;span class="k"&gt;for &lt;/span&gt;snap application openstack.clusterd
     Loaded: loaded &lt;span class="o"&gt;(&lt;/span&gt;/etc/systemd/system/snap.openstack.clusterd.service&lt;span class="p"&gt;;&lt;/span&gt; enabled&lt;span class="p"&gt;;&lt;/span&gt; preset: enabled&lt;span class="o"&gt;)&lt;/span&gt;
     Active: active &lt;span class="o"&gt;(&lt;/span&gt;running&lt;span class="o"&gt;)&lt;/span&gt; since Tue 2024-12-24 08:58:36 UTC&lt;span class="p"&gt;;&lt;/span&gt; 1h 10min ago
   Main PID: 4497 &lt;span class="o"&gt;(&lt;/span&gt;sunbeamd&lt;span class="o"&gt;)&lt;/span&gt;
      Tasks: 18 &lt;span class="o"&gt;(&lt;/span&gt;limit: 77123&lt;span class="o"&gt;)&lt;/span&gt;
     Memory: 34.6M &lt;span class="o"&gt;(&lt;/span&gt;peak: 39.2M&lt;span class="o"&gt;)&lt;/span&gt;
        CPU: 11.947s
     CGroup: /system.slice/snap.openstack.clusterd.service
             └─4497 sunbeamd &lt;span class="nt"&gt;--state-dir&lt;/span&gt; /var/snap/openstack/common/state &lt;span class="nt"&gt;--socket-group&lt;/span&gt; snap_daemon &lt;span class="nt"&gt;--verbose&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;MicroStack is now deployed, and we can launch the demo environment, which creates a preconfigured tenant (with dashboard credentials):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ubuntu@microstack:~$ sunbeam configure --accept-defaults --openrc demo-openrc
⠋ Generating openrc for cloud admin usage ... Writing openrc to demo-openrc ... done
The cloud has been configured for sample usage.
You can start using the OpenStack client or access the OpenStack dashboard at http://172.16.1.204:80/openstack-horizon 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ktscew8gzyuoc6nvdn7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ktscew8gzyuoc6nvdn7.jpg" width="622" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I now have the credentials for it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;demo-openrc 
&lt;span class="c"&gt;# openrc for demo&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_AUTH_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://172.16.1.204/openstack-keystone/v3
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_USERNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;demo
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;C0jg0mAgdvD5
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_USER_DOMAIN_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;users
export &lt;/span&gt;&lt;span class="nv"&gt;OS_PROJECT_DOMAIN_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;users
export &lt;/span&gt;&lt;span class="nv"&gt;OS_PROJECT_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;demo
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_AUTH_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_IDENTITY_API_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or for the admin account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;sunbeam openrc &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; admin-openrc
ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;admin-openrc 
&lt;span class="c"&gt;# openrc for access to OpenStack&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_USERNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;admin
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;f7M1ey2dqpHo
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_AUTH_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://172.16.1.204/openstack-keystone/v3
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_USER_DOMAIN_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;admin_domain
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_PROJECT_DOMAIN_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;admin_domain
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_PROJECT_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;admin
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_AUTH_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_IDENTITY_API_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Quick launch of a test instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;sunbeam launch ubuntu &lt;span class="nt"&gt;--name&lt;/span&gt; instance1 
Launching an OpenStack instance ... 
Access the instance by running the following &lt;span class="nb"&gt;command&lt;/span&gt;:
&lt;span class="sb"&gt;`&lt;/span&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; /home/ubuntu/snap/openstack/637/sunbeam ubuntu@172.16.2.31&lt;span class="sb"&gt;`&lt;/span&gt;

ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;source &lt;/span&gt;demo-openrc 
ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;openstack server list
+--------------------------------------+-----------+--------+-----------------------------------------+--------+---------+
| ID                                   | Name      | Status | Networks                                | Image  | Flavor  |
+--------------------------------------+-----------+--------+-----------------------------------------+--------+---------+
| efe46971-56f4-4da4-9c6e-eebee2795b72 | instance1 | ACTIVE | demo-network&lt;span class="o"&gt;=&lt;/span&gt;172.16.2.31, 192.168.0.166 | ubuntu | m1.tiny |
+--------------------------------------+-----------+--------+-----------------------------------------+--------+---------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I access the dashboard via an SSH tunnel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ssh &lt;span class="nt"&gt;-L&lt;/span&gt; 0.0.0.0:8888:172.16.1.204:80 ubuntu@134.209.225.128
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqhlf04d6w2kbnvydu0a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqhlf04d6w2kbnvydu0a.png" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7stllazkmqn5wo3s98gi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7stllazkmqn5wo3s98gi.png" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4925kskn0krf5mlbasa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4925kskn0krf5mlbasa.png" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmz8relr6ktcc2xq0a82n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmz8relr6ktcc2xq0a82n.png" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2le8v5oi2bb6skfh6u9v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2le8v5oi2bb6skfh6u9v.png" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In MicroStack, Sunbeam lets you enable several interesting features:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;sunbeam &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--help&lt;/span&gt;
Usage: sunbeam &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;OPTIONS] COMMAND &lt;span class="o"&gt;[&lt;/span&gt;ARGS]...

  Enable features.

Options:
  &lt;span class="nt"&gt;-m&lt;/span&gt;, &lt;span class="nt"&gt;--manifest&lt;/span&gt; FILE Manifest file.
  &lt;span class="nt"&gt;-h&lt;/span&gt;, &lt;span class="nt"&gt;--help&lt;/span&gt; Show this message and exit.

Commands:
  caas Enable Container as a Service feature.
  dns Enable dns service.
  images-sync Enable images-sync service.
  ldap Enable ldap service.
  loadbalancer Enable Loadbalancer service.
  observability Enable Observability service.
  orchestration Enable Orchestration service.
  pro Enable Ubuntu Pro across deployment.
  resource-optimization Enable Resource Optimization service &lt;span class="o"&gt;(&lt;/span&gt;watcher&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
  secrets Enable OpenStack Secrets service.
  telemetry Enable OpenStack Telemetry applications.
  tls Enable tls group.
  validation Enable OpenStack Integration Test Suite &lt;span class="o"&gt;(&lt;/span&gt;tempest&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
  vault Enable Vault.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enabling several of them …&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;sunbeam &lt;span class="nb"&gt;enable &lt;/span&gt;orchestration
OpenStack orchestration application enabled.
ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;sunbeam &lt;span class="nb"&gt;enable &lt;/span&gt;telemetry
OpenStack telemetry application enabled.
ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;sunbeam &lt;span class="nb"&gt;enable &lt;/span&gt;observability embedded
Observability enabled.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Among them, the observability stack, which deploys Grafana; its URL and credentials are retrieved here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://canonical.com/microstack/docs/observability" rel="noopener noreferrer"&gt;Observability | Canonical&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
(base) ubuntu@microstack:~$ sunbeam observability dashboard-url
http://172.16.1.205/observability-grafana

(base) ubuntu@microstack:~$ juju run --model observability grafana/leader get-admin-password
Running operation 5 with 1 task
  - task 6 on unit-grafana-0

Waiting for task 6...
admin-password: 0EAJrXNIt3jd
url: http://172.16.1.205/observability-grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sunbeam relies on a series of manifests, which can be listed once all these deployments are done:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;sunbeam manifest list
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┓
┃ ID                               ┃ Applied Date        ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━┩
│ ecd1a516be9a244376923e9f7b8217ce │ 2024-12-24 09:00:10 │
└──────────────────────────────────┴─────────────────────┘

ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;sunbeam manifest show ecd1a516be9a244376923e9f7b8217ce
core:
  software:
    charms:
      cinder-ceph-k8s:
        channel: 2024.1/beta
      cinder-k8s:
        channel: 2024.1/beta
      glance-k8s:
        channel: 2024.1/beta
      horizon-k8s:
        channel: 2024.1/beta
      keystone-k8s:
        channel: 2024.1/beta
      microceph:
        channel: squid/beta
        config:
          snap-channel: squid/beta
      neutron-k8s:
        channel: 2024.1/beta
      nova-k8s:
        channel: 2024.1/beta
      openstack-hypervisor:
        channel: 2024.1/beta
        config:
          snap-channel: 2024.1/beta
      ovn-central-k8s:
        channel: 24.03/beta
      ovn-relay-k8s:
        channel: 24.03/beta
      placement-k8s:
        channel: 2024.1/beta
      sunbeam-clusterd:
        channel: 2024.1/beta
        config:
          snap-channel: 2024.1/beta
      sunbeam-machine:
        channel: 2024.1/beta
features:
  caas:
    software:
      charms:
        magnum-k8s:
          channel: 2024.1/beta
  dns:
    software:
      charms:
        designate-bind-k8s:
          channel: 9/beta
        designate-k8s:
          channel: 2024.1/beta
  images-sync:
    software:
      charms:
        openstack-images-sync-k8s:
          channel: 2024.1/beta
  instance-recovery:
    software:
      charms:
        consul-client:
          channel: 1.19/beta
        consul-k8s:
          channel: 1.19/beta
        masakari-k8s:
          channel: 2024.1/beta
  ldap:
    software:
      charms:
        keystone-ldap-k8s:
          channel: 2024.1/beta
  loadbalancer:
    software:
      charms:
        octavia-k8s:
          channel: 2024.1/beta
  orchestration:
    software:
      charms:
        heat-k8s:
          channel: 2024.1/beta
  resource-optimization:
    software:
      charms:
        watcher-k8s:
          channel: 2024.1/beta
  secrets:
    software:
      charms:
        barbican-k8s:
          channel: 2024.1/beta
  telemetry:
    software:
      charms:
        aodh-k8s:
          channel: 2024.1/beta
        ceilometer-k8s:
          channel: 2024.1/beta
        gnocchi-k8s:
          channel: 2024.1/beta
        openstack-exporter-k8s:
          channel: 2024.1/beta
  validation:
    software:
      charms:
        tempest-k8s:
          channel: 2024.1/beta
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Their presence can also be verified with Juju:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;juju status &lt;span class="nt"&gt;-m&lt;/span&gt; admin/controller
Model Controller Cloud/Region Version SLA Timestamp
controller sunbeam-controller one-deer/default 3.6.1 unsupported 10:04:08Z

SAAS Status Store URL
ceilometer waiting &lt;span class="nb"&gt;local &lt;/span&gt;microstack/openstack.ceilometer
cert-distributor active &lt;span class="nb"&gt;local &lt;/span&gt;microstack/openstack.cert-distributor
certificate-authority active &lt;span class="nb"&gt;local &lt;/span&gt;microstack/openstack.certificate-authority
cinder-ceph blocked &lt;span class="nb"&gt;local &lt;/span&gt;microstack/openstack.cinder-ceph
grafana-dashboards active &lt;span class="nb"&gt;local &lt;/span&gt;microstack/observability.grafana-dashboards
keystone-credentials active &lt;span class="nb"&gt;local &lt;/span&gt;microstack/openstack.keystone-credentials
keystone-endpoints active &lt;span class="nb"&gt;local &lt;/span&gt;microstack/openstack.keystone-endpoints
loki-logging active &lt;span class="nb"&gt;local &lt;/span&gt;microstack/observability.loki-logging
nova active &lt;span class="nb"&gt;local &lt;/span&gt;microstack/openstack.nova
ovn-relay active &lt;span class="nb"&gt;local &lt;/span&gt;microstack/openstack.ovn-relay
prometheus-receive-remote-write active &lt;span class="nb"&gt;local &lt;/span&gt;microstack/observability.prometheus-receive-remote-write
rabbitmq active &lt;span class="nb"&gt;local &lt;/span&gt;microstack/openstack.rabbitmq

App Version Status Scale Charm Channel Rev Exposed Message
controller active 1 juju-controller 3.6/stable 116 no       
grafana-agent active 1 grafana-agent latest/stable 260 no tracing: off
k8s 1.31.3 active 1 k8s 1.31/candidate 141 no Ready
microceph unknown 0 microceph squid/beta 84 no       
openstack-hypervisor waiting 1 openstack-hypervisor 2024.1/beta 221 no &lt;span class="o"&gt;(&lt;/span&gt;ceph-access&lt;span class="o"&gt;)&lt;/span&gt; integration incomplete
sunbeam-machine active 1 sunbeam-machine 2024.1/beta 49 no       

Unit Workload Agent Machine Public address Ports Message
controller/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 0 134.209.225.128            
k8s/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 0 134.209.225.128 6443/tcp Ready
openstack-hypervisor/0&lt;span class="k"&gt;*&lt;/span&gt; waiting idle 0 134.209.225.128 &lt;span class="o"&gt;(&lt;/span&gt;ceph-access&lt;span class="o"&gt;)&lt;/span&gt; integration incomplete
  grafana-agent/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 134.209.225.128 tracing: off
sunbeam-machine/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 0 134.209.225.128            

Machine State Address Inst &lt;span class="nb"&gt;id &lt;/span&gt;Base AZ Message
0 started 134.209.225.128 manual: ubuntu@24.04 Manually provisioned machine

Offer Application Charm Rev Connected Endpoint Interface Role
microceph microceph microceph 84 0/0 ceph ceph-client provider

ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;juju status &lt;span class="nt"&gt;-m&lt;/span&gt; openstack
Model Controller Cloud/Region Version SLA Timestamp
openstack sunbeam-controller one-deer-k8s/localhost 3.6.1 unsupported 10:04:48Z

SAAS Status Store URL
grafana-dashboards active &lt;span class="nb"&gt;local &lt;/span&gt;microstack/observability.grafana-dashboards
loki-logging active &lt;span class="nb"&gt;local &lt;/span&gt;microstack/observability.loki-logging
prometheus-receive-remote-write active &lt;span class="nb"&gt;local &lt;/span&gt;microstack/observability.prometheus-receive-remote-write

App Version Status Scale Charm Channel Rev Address Exposed Message
aodh active 1 aodh-k8s 2024.1/beta 62 10.152.183.238 no       
aodh-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.134 no       
aodh-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.200 no       
ceilometer waiting 1 ceilometer-k8s 2024.1/beta 62 10.152.183.88 no &lt;span class="o"&gt;(&lt;/span&gt;workload&lt;span class="o"&gt;)&lt;/span&gt; Not all relations are ready
certificate-authority active 1 self-signed-certificates latest/beta 228 10.152.183.220 no       
cinder active 1 cinder-k8s 2024.1/beta 99 10.152.183.160 no       
cinder-ceph blocked 1 cinder-ceph-k8s 2024.1/beta 97 10.152.183.202 no &lt;span class="o"&gt;(&lt;/span&gt;ceph&lt;span class="o"&gt;)&lt;/span&gt; integration missing
cinder-ceph-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.244 no       
cinder-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.77 no       
cinder-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.92 no       
glance active 1 glance-k8s 2024.1/beta 120 10.152.183.187 no       
glance-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.163 no       
glance-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.116 no       
gnocchi blocked 1 gnocchi-k8s 2024.1/beta 61 10.152.183.81 no &lt;span class="o"&gt;(&lt;/span&gt;ceph&lt;span class="o"&gt;)&lt;/span&gt; integration missing
gnocchi-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.35 no       
gnocchi-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.72 no       
grafana-agent 0.40.4 active 1 grafana-agent-k8s latest/stable 80 10.152.183.169 no       
heat active 1 heat-k8s 2024.1/beta 79 10.152.183.151 no       
heat-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.222 no       
heat-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.249 no       
horizon active 1 horizon-k8s 2024.1/beta 111 10.152.183.234 no http://172.16.1.204/openstack-horizon
horizon-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.131 no       
horizon-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.140 no       
keystone active 1 keystone-k8s 2024.1/beta 213 10.152.183.63 no       
keystone-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.48 no       
keystone-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.108 no       
neutron active 1 neutron-k8s 2024.1/beta 119 10.152.183.212 no       
neutron-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.75 no       
neutron-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.197 no       
nova active 1 nova-k8s 2024.1/beta 109 10.152.183.104 no       
nova-api-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.189 no       
nova-cell-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.178 no       
nova-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.24 no       
nova-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.152 no       
openstack-exporter active 1 openstack-exporter-k8s 2024.1/beta 72 10.152.183.100 no       
ovn-central active 1 ovn-central-k8s 24.03/beta 110 10.152.183.194 no       
ovn-relay active 1 ovn-relay-k8s 24.03/beta 97 172.16.1.201 no       
placement active 1 placement-k8s 2024.1/beta 92 10.152.183.199 no       
placement-mysql 8.0.37-0ubuntu0.22.04.3 active 1 mysql-k8s 8.0/stable 180 10.152.183.83 no       
placement-mysql-router 8.0.37-0ubuntu0.22.04.3 active 1 mysql-router-k8s 8.0/stable 155 10.152.183.248 no       
rabbitmq 3.12.1 active 1 rabbitmq-k8s 3.12/stable 34 172.16.1.202 no       
traefik 2.11.0 active 1 traefik-k8s latest/beta 223 10.152.183.125 no Serving at 172.16.1.203
traefik-public 2.11.0 active 1 traefik-k8s latest/beta 223 10.152.183.54 no Serving at 172.16.1.204
vault blocked 1 vault-k8s 1.16/stable 280 10.152.183.78 no Please initialize Vault or integrate with an auto-unseal provider

Unit Workload Agent Address Ports Message
aodh-mysql-router/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.57          
aodh-mysql/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.6 Primary
aodh/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.90          
ceilometer/0&lt;span class="k"&gt;*&lt;/span&gt; waiting idle 10.1.0.149 &lt;span class="o"&gt;(&lt;/span&gt;workload&lt;span class="o"&gt;)&lt;/span&gt; Not all relations are ready
certificate-authority/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.5           
cinder-ceph-mysql-router/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.167         
cinder-ceph/0&lt;span class="k"&gt;*&lt;/span&gt; blocked idle 10.1.0.108 &lt;span class="o"&gt;(&lt;/span&gt;ceph&lt;span class="o"&gt;)&lt;/span&gt; integration missing
cinder-mysql-router/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.253         
cinder-mysql/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.145 Primary
cinder/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.56          
glance-mysql-router/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.85          
glance-mysql/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.183 Primary
glance/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.251         
gnocchi-mysql-router/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.196         
gnocchi-mysql/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.213 Primary
gnocchi/0&lt;span class="k"&gt;*&lt;/span&gt; blocked idle 10.1.0.55 &lt;span class="o"&gt;(&lt;/span&gt;ceph&lt;span class="o"&gt;)&lt;/span&gt; integration missing
grafana-agent/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.2           
heat-mysql-router/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.54          
heat-mysql/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.9 Primary
heat/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.138         
horizon-mysql-router/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.248         
horizon-mysql/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.185 Primary
horizon/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.35          
keystone-mysql-router/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.243         
keystone-mysql/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.104 Primary
keystone/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.223         
neutron-mysql-router/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.135         
neutron-mysql/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.79 Primary
neutron/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.23          
nova-api-mysql-router/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.93          
nova-cell-mysql-router/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.165         
nova-mysql-router/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.143         
nova-mysql/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.226 Primary
nova/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.147         
openstack-exporter/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.100         
ovn-central/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.222         
ovn-relay/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.82          
placement-mysql-router/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.224         
placement-mysql/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.148 Primary
placement/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.78          
rabbitmq/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.238         
traefik-public/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.96 Serving at 172.16.1.204
traefik/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 10.1.0.151 Serving at 172.16.1.203
vault/0&lt;span class="k"&gt;*&lt;/span&gt; blocked idle 10.1.0.178 Please initialize Vault or integrate with an auto-unseal provider

Offer Application Charm Rev Connected Endpoint Interface Role
ceilometer ceilometer ceilometer-k8s 62 1/1 ceilometer-service ceilometer provider
cert-distributor keystone keystone-k8s 213 2/2 send-ca-cert certificate_transfer provider
certificate-authority certificate-authority self-signed-certificates 228 1/1 certificates tls-certificates provider
cinder-ceph cinder-ceph cinder-ceph-k8s 97 1/1 ceph-access cinder-ceph-key provider
keystone-credentials keystone keystone-k8s 213 1/1 identity-credentials keystone-credentials provider
keystone-endpoints keystone keystone-k8s 213 1/1 identity-service keystone provider
nova nova nova-k8s 109 1/1 nova-service nova provider
ovn-relay ovn-relay ovn-relay-k8s 97 1/1 ovsdb-cms-relay ovsdb-cms provider
rabbitmq rabbitmq rabbitmq-k8s 34 1/1 amqp rabbitmq provider
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgb2266glru0gz6h7ry8e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgb2266glru0gz6h7ry8e.png" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or via Canonical Kubernetes (which supports OpenStack):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ubuntu@microstack:~$ sudo k8s kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/coredns:udp-53/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
ubuntu@microstack:~$ sudo k8s kubectl get po,svc -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/cilium-87pxh 1/1 Running 0 62m
kube-system pod/cilium-operator-6f7f8cf67-5vfsx 1/1 Running 0 63m
kube-system pod/ck-storage-rawfile-csi-controller-0 2/2 Running 0 63m
kube-system pod/ck-storage-rawfile-csi-node-5vbjd 4/4 Running 0 63m
kube-system pod/coredns-598bfdf87d-qt2j4 1/1 Running 0 63m
kube-system pod/metrics-server-7ff9f4d4c9-jqb9x 1/1 Running 0 63m
metallb-system pod/metallb-controller-7bb5f6c9b4-pbzdb 1/1 Running 0 63m
metallb-system pod/metallb-speaker-dxg5x 1/1 Running 0 62m
observability pod/alertmanager-0 2/2 Running 0 13m
observability pod/catalogue-0 2/2 Running 0 13m
observability pod/grafana-0 3/3 Running 0 13m
observability pod/loki-0 3/3 Running 0 12m
observability pod/modeloperator-88fc49d74-tjcnm 1/1 Running 0 14m
observability pod/prometheus-0 2/2 Running 0 13m
observability pod/traefik-0 2/2 Running 0 13m
openstack pod/aodh-0 6/6 Running 0 18m
openstack pod/aodh-mysql-0 2/2 Running 0 22m
openstack pod/aodh-mysql-router-0 2/2 Running 0 19m
openstack pod/ceilometer-0 3/3 Running 0 22m
openstack pod/certificate-authority-0 1/1 Running 0 61m
openstack pod/cinder-0 3/3 Running 0 56m
openstack pod/cinder-ceph-0 2/2 Running 0 56m
openstack pod/cinder-ceph-mysql-router-0 2/2 Running 0 56m
openstack pod/cinder-mysql-0 2/2 Running 0 60m
openstack pod/cinder-mysql-router-0 2/2 Running 0 56m
openstack pod/glance-0 2/2 Running 0 56m
openstack pod/glance-mysql-0 2/2 Running 0 61m
openstack pod/glance-mysql-router-0 2/2 Running 0 56m
openstack pod/gnocchi-0 3/3 Running 0 19m
openstack pod/gnocchi-mysql-0 2/2 Running 0 22m
openstack pod/gnocchi-mysql-router-0 2/2 Running 0 19m
openstack pod/grafana-agent-0 2/2 Running 0 10m
openstack pod/heat-0 4/4 Running 0 29m
openstack pod/heat-mysql-0 2/2 Running 0 30m
openstack pod/heat-mysql-router-0 2/2 Running 0 29m
openstack pod/horizon-0 2/2 Running 0 55m
openstack pod/horizon-mysql-0 2/2 Running 0 61m
openstack pod/horizon-mysql-router-0 2/2 Running 0 55m
openstack pod/keystone-0 2/2 Running 0 55m
openstack pod/keystone-mysql-0 2/2 Running 0 61m
openstack pod/keystone-mysql-router-0 2/2 Running 0 55m
openstack pod/modeloperator-56b4d68fb7-tznnv 1/1 Running 0 62m
openstack pod/neutron-0 2/2 Running 0 55m
openstack pod/neutron-mysql-0 2/2 Running 0 60m
openstack pod/neutron-mysql-router-0 2/2 Running 0 55m
openstack pod/nova-0 5/5 Running 0 56m
openstack pod/nova-api-mysql-router-0 2/2 Running 0 56m
openstack pod/nova-cell-mysql-router-0 2/2 Running 0 56m
openstack pod/nova-mysql-0 2/2 Running 0 60m
openstack pod/nova-mysql-router-0 2/2 Running 0 56m
openstack pod/openstack-exporter-0 2/2 Running 0 22m
openstack pod/ovn-central-0 4/4 Running 0 61m
openstack pod/ovn-relay-0 2/2 Running 0 61m
openstack pod/placement-0 2/2 Running 0 55m
openstack pod/placement-mysql-0 2/2 Running 0 61m
openstack pod/placement-mysql-router-0 2/2 Running 0 55m
openstack pod/rabbitmq-0 2/2 Running 0 61m
openstack pod/traefik-0 2/2 Running 0 61m
openstack pod/traefik-public-0 2/2 Running 0 60m
openstack pod/vault-0 2/2 Running 0 27m

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 &amp;lt;none&amp;gt; 443/TCP 63m
kube-system service/ck-storage-rawfile-csi-controller ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 63m
kube-system service/ck-storage-rawfile-csi-node ClusterIP 10.152.183.237 &amp;lt;none&amp;gt; 9100/TCP 63m
kube-system service/coredns ClusterIP 10.152.183.37 &amp;lt;none&amp;gt; 53/UDP,53/TCP 63m
kube-system service/hubble-peer ClusterIP 10.152.183.23 &amp;lt;none&amp;gt; 443/TCP 63m
kube-system service/metrics-server ClusterIP 10.152.183.119 &amp;lt;none&amp;gt; 443/TCP 63m
metallb-system service/metallb-webhook-service ClusterIP 10.152.183.110 &amp;lt;none&amp;gt; 443/TCP 63m
observability service/alertmanager ClusterIP 10.152.183.36 &amp;lt;none&amp;gt; 9093/TCP,9094/TCP 13m
observability service/alertmanager-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 13m
observability service/catalogue ClusterIP 10.152.183.190 &amp;lt;none&amp;gt; 80/TCP 14m
observability service/catalogue-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 14m
observability service/grafana ClusterIP 10.152.183.170 &amp;lt;none&amp;gt; 3000/TCP 13m
observability service/grafana-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 13m
observability service/loki ClusterIP 10.152.183.201 &amp;lt;none&amp;gt; 3100/TCP 13m
observability service/loki-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 13m
observability service/modeloperator ClusterIP 10.152.183.59 &amp;lt;none&amp;gt; 17071/TCP 14m
observability service/prometheus ClusterIP 10.152.183.147 &amp;lt;none&amp;gt; 9090/TCP 13m
observability service/prometheus-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 13m
observability service/traefik ClusterIP 10.152.183.198 &amp;lt;none&amp;gt; 65535/TCP 13m
observability service/traefik-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 13m
observability service/traefik-lb LoadBalancer 10.152.183.60 172.16.1.205 80:30845/TCP,443:31176/TCP 13m
openstack service/aodh ClusterIP 10.152.183.238 &amp;lt;none&amp;gt; 8042/TCP 20m
openstack service/aodh-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 20m
openstack service/aodh-mysql ClusterIP 10.152.183.134 &amp;lt;none&amp;gt; 3306/TCP,33060/TCP 23m
openstack service/aodh-mysql-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 22m
openstack service/aodh-mysql-primary ClusterIP 10.152.183.176 &amp;lt;none&amp;gt; 3306/TCP 19m
openstack service/aodh-mysql-replicas ClusterIP 10.152.183.74 &amp;lt;none&amp;gt; 3306/TCP 19m
openstack service/aodh-mysql-router ClusterIP 10.152.183.200 &amp;lt;none&amp;gt; 6446/TCP,6447/TCP,65535/TCP 20m
openstack service/aodh-mysql-router-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 20m
openstack service/ceilometer ClusterIP 10.152.183.88 &amp;lt;none&amp;gt; 65535/TCP 23m
openstack service/ceilometer-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 22m
openstack service/certificate-authority ClusterIP 10.152.183.220 &amp;lt;none&amp;gt; 65535/TCP 61m
openstack service/certificate-authority-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 61m
openstack service/cinder ClusterIP 10.152.183.160 &amp;lt;none&amp;gt; 8776/TCP 58m
openstack service/cinder-ceph ClusterIP 10.152.183.202 &amp;lt;none&amp;gt; 65535/TCP 58m
openstack service/cinder-ceph-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/cinder-ceph-mysql-router ClusterIP 10.152.183.244 &amp;lt;none&amp;gt; 6446/TCP,6447/TCP,65535/TCP 57m
openstack service/cinder-ceph-mysql-router-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/cinder-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/cinder-mysql ClusterIP 10.152.183.77 &amp;lt;none&amp;gt; 3306/TCP,33060/TCP 61m
openstack service/cinder-mysql-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 61m
openstack service/cinder-mysql-primary ClusterIP 10.152.183.113 &amp;lt;none&amp;gt; 3306/TCP 55m
openstack service/cinder-mysql-replicas ClusterIP 10.152.183.127 &amp;lt;none&amp;gt; 3306/TCP 55m
openstack service/cinder-mysql-router ClusterIP 10.152.183.92 &amp;lt;none&amp;gt; 6446/TCP,6447/TCP,65535/TCP 57m
openstack service/cinder-mysql-router-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/glance ClusterIP 10.152.183.187 &amp;lt;none&amp;gt; 9292/TCP 58m
openstack service/glance-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/glance-mysql ClusterIP 10.152.183.163 &amp;lt;none&amp;gt; 3306/TCP,33060/TCP 61m
openstack service/glance-mysql-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 61m
openstack service/glance-mysql-primary ClusterIP 10.152.183.245 &amp;lt;none&amp;gt; 3306/TCP 55m
openstack service/glance-mysql-replicas ClusterIP 10.152.183.246 &amp;lt;none&amp;gt; 3306/TCP 55m
openstack service/glance-mysql-router ClusterIP 10.152.183.116 &amp;lt;none&amp;gt; 6446/TCP,6447/TCP,65535/TCP 58m
openstack service/glance-mysql-router-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/gnocchi ClusterIP 10.152.183.81 &amp;lt;none&amp;gt; 65535/TCP 20m
openstack service/gnocchi-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 20m
openstack service/gnocchi-mysql ClusterIP 10.152.183.35 &amp;lt;none&amp;gt; 3306/TCP,33060/TCP 23m
openstack service/gnocchi-mysql-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 22m
openstack service/gnocchi-mysql-primary ClusterIP 10.152.183.146 &amp;lt;none&amp;gt; 3306/TCP 19m
openstack service/gnocchi-mysql-replicas ClusterIP 10.152.183.153 &amp;lt;none&amp;gt; 3306/TCP 19m
openstack service/gnocchi-mysql-router ClusterIP 10.152.183.72 &amp;lt;none&amp;gt; 6446/TCP,6447/TCP,65535/TCP 20m
openstack service/gnocchi-mysql-router-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 20m
openstack service/grafana-agent ClusterIP 10.152.183.169 &amp;lt;none&amp;gt; 3500/TCP,3600/TCP 10m
openstack service/grafana-agent-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 10m
openstack service/heat ClusterIP 10.152.183.151 &amp;lt;none&amp;gt; 8004/TCP 30m
openstack service/heat-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 29m
openstack service/heat-mysql ClusterIP 10.152.183.222 &amp;lt;none&amp;gt; 3306/TCP,33060/TCP 30m
openstack service/heat-mysql-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 30m
openstack service/heat-mysql-primary ClusterIP 10.152.183.98 &amp;lt;none&amp;gt; 3306/TCP 29m
openstack service/heat-mysql-replicas ClusterIP 10.152.183.80 &amp;lt;none&amp;gt; 3306/TCP 29m
openstack service/heat-mysql-router ClusterIP 10.152.183.249 &amp;lt;none&amp;gt; 6446/TCP,6447/TCP,65535/TCP 30m
openstack service/heat-mysql-router-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 29m
openstack service/horizon ClusterIP 10.152.183.234 &amp;lt;none&amp;gt; 65535/TCP 57m
openstack service/horizon-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/horizon-mysql ClusterIP 10.152.183.131 &amp;lt;none&amp;gt; 3306/TCP,33060/TCP 61m
openstack service/horizon-mysql-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 61m
openstack service/horizon-mysql-primary ClusterIP 10.152.183.126 &amp;lt;none&amp;gt; 3306/TCP 55m
openstack service/horizon-mysql-replicas ClusterIP 10.152.183.145 &amp;lt;none&amp;gt; 3306/TCP 55m
openstack service/horizon-mysql-router ClusterIP 10.152.183.140 &amp;lt;none&amp;gt; 6446/TCP,6447/TCP,65535/TCP 57m
openstack service/horizon-mysql-router-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/keystone ClusterIP 10.152.183.63 &amp;lt;none&amp;gt; 5000/TCP 57m
openstack service/keystone-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/keystone-mysql ClusterIP 10.152.183.48 &amp;lt;none&amp;gt; 3306/TCP,33060/TCP 61m
openstack service/keystone-mysql-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 61m
openstack service/keystone-mysql-primary ClusterIP 10.152.183.159 &amp;lt;none&amp;gt; 3306/TCP 55m
openstack service/keystone-mysql-replicas ClusterIP 10.152.183.114 &amp;lt;none&amp;gt; 3306/TCP 55m
openstack service/keystone-mysql-router ClusterIP 10.152.183.108 &amp;lt;none&amp;gt; 6446/TCP,6447/TCP,65535/TCP 57m
openstack service/keystone-mysql-router-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/modeloperator ClusterIP 10.152.183.26 &amp;lt;none&amp;gt; 17071/TCP 62m
openstack service/neutron ClusterIP 10.152.183.212 &amp;lt;none&amp;gt; 9696/TCP 57m
openstack service/neutron-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/neutron-mysql ClusterIP 10.152.183.75 &amp;lt;none&amp;gt; 3306/TCP,33060/TCP 60m
openstack service/neutron-mysql-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 60m
openstack service/neutron-mysql-primary ClusterIP 10.152.183.55 &amp;lt;none&amp;gt; 3306/TCP 55m
openstack service/neutron-mysql-replicas ClusterIP 10.152.183.34 &amp;lt;none&amp;gt; 3306/TCP 55m
openstack service/neutron-mysql-router ClusterIP 10.152.183.197 &amp;lt;none&amp;gt; 6446/TCP,6447/TCP,65535/TCP 57m
openstack service/neutron-mysql-router-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/nova ClusterIP 10.152.183.104 &amp;lt;none&amp;gt; 8774/TCP 58m
openstack service/nova-api-mysql-router ClusterIP 10.152.183.189 &amp;lt;none&amp;gt; 6446/TCP,6447/TCP,65535/TCP 57m
openstack service/nova-api-mysql-router-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/nova-cell-mysql-router ClusterIP 10.152.183.178 &amp;lt;none&amp;gt; 6446/TCP,6447/TCP,65535/TCP 58m
openstack service/nova-cell-mysql-router-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/nova-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/nova-mysql ClusterIP 10.152.183.24 &amp;lt;none&amp;gt; 3306/TCP,33060/TCP 61m
openstack service/nova-mysql-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 61m
openstack service/nova-mysql-primary ClusterIP 10.152.183.130 &amp;lt;none&amp;gt; 3306/TCP 56m
openstack service/nova-mysql-replicas ClusterIP 10.152.183.186 &amp;lt;none&amp;gt; 3306/TCP 56m
openstack service/nova-mysql-router ClusterIP 10.152.183.152 &amp;lt;none&amp;gt; 6446/TCP,6447/TCP,65535/TCP 58m
openstack service/nova-mysql-router-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/openstack-exporter ClusterIP 10.152.183.100 &amp;lt;none&amp;gt; 9180/TCP 23m
openstack service/openstack-exporter-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 22m
openstack service/ovn-central ClusterIP 10.152.183.194 &amp;lt;none&amp;gt; 6641/TCP,6642/TCP 62m
openstack service/ovn-central-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 61m
openstack service/ovn-relay LoadBalancer 10.152.183.149 172.16.1.201 6642:32293/TCP 62m
openstack service/ovn-relay-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 62m
openstack service/placement ClusterIP 10.152.183.199 &amp;lt;none&amp;gt; 8778/TCP 57m
openstack service/placement-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/placement-mysql ClusterIP 10.152.183.83 &amp;lt;none&amp;gt; 3306/TCP,33060/TCP 61m
openstack service/placement-mysql-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 61m
openstack service/placement-mysql-primary ClusterIP 10.152.183.188 &amp;lt;none&amp;gt; 3306/TCP 55m
openstack service/placement-mysql-replicas ClusterIP 10.152.183.165 &amp;lt;none&amp;gt; 3306/TCP 55m
openstack service/placement-mysql-router ClusterIP 10.152.183.248 &amp;lt;none&amp;gt; 6446/TCP,6447/TCP,65535/TCP 57m
openstack service/placement-mysql-router-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 57m
openstack service/rabbitmq LoadBalancer 10.152.183.150 172.16.1.202 5672:31615/TCP,15672:31040/TCP 61m
openstack service/rabbitmq-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 61m
openstack service/traefik ClusterIP 10.152.183.125 &amp;lt;none&amp;gt; 65535/TCP 61m
openstack service/traefik-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 61m
openstack service/traefik-lb LoadBalancer 10.152.183.221 172.16.1.203 80:32485/TCP,443:31534/TCP 61m
openstack service/traefik-public ClusterIP 10.152.183.54 &amp;lt;none&amp;gt; 65535/TCP 61m
openstack service/traefik-public-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 61m
openstack service/traefik-public-lb LoadBalancer 10.152.183.115 172.16.1.204 80:30599/TCP,443:30927/TCP 60m
openstack service/vault ClusterIP 10.152.183.78 &amp;lt;none&amp;gt; 8200/TCP 27m
openstack service/vault-endpoints ClusterIP None &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; 27m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once MicroStack has been deployed, you can manage workloads either manually (that is, through the openstack CLI) or with Juju:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://canonical.com/microstack/docs/juju-workloads" rel="noopener noreferrer"&gt;Manage workloads with Juju | Canonical&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, however, I am going to use Pulumi. Pulumi is a modern infrastructure-as-code (IaC) platform that lets users provision and manage cloud infrastructure with general-purpose programming languages (including TypeScript, JavaScript, Python, Go, .NET, Java, and YAML).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.pulumi.com/" rel="noopener noreferrer"&gt;Pulumi - Infrastructure as Code, Secrets Management, and AI&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Pulumi takes a declarative approach to defining infrastructure. Users specify the desired state of their infrastructure, and Pulumi handles creating, updating, and deleting resources to reach that state. This is more intuitive than imperative programming, where every step toward the desired state must be spelled out explicitly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Pulumi command-line interface is the main tool for managing and deploying infrastructure, so I install it at this point:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://get.pulumi.com | sh
&lt;span class="o"&gt;===&lt;/span&gt; Installing Pulumi 3.144.1 &lt;span class="o"&gt;===&lt;/span&gt;
+ Downloading https://github.com/pulumi/pulumi/releases/download/v3.144.1/pulumi-v3.144.1-linux-x64.tar.gz...
  % Total % Received % Xferd Average Speed Time Time Time Current
                                 Dload Upload Total Spent Left Speed
  0 0 0 0 0 0 0 0 &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- 0
100 80.1M 100 80.1M 0 0 103M 0 &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- 433M
+ Extracting to /home/ubuntu/.pulumi/bin
+ Adding /home/ubuntu/.pulumi/bin to &lt;span class="nv"&gt;$PATH&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; /home/ubuntu/.bashrc

&lt;span class="o"&gt;===&lt;/span&gt; Pulumi is now installed! 🍹 &lt;span class="o"&gt;===&lt;/span&gt;
+ Please restart your shell or add /home/ubuntu/.pulumi/bin to your &lt;span class="nv"&gt;$PATH&lt;/span&gt;
+ Get started with Pulumi: https://www.pulumi.com/docs/quickstart
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;source&lt;/span&gt; .bashrc
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;pulumi
Pulumi - Modern Infrastructure as Code

To begin working with Pulumi, run the &lt;span class="sb"&gt;`&lt;/span&gt;pulumi new&lt;span class="sb"&gt;`&lt;/span&gt; &lt;span class="nb"&gt;command&lt;/span&gt;:

    &lt;span class="nv"&gt;$ &lt;/span&gt;pulumi new

This will prompt you to create a new project &lt;span class="k"&gt;for &lt;/span&gt;your cloud and language of choice.

The most common commands from there are:

    - pulumi up : Deploy code and/or resource changes
    - pulumi stack : Manage instances of your project
    - pulumi config : Alter your stack&lt;span class="s1"&gt;'s configuration or secrets
    - pulumi destroy : Tear down your stack'&lt;/span&gt;s resources entirely

For more information, please visit the project page: https://www.pulumi.com/docs/

Usage:
  pulumi &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;command&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;

Stack Management Commands:
  new Create a new Pulumi project
  config Manage configuration
  stack Manage stacks and view stack state
  console Opens the current stack &lt;span class="k"&gt;in &lt;/span&gt;the Pulumi Console
  import Import resources into an existing stack
  refresh Refresh the resources &lt;span class="k"&gt;in &lt;/span&gt;a stack
  state Edit the current stack&lt;span class="s1"&gt;'s state
  install Install packages and plugins for the current program or policy pack.

Deployment Commands:
  up Create or update the resources in a stack
  destroy Destroy all existing resources in the stack
  preview Show a preview of updates to a stack'&lt;/span&gt;s resources
  cancel Cancel a stack&lt;span class="s1"&gt;'s currently running update, if any

Environment Commands:
  env Manage environments

Pulumi Cloud Commands:
  login Log in to the Pulumi Cloud
  logout Log out of the Pulumi Cloud
  whoami Display the current logged-in user
  org Manage Organization configuration

Policy Management Commands:
  policy Manage resource policies

Plugin Commands:
  plugin Manage language and resource provider plugins
  schema Analyze package schemas
  package Work with Pulumi packages

Other Commands:
  version Print Pulumi'&lt;/span&gt;s version number
  about Print information about the Pulumi environment.
  gen-completion Generate completion scripts &lt;span class="k"&gt;for &lt;/span&gt;the Pulumi CLI

Experimental Commands:
  convert Convert Pulumi programs from a supported &lt;span class="nb"&gt;source &lt;/span&gt;program into other supported languages
  watch Continuously update the resources &lt;span class="k"&gt;in &lt;/span&gt;a stack
  logs Show aggregated resource logs &lt;span class="k"&gt;for &lt;/span&gt;a stack

Flags:
      &lt;span class="nt"&gt;--color&lt;/span&gt; string Colorize output. Choices are: always, never, raw, auto &lt;span class="o"&gt;(&lt;/span&gt;default &lt;span class="s2"&gt;"auto"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="nt"&gt;-C&lt;/span&gt;, &lt;span class="nt"&gt;--cwd&lt;/span&gt; string Run pulumi as &lt;span class="k"&gt;if &lt;/span&gt;it had been started &lt;span class="k"&gt;in &lt;/span&gt;another directory
      &lt;span class="nt"&gt;--disable-integrity-checking&lt;/span&gt; Disable integrity checking of checkpoint files
  &lt;span class="nt"&gt;-e&lt;/span&gt;, &lt;span class="nt"&gt;--emoji&lt;/span&gt; Enable emojis &lt;span class="k"&gt;in &lt;/span&gt;the output
  &lt;span class="nt"&gt;-Q&lt;/span&gt;, &lt;span class="nt"&gt;--fully-qualify-stack-names&lt;/span&gt; Show fully-qualified stack names
  &lt;span class="nt"&gt;-h&lt;/span&gt;, &lt;span class="nt"&gt;--help&lt;/span&gt; &lt;span class="nb"&gt;help &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;pulumi
      &lt;span class="nt"&gt;--logflow&lt;/span&gt; Flow log settings to child processes &lt;span class="o"&gt;(&lt;/span&gt;like plugins&lt;span class="o"&gt;)&lt;/span&gt;
      &lt;span class="nt"&gt;--logtostderr&lt;/span&gt; Log to stderr instead of to files
      &lt;span class="nt"&gt;--memprofilerate&lt;/span&gt; int Enable more precise &lt;span class="o"&gt;(&lt;/span&gt;and expensive&lt;span class="o"&gt;)&lt;/span&gt; memory allocation profiles by setting runtime.MemProfileRate
      &lt;span class="nt"&gt;--non-interactive&lt;/span&gt; Disable interactive mode &lt;span class="k"&gt;for &lt;/span&gt;all commands
      &lt;span class="nt"&gt;--profiling&lt;/span&gt; string Emit CPU and memory profiles and an execution trace to &lt;span class="s1"&gt;'[filename].[pid].{cpu,mem,trace}'&lt;/span&gt;, respectively
      &lt;span class="nt"&gt;--tracing&lt;/span&gt; file: Emit tracing to the specified endpoint. Use the file: scheme to write tracing data to a &lt;span class="nb"&gt;local &lt;/span&gt;file
  &lt;span class="nt"&gt;-v&lt;/span&gt;, &lt;span class="nt"&gt;--verbose&lt;/span&gt; int Enable verbose logging &lt;span class="o"&gt;(&lt;/span&gt;e.g., &lt;span class="nv"&gt;v&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; anything &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;3 is very verbose

Use &lt;span class="sb"&gt;`&lt;/span&gt;pulumi &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;command&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="nt"&gt;--help&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;more information about a command.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I use the machine’s local filesystem to store the state files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(base) ubuntu@microstack:~$ pulumi login --local
Logged in to microstack as ubuntu (file://~)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I use Miniconda to quickly get a ready-to-use Python environment:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.anaconda.com/miniconda/" rel="noopener noreferrer"&gt;Miniconda - Anaconda documentation&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; ~/miniconda3
ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh &lt;span class="nt"&gt;-O&lt;/span&gt; ~/miniconda3/miniconda.sh
ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;bash ~/miniconda3/miniconda.sh &lt;span class="nt"&gt;-b&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; ~/miniconda3
ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;rm&lt;/span&gt; ~/miniconda3/miniconda.sh

&lt;span class="nv"&gt;PREFIX&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/home/ubuntu/miniconda3
Unpacking payload ...

Installing base environment...

Preparing transaction: ...working... &lt;span class="k"&gt;done
&lt;/span&gt;Executing transaction: ...working... &lt;span class="k"&gt;done
&lt;/span&gt;installation finished.
ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;source&lt;/span&gt; ~/miniconda3/bin/activate
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;conda init &lt;span class="nt"&gt;--all&lt;/span&gt;
no change /home/ubuntu/miniconda3/condabin/conda
no change /home/ubuntu/miniconda3/bin/conda
no change /home/ubuntu/miniconda3/bin/conda-env
no change /home/ubuntu/miniconda3/bin/activate
no change /home/ubuntu/miniconda3/bin/deactivate
no change /home/ubuntu/miniconda3/etc/profile.d/conda.sh
no change /home/ubuntu/miniconda3/etc/fish/conf.d/conda.fish
no change /home/ubuntu/miniconda3/shell/condabin/Conda.psm1
no change /home/ubuntu/miniconda3/shell/condabin/conda-hook.ps1
no change /home/ubuntu/miniconda3/lib/python3.12/site-packages/xontrib/conda.xsh
no change /home/ubuntu/miniconda3/etc/profile.d/conda.csh
modified /home/ubuntu/.bashrc
modified /home/ubuntu/.zshrc
modified /home/ubuntu/.config/fish/config.fish
modified /home/ubuntu/.xonshrc
modified /home/ubuntu/.tcshrc

&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; For changes to take effect, close and re-open your current shell. &amp;lt;&lt;span class="o"&gt;==&lt;/span&gt;

&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;source&lt;/span&gt; .bashrc
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;type &lt;/span&gt;pip
pip is /home/ubuntu/miniconda3/bin/pip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creating an environment with the OpenStack provider and its Python template for Pulumi:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mkdir test&lt;/span&gt;
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd test&lt;/span&gt;
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~/test&lt;span class="nv"&gt;$ &lt;/span&gt;pulumi new openstack-python
This &lt;span class="nb"&gt;command &lt;/span&gt;will walk you through creating a new Pulumi project.

Enter a value or leave blank to accept the &lt;span class="o"&gt;(&lt;/span&gt;default&lt;span class="o"&gt;)&lt;/span&gt;, and press &amp;lt;ENTER&amp;gt;.
Press ^C at any &lt;span class="nb"&gt;time &lt;/span&gt;to quit.

Project name &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;:  
Project description &lt;span class="o"&gt;(&lt;/span&gt;A minimal OpenStack Python Pulumi program&lt;span class="o"&gt;)&lt;/span&gt;:  
Created project &lt;span class="s1"&gt;'test'&lt;/span&gt;

Stack name &lt;span class="o"&gt;(&lt;/span&gt;dev&lt;span class="o"&gt;)&lt;/span&gt;:  
Enter your passphrase to protect config/secrets:  
Re-enter your passphrase to confirm:  
Created stack &lt;span class="s1"&gt;'dev'&lt;/span&gt;

The toolchain to use &lt;span class="k"&gt;for &lt;/span&gt;installing dependencies and running the program pip
Installing dependencies...

Creating virtual environment...
Finished creating virtual environment
Updating pip, setuptools, and wheel &lt;span class="k"&gt;in &lt;/span&gt;virtual environment...
Requirement already satisfied: pip &lt;span class="k"&gt;in&lt;/span&gt; ./venv/lib/python3.12/site-packages &lt;span class="o"&gt;(&lt;/span&gt;24.3.1&lt;span class="o"&gt;)&lt;/span&gt;
Collecting setuptools
  Downloading setuptools-75.6.0-py3-none-any.whl.metadata &lt;span class="o"&gt;(&lt;/span&gt;6.7 kB&lt;span class="o"&gt;)&lt;/span&gt;
Collecting wheel
  Downloading wheel-0.45.1-py3-none-any.whl.metadata &lt;span class="o"&gt;(&lt;/span&gt;2.3 kB&lt;span class="o"&gt;)&lt;/span&gt;
Downloading setuptools-75.6.0-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;1.2 MB&lt;span class="o"&gt;)&lt;/span&gt;
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 29.2 MB/s eta 0:00:00
Downloading wheel-0.45.1-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;72 kB&lt;span class="o"&gt;)&lt;/span&gt;
Installing collected packages: wheel, setuptools
Successfully installed setuptools-75.6.0 wheel-0.45.1
Finished updating
Installing dependencies &lt;span class="k"&gt;in &lt;/span&gt;virtual environment...
Collecting pulumi&amp;lt;4.0.0,&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;3.0.0 &lt;span class="o"&gt;(&lt;/span&gt;from &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt &lt;span class="o"&gt;(&lt;/span&gt;line 1&lt;span class="o"&gt;))&lt;/span&gt;
  Downloading pulumi-3.144.1-py3-none-any.whl.metadata &lt;span class="o"&gt;(&lt;/span&gt;12 kB&lt;span class="o"&gt;)&lt;/span&gt;
Collecting pulumi-openstack&amp;lt;4.0.0,&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;3.0.0 &lt;span class="o"&gt;(&lt;/span&gt;from &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt &lt;span class="o"&gt;(&lt;/span&gt;line 2&lt;span class="o"&gt;))&lt;/span&gt;
  Downloading pulumi_openstack-3.15.2-py3-none-any.whl.metadata &lt;span class="o"&gt;(&lt;/span&gt;9.2 kB&lt;span class="o"&gt;)&lt;/span&gt;
Collecting protobuf~&lt;span class="o"&gt;=&lt;/span&gt;4.21 &lt;span class="o"&gt;(&lt;/span&gt;from pulumi&amp;lt;4.0.0,&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;3.0.0-&amp;gt;-r requirements.txt &lt;span class="o"&gt;(&lt;/span&gt;line 1&lt;span class="o"&gt;))&lt;/span&gt;
  Downloading protobuf-4.25.5-cp37-abi3-manylinux2014_x86_64.whl.metadata &lt;span class="o"&gt;(&lt;/span&gt;541 bytes&lt;span class="o"&gt;)&lt;/span&gt;
Collecting grpcio~&lt;span class="o"&gt;=&lt;/span&gt;1.66.2 &lt;span class="o"&gt;(&lt;/span&gt;from pulumi&amp;lt;4.0.0,&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;3.0.0-&amp;gt;-r requirements.txt &lt;span class="o"&gt;(&lt;/span&gt;line 1&lt;span class="o"&gt;))&lt;/span&gt;
  Downloading grpcio-1.66.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata &lt;span class="o"&gt;(&lt;/span&gt;3.9 kB&lt;span class="o"&gt;)&lt;/span&gt;
Collecting dill~&lt;span class="o"&gt;=&lt;/span&gt;0.3 &lt;span class="o"&gt;(&lt;/span&gt;from pulumi&amp;lt;4.0.0,&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;3.0.0-&amp;gt;-r requirements.txt &lt;span class="o"&gt;(&lt;/span&gt;line 1&lt;span class="o"&gt;))&lt;/span&gt;
  Downloading dill-0.3.9-py3-none-any.whl.metadata &lt;span class="o"&gt;(&lt;/span&gt;10 kB&lt;span class="o"&gt;)&lt;/span&gt;
Collecting six~&lt;span class="o"&gt;=&lt;/span&gt;1.12 &lt;span class="o"&gt;(&lt;/span&gt;from pulumi&amp;lt;4.0.0,&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;3.0.0-&amp;gt;-r requirements.txt &lt;span class="o"&gt;(&lt;/span&gt;line 1&lt;span class="o"&gt;))&lt;/span&gt;
  Downloading six-1.17.0-py2.py3-none-any.whl.metadata &lt;span class="o"&gt;(&lt;/span&gt;1.7 kB&lt;span class="o"&gt;)&lt;/span&gt;
Collecting semver~&lt;span class="o"&gt;=&lt;/span&gt;2.13 &lt;span class="o"&gt;(&lt;/span&gt;from pulumi&amp;lt;4.0.0,&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;3.0.0-&amp;gt;-r requirements.txt &lt;span class="o"&gt;(&lt;/span&gt;line 1&lt;span class="o"&gt;))&lt;/span&gt;
  Downloading semver-2.13.0-py2.py3-none-any.whl.metadata &lt;span class="o"&gt;(&lt;/span&gt;5.0 kB&lt;span class="o"&gt;)&lt;/span&gt;
Collecting pyyaml~&lt;span class="o"&gt;=&lt;/span&gt;6.0 &lt;span class="o"&gt;(&lt;/span&gt;from pulumi&amp;lt;4.0.0,&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;3.0.0-&amp;gt;-r requirements.txt &lt;span class="o"&gt;(&lt;/span&gt;line 1&lt;span class="o"&gt;))&lt;/span&gt;
  Downloading PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata &lt;span class="o"&gt;(&lt;/span&gt;2.1 kB&lt;span class="o"&gt;)&lt;/span&gt;
Collecting debugpy~&lt;span class="o"&gt;=&lt;/span&gt;1.8.7 &lt;span class="o"&gt;(&lt;/span&gt;from pulumi&amp;lt;4.0.0,&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;3.0.0-&amp;gt;-r requirements.txt &lt;span class="o"&gt;(&lt;/span&gt;line 1&lt;span class="o"&gt;))&lt;/span&gt;
  Downloading debugpy-1.8.11-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata &lt;span class="o"&gt;(&lt;/span&gt;1.1 kB&lt;span class="o"&gt;)&lt;/span&gt;
Collecting parver&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;0.2.1 &lt;span class="o"&gt;(&lt;/span&gt;from pulumi-openstack&amp;lt;4.0.0,&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;3.0.0-&amp;gt;-r requirements.txt &lt;span class="o"&gt;(&lt;/span&gt;line 2&lt;span class="o"&gt;))&lt;/span&gt;
  Downloading parver-0.5-py3-none-any.whl.metadata &lt;span class="o"&gt;(&lt;/span&gt;2.7 kB&lt;span class="o"&gt;)&lt;/span&gt;
Collecting arpeggio&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;1.7 &lt;span class="o"&gt;(&lt;/span&gt;from parver&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;0.2.1-&amp;gt;pulumi-openstack&amp;lt;4.0.0,&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;3.0.0-&amp;gt;-r requirements.txt &lt;span class="o"&gt;(&lt;/span&gt;line 2&lt;span class="o"&gt;))&lt;/span&gt;
  Downloading Arpeggio-2.0.2-py2.py3-none-any.whl.metadata &lt;span class="o"&gt;(&lt;/span&gt;2.4 kB&lt;span class="o"&gt;)&lt;/span&gt;
Collecting attrs&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;19.2 &lt;span class="o"&gt;(&lt;/span&gt;from parver&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;0.2.1-&amp;gt;pulumi-openstack&amp;lt;4.0.0,&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;3.0.0-&amp;gt;-r requirements.txt &lt;span class="o"&gt;(&lt;/span&gt;line 2&lt;span class="o"&gt;))&lt;/span&gt;
  Downloading attrs-24.3.0-py3-none-any.whl.metadata &lt;span class="o"&gt;(&lt;/span&gt;11 kB&lt;span class="o"&gt;)&lt;/span&gt;
Downloading pulumi-3.144.1-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;294 kB&lt;span class="o"&gt;)&lt;/span&gt;
Downloading pulumi_openstack-3.15.2-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;551 kB&lt;span class="o"&gt;)&lt;/span&gt;
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 551.5/551.5 kB 19.0 MB/s eta 0:00:00
Downloading debugpy-1.8.11-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl &lt;span class="o"&gt;(&lt;/span&gt;4.2 MB&lt;span class="o"&gt;)&lt;/span&gt;
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.2/4.2 MB 99.4 MB/s eta 0:00:00
Downloading dill-0.3.9-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;119 kB&lt;span class="o"&gt;)&lt;/span&gt;
Downloading grpcio-1.66.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl &lt;span class="o"&gt;(&lt;/span&gt;5.8 MB&lt;span class="o"&gt;)&lt;/span&gt;
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.8/5.8 MB 10.8 MB/s eta 0:00:00
Downloading parver-0.5-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;15 kB&lt;span class="o"&gt;)&lt;/span&gt;
Downloading protobuf-4.25.5-cp37-abi3-manylinux2014_x86_64.whl &lt;span class="o"&gt;(&lt;/span&gt;294 kB&lt;span class="o"&gt;)&lt;/span&gt;
Downloading PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl &lt;span class="o"&gt;(&lt;/span&gt;767 kB&lt;span class="o"&gt;)&lt;/span&gt;
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 767.5/767.5 kB 120.7 MB/s eta 0:00:00
Downloading semver-2.13.0-py2.py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;12 kB&lt;span class="o"&gt;)&lt;/span&gt;
Downloading six-1.17.0-py2.py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;11 kB&lt;span class="o"&gt;)&lt;/span&gt;
Downloading Arpeggio-2.0.2-py2.py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;55 kB&lt;span class="o"&gt;)&lt;/span&gt;
Downloading attrs-24.3.0-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;63 kB&lt;span class="o"&gt;)&lt;/span&gt;
Installing collected packages: arpeggio, six, semver, pyyaml, protobuf, grpcio, dill, debugpy, attrs, pulumi, parver, pulumi-openstack
Successfully installed arpeggio-2.0.2 attrs-24.3.0 debugpy-1.8.11 dill-0.3.9 grpcio-1.66.2 parver-0.5 protobuf-4.25.5 pulumi-3.144.1 pulumi-openstack-3.15.2 pyyaml-6.0.2 semver-2.13.0 six-1.17.0
Finished installing dependencies
Finished installing dependencies

Your new project is ready to go! 

To perform an initial deployment, run &lt;span class="sb"&gt;`&lt;/span&gt;pulumi up&lt;span class="sb"&gt;`&lt;/span&gt;


&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~/test&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls
&lt;/span&gt;Pulumi.dev.yaml Pulumi.yaml __main__.py requirements.txt venv
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~/test&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;Pulumi.yaml 
name: &lt;span class="nb"&gt;test
&lt;/span&gt;description: A minimal OpenStack Python Pulumi program
runtime:
  name: python
  options:
    toolchain: pip
    virtualenv: venv
config:
  pulumi:tags:
    value:
      pulumi:template: openstack-python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
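&lt;p&gt;Rather than hard-coding values in the program, stack-specific settings can be read from Pulumi configuration (set beforehand with &lt;code&gt;pulumi config set&lt;/code&gt;). A minimal sketch — the &lt;code&gt;flavor&lt;/code&gt; and &lt;code&gt;image&lt;/code&gt; keys here are assumptions, not something the template generates:&lt;/p&gt;

```python
"""Sketch: reading stack configuration in a Pulumi Python program.

The config keys "flavor" and "image" are illustrative assumptions;
they would be set with `pulumi config set flavor m1.small`, etc.
"""
import pulumi

config = pulumi.Config()

# get() returns None when the key is unset, so we fall back to defaults.
flavor = config.get("flavor") or "m1.small"
image = config.get("image") or "ubuntu"
```
&lt;p&gt;This keeps per-stack values (dev vs. prod flavors, for instance) in &lt;code&gt;Pulumi.dev.yaml&lt;/code&gt; instead of in code.&lt;/p&gt;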



&lt;p&gt;I edit the main Python file to deploy a stack with a new test Ubuntu instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;base&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;ubuntu&lt;/span&gt;&lt;span class="nd"&gt;@microstack&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;~/&lt;/span&gt;&lt;span class="n"&gt;test&lt;/span&gt;&lt;span class="err"&gt;$&lt;/span&gt; &lt;span class="n"&gt;cat&lt;/span&gt; &lt;span class="n"&gt;__main__&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;py&lt;/span&gt; 
&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;An OpenStack Python Pulumi program&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pulumi&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pulumi_openstack&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt;

&lt;span class="c1"&gt;# Create an OpenStack resource (Compute Instance)
&lt;/span&gt;&lt;span class="n"&gt;instance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Instance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;test-pulumi&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;flavor_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m1.small&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;key_pair&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sunbeam&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;                        
        &lt;span class="n"&gt;image_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ubuntu&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Export the IP of the instance
&lt;/span&gt;&lt;span class="n"&gt;pulumi&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;export&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;instance_ip&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;access_ip_v4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
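&lt;p&gt;Going one step further, the floating IP that we will attach by hand later could itself be declared in the program. A hedged sketch only — the pool name &lt;code&gt;external-network&lt;/code&gt; is an assumption to adapt to your MicroStack deployment:&lt;/p&gt;

```python
"""Sketch: the same instance, plus a declaratively managed floating IP.

Assumption: the external floating-IP pool is named "external-network";
adjust to the pool actually exposed by your deployment.
"""
import pulumi
from pulumi_openstack import compute, networking

instance = compute.Instance(
    "test-pulumi",
    flavor_name="m1.small",
    key_pair="sunbeam",
    image_name="ubuntu",
)

# Allocate a floating IP from the external pool (pool name is an assumption).
fip = networking.FloatingIp("test-fip", pool="external-network")

# Associate the floating IP with the instance.
compute.FloatingIpAssociate(
    "test-fip-assoc",
    floating_ip=fip.address,
    instance_id=instance.id,
)

pulumi.export("floating_ip", fip.address)
```
&lt;p&gt;With this in place, &lt;code&gt;pulumi up&lt;/code&gt; would create and attach the floating IP in the same deployment, and &lt;code&gt;pulumi destroy&lt;/code&gt; would release it.&lt;/p&gt;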



&lt;p&gt;And I deploy my stack with Pulumi, using the demo tenant’s environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~/test&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;source&lt;/span&gt; ../demo-openrc 
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~/test&lt;span class="nv"&gt;$ &lt;/span&gt;pulumi up
Enter your passphrase to unlock config/secrets
    &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;set &lt;/span&gt;PULUMI_CONFIG_PASSPHRASE or PULUMI_CONFIG_PASSPHRASE_FILE to remember&lt;span class="o"&gt;)&lt;/span&gt;:  
Enter your passphrase to unlock config/secrets
Previewing update &lt;span class="o"&gt;(&lt;/span&gt;dev&lt;span class="o"&gt;)&lt;/span&gt;:
     Type Name Plan Info
 + pulumi:pulumi:Stack test-dev create 1 warning
 + └─ openstack:compute:Instance test-pulumi create     

Diagnostics:
  pulumi:pulumi:Stack &lt;span class="o"&gt;(&lt;/span&gt;test-dev&lt;span class="o"&gt;)&lt;/span&gt;:
    warning: provider config warning: Users not using loadbalancer resources can ignore this message. Support &lt;span class="k"&gt;for &lt;/span&gt;neutron-lbaas will be removed on next major release. Octavia will be the only supported method &lt;span class="k"&gt;for &lt;/span&gt;loadbalancer resources. Users using octavia will have to remove &lt;span class="s1"&gt;'use_octavia'&lt;/span&gt; option from the provider configuration block. Users using neutron-lbaas will have to migrate/upgrade to octavia.

Outputs:
    instance_ip: output&amp;lt;string&amp;gt;

Resources:
    + 2 to create

Do you want to perform this update? &lt;span class="nb"&gt;yes
&lt;/span&gt;Updating &lt;span class="o"&gt;(&lt;/span&gt;dev&lt;span class="o"&gt;)&lt;/span&gt;:
     Type Name Status Info
 + pulumi:pulumi:Stack test-dev created &lt;span class="o"&gt;(&lt;/span&gt;15s&lt;span class="o"&gt;)&lt;/span&gt; 1 warning
 + └─ openstack:compute:Instance test-pulumi created &lt;span class="o"&gt;(&lt;/span&gt;15s&lt;span class="o"&gt;)&lt;/span&gt;     

Diagnostics:
  pulumi:pulumi:Stack &lt;span class="o"&gt;(&lt;/span&gt;test-dev&lt;span class="o"&gt;)&lt;/span&gt;:
    warning: provider config warning: Users not using loadbalancer resources can ignore this message. Support &lt;span class="k"&gt;for &lt;/span&gt;neutron-lbaas will be removed on next major release. Octavia will be the only supported method &lt;span class="k"&gt;for &lt;/span&gt;loadbalancer resources. Users using octavia will have to remove &lt;span class="s1"&gt;'use_octavia'&lt;/span&gt; option from the provider configuration block. Users using neutron-lbaas will have to migrate/upgrade to octavia.

Outputs:
    instance_ip: &lt;span class="s2"&gt;"192.168.0.227"&lt;/span&gt;

Resources:
    + 2 created

Duration: 16s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The instance is created…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foym3ufblhg3fzsczt884.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foym3ufblhg3fzsczt884.png" width="800" height="317"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~/test&lt;span class="nv"&gt;$ &lt;/span&gt;openstack server list
+--------------------------------------+---------------------+--------+-----------------------------------------+--------+----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+---------------------+--------+-----------------------------------------+--------+----------+
| 6ec4c753-92c5-4221-a76d-8045638efd32 | test-pulumi-54bdcae | ACTIVE | demo-network&lt;span class="o"&gt;=&lt;/span&gt;192.168.0.227 | ubuntu | m1.small |
| efe46971-56f4-4da4-9c6e-eebee2795b72 | instance1 | ACTIVE | demo-network&lt;span class="o"&gt;=&lt;/span&gt;172.16.2.31, 192.168.0.166 | ubuntu | m1.tiny |
+--------------------------------------+---------------------+--------+-----------------------------------------+--------+----------+
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~/test&lt;span class="nv"&gt;$ &lt;/span&gt;openstack server show test-pulumi-54bdcae &lt;span class="nt"&gt;--fit&lt;/span&gt;
+-------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-pulumi-54bdcae |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2024-12-24T10:36:58.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | demo-network&lt;span class="o"&gt;=&lt;/span&gt;192.168.0.227 |
| config_drive | |
| created | 2024-12-24T10:36:53Z |
| description | test-pulumi-54bdcae |
| flavor | &lt;span class="nv"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;, &lt;span class="nv"&gt;disk&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'30'&lt;/span&gt;, &lt;span class="nv"&gt;ephemeral&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'0'&lt;/span&gt;, , &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'m1.small'&lt;/span&gt;, &lt;span class="nv"&gt;is_disabled&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;, &lt;span class="nv"&gt;is_public&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'True'&lt;/span&gt;, &lt;span class="nv"&gt;location&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;, &lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'m1.small'&lt;/span&gt;, &lt;span class="nv"&gt;original_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'m1.small'&lt;/span&gt;, |
| | &lt;span class="nv"&gt;ram&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'2048'&lt;/span&gt;, &lt;span class="nv"&gt;rxtx_factor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;, &lt;span class="nv"&gt;swap&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'0'&lt;/span&gt;, &lt;span class="nv"&gt;vcpus&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'1'&lt;/span&gt; |
| hostId | 021ebc639163d77a5eb8018996d0b8aad50066a8552682313f3f293f |
| host_status | None |
| &lt;span class="nb"&gt;id&lt;/span&gt; | 6ec4c753-92c5-4221-a76d-8045638efd32 |
| image | ubuntu &lt;span class="o"&gt;(&lt;/span&gt;ff3ccb3b-f44f-4b50-a030-20267c302d75&lt;span class="o"&gt;)&lt;/span&gt; |
| key_name | sunbeam |
| locked | False |
| locked_reason | None |
| name | test-pulumi-54bdcae |
| progress | 0 |
| project_id | 8b373f844efd47c8b38c4f1bcdcfba2a |
| properties | |
| security_groups | &lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'default'&lt;/span&gt; |
| server_groups | &lt;span class="o"&gt;[]&lt;/span&gt; |
| status | ACTIVE |
| tags | |
| trusted_image_certificates | None |
| updated | 2024-12-24T10:36:59Z |
| user_id | 114709b3342c45f295d116c63c51884a |
| volumes_attached | |
+-------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We then connect to it after attaching a floating IP address:&lt;br&gt;
&lt;/p&gt;
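&lt;p&gt;The floating IP itself can be allocated and attached with the OpenStack CLI before connecting. This is a sketch only; the external network name and the IP shown are assumptions based on the surrounding output:&lt;/p&gt;

```shell
# Allocate a floating IP from the external network (name is an assumption)
openstack floating ip create external-network

# Attach it to the instance created by Pulumi, then reach it over SSH
openstack server add floating ip test-pulumi-54bdcae 172.16.2.124
ssh -i snap/openstack/637/sunbeam ubuntu@172.16.2.124
```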

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~&lt;span class="nv"&gt;$ &lt;/span&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; snap/openstack/637/sunbeam ubuntu@172.16.2.124
Welcome to Ubuntu 22.04.5 LTS &lt;span class="o"&gt;(&lt;/span&gt;GNU/Linux 5.15.0-127-generic x86_64&lt;span class="o"&gt;)&lt;/span&gt;

 &lt;span class="k"&gt;*&lt;/span&gt; Documentation: https://help.ubuntu.com
 &lt;span class="k"&gt;*&lt;/span&gt; Management: https://landscape.canonical.com
 &lt;span class="k"&gt;*&lt;/span&gt; Support: https://ubuntu.com/pro

 System information as of Tue Dec 24 10:42:48 UTC 2024

  System load: 0.0 Processes: 89
  Usage of /: 5.0% of 28.89GB Users logged &lt;span class="k"&gt;in&lt;/span&gt;: 0
  Memory usage: 9% IPv4 address &lt;span class="k"&gt;for &lt;/span&gt;ens3: 192.168.0.227
  Swap usage: 0%

Expanded Security Maintenance &lt;span class="k"&gt;for &lt;/span&gt;Applications is not enabled.

0 updates can be applied immediately.

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: &lt;span class="nb"&gt;sudo &lt;/span&gt;pro status

The list of available updates is more than a week old.
To check &lt;span class="k"&gt;for &lt;/span&gt;new updates run: &lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
New release &lt;span class="s1"&gt;'24.04.1 LTS'&lt;/span&gt; available.
Run &lt;span class="s1"&gt;'do-release-upgrade'&lt;/span&gt; to upgrade to it.

Last login: Tue Dec 24 10:42:48 2024 from 172.16.2.1
To run a &lt;span class="nb"&gt;command &lt;/span&gt;as administrator &lt;span class="o"&gt;(&lt;/span&gt;user &lt;span class="s2"&gt;"root"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;, use &lt;span class="s2"&gt;"sudo &amp;lt;command&amp;gt;"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
See &lt;span class="s2"&gt;"man sudo_root"&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;details.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The instance is then easily deleted through the stack deployed by Pulumi:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~/test&lt;span class="nv"&gt;$ &lt;/span&gt;pulumi destroy
Enter your passphrase to unlock config/secrets
    &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;set &lt;/span&gt;PULUMI_CONFIG_PASSPHRASE or PULUMI_CONFIG_PASSPHRASE_FILE to remember&lt;span class="o"&gt;)&lt;/span&gt;:  
Enter your passphrase to unlock config/secrets
Previewing destroy &lt;span class="o"&gt;(&lt;/span&gt;dev&lt;span class="o"&gt;)&lt;/span&gt;:
     Type Name Plan       
 - pulumi:pulumi:Stack test-dev delete     
 - └─ openstack:compute:Instance test-pulumi delete     

Outputs:
  - instance_ip: &lt;span class="s2"&gt;"192.168.0.227"&lt;/span&gt;

Resources:
    - 2 to delete

Do you want to perform this destroy? &lt;span class="nb"&gt;yes
&lt;/span&gt;Destroying &lt;span class="o"&gt;(&lt;/span&gt;dev&lt;span class="o"&gt;)&lt;/span&gt;:
     Type Name Status              
 - pulumi:pulumi:Stack test-dev deleted &lt;span class="o"&gt;(&lt;/span&gt;0.00s&lt;span class="o"&gt;)&lt;/span&gt;     
 - └─ openstack:compute:Instance test-pulumi deleted &lt;span class="o"&gt;(&lt;/span&gt;10s&lt;span class="o"&gt;)&lt;/span&gt;       

Outputs:
  - instance_ip: &lt;span class="s2"&gt;"192.168.0.227"&lt;/span&gt;

Resources:
    - 2 deleted

Duration: 11s

The resources &lt;span class="k"&gt;in &lt;/span&gt;the stack have been deleted, but the &lt;span class="nb"&gt;history &lt;/span&gt;and configuration associated with the stack are still maintained. 
If you want to remove the stack completely, run &lt;span class="sb"&gt;`&lt;/span&gt;pulumi stack &lt;span class="nb"&gt;rm &lt;/span&gt;dev&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;

&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~/test&lt;span class="nv"&gt;$ &lt;/span&gt;openstack server list
+--------------------------------------+-----------+--------+-----------------------------------------+--------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-----------+--------+-----------------------------------------+--------+---------+
| efe46971-56f4-4da4-9c6e-eebee2795b72 | instance1 | ACTIVE | demo-network&lt;span class="o"&gt;=&lt;/span&gt;172.16.2.31, 192.168.0.166 | ubuntu | m1.tiny |
+--------------------------------------+-----------+--------+-----------------------------------------+--------+---------+

&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@microstack:~/test&lt;span class="nv"&gt;$ &lt;/span&gt;pulumi stack &lt;span class="nb"&gt;rm &lt;/span&gt;dev
This will permanently remove the &lt;span class="s1"&gt;'dev'&lt;/span&gt; stack!
Please confirm that this is what you&lt;span class="s1"&gt;'d like to do by typing `dev`: dev
Stack '&lt;/span&gt;dev&lt;span class="s1"&gt;' has been removed!
(base) ubuntu@microstack:~/test$ pulumi stack ls
NAME LAST UPDATE RESOURCE COUNT
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Canonical Observability Stack (COS) was deployed earlier. MicroStack automatically propagates metrics and default dashboards, letting you effortlessly monitor the health of your single- or multi-node Sunbeam deployment through Grafana (whose credentials were retrieved via Juju), with no additional configuration needed:&lt;/p&gt;
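&lt;p&gt;The Grafana credentials mentioned above can be retrieved through a Juju action on the COS deployment. A minimal sketch, assuming the observability stack lives in a Juju model named &lt;code&gt;observability&lt;/code&gt; with a &lt;code&gt;grafana&lt;/code&gt; application (names may differ in your deployment):&lt;/p&gt;

```shell
# Ask the Grafana charm for its admin password and dashboard URL
# (model and application names are assumptions)
juju run grafana/leader get-admin-password --model observability
```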

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcnm982bwfjjqdythhbn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcnm982bwfjjqdythhbn.png" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1uy5vwpxoa2hi63ddjwo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1uy5vwpxoa2hi63ddjwo.png" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3vpbkobh4tbtcq8jcw4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3vpbkobh4tbtcq8jcw4.png" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsc1awbmv29eylgiut4i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsc1awbmv29eylgiut4i.png" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgt0r9060m67rf1ps4niu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgt0r9060m67rf1ps4niu.png" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauqmuebf6xoyirqqawcq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauqmuebf6xoyirqqawcq.png" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9y7lagi2zrc0oui8m1nq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9y7lagi2zrc0oui8m1nq.png" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgbb1e9l1dsq6v81ikwl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgbb1e9l1dsq6v81ikwl.png" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All of this (in this particular case) comes at a substantial resource cost …&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4ns3zmqexn1010vg7qu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4ns3zmqexn1010vg7qu.png" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more advanced deployments, MicroStack also supports multi-node clusters. You can follow the detailed tutorial in Canonical’s official documentation to deploy a multi-node OpenStack cluster with Sunbeam and MAAS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://canonical.com/microstack/docs/multi-node" rel="noopener noreferrer"&gt;Multi-node | Canonical&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://canonical.com/microstack/docs/multi-node-maas" rel="noopener noreferrer"&gt;Multi-node with MAAS | Canonical&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
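&lt;p&gt;The multi-node flow described in those tutorials boils down to generating a join token on the primary node and redeeming it on each new machine. A hedged sketch, with a hypothetical host name and placeholder token:&lt;/p&gt;

```shell
# On the primary node: generate a join token for the new machine
# (the host name is an illustrative assumption)
sunbeam cluster add --name node2.example.com

# On the new machine: join the cluster with that token,
# here taking on the compute role
sunbeam cluster join --role compute --token <token-from-previous-step>
```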

&lt;p&gt;As we have seen, MicroStack is designed to be highly customizable, allowing the integration of various storage backends such as Cinder (block storage) and Swift (object storage).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://canonical.com/microstack/docs/object-storage" rel="noopener noreferrer"&gt;Object Storage | Canonical&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Organizations can tailor the platform to their specific needs by integrating dedicated plug-ins and extensions. MicroStack offers a lightweight private cloud solution that is easy to install and operate, making it an ideal choice for teams looking to quickly deploy a cloud environment without the complexity traditionally associated with OpenStack. With its flexibility and customizability, MicroStack is a robust, adaptable solution for a wide range of cloud infrastructure needs, including edge computing scenarios …&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F563uzhqqa1r3a1gm7z7u.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F563uzhqqa1r3a1gm7z7u.jpg" width="534" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ubuntu</category>
      <category>openstack</category>
      <category>devops</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Another way to run OpenStack on top of Kubernetes, with Atmosphere …</title>
      <dc:creator>Karim</dc:creator>
      <pubDate>Mon, 03 Jun 2024 15:53:30 +0000</pubDate>
      <link>https://forem.com/deep75/une-autre-maniere-de-faire-fonctionner-openstack-au-dessus-de-kubernetes-avec-atmosphere--jba</link>
      <guid>https://forem.com/deep75/une-autre-maniere-de-faire-fonctionner-openstack-au-dessus-de-kubernetes-avec-atmosphere--jba</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Au_-0fC4skAZuh9AVXmpT7Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Au_-0fC4skAZuh9AVXmpT7Q.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Several projects exist for deploying OpenStack on top of an existing Kubernetes cluster, starting with OpenStack Helm:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://wiki.openstack.org/wiki/Openstack-helm" rel="noopener noreferrer"&gt;Openstack-helm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openstack/openstack-helm" rel="noopener noreferrer"&gt;GitHub - openstack/openstack-helm: Helm charts for deploying OpenStack on Kubernetes. Mirror of code maintained at opendev.org.&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;But this is also the case for Atmosphere, the open source project from cloud provider VEXXHOST, which delivers a wide range of infrastructure features: virtual machines, Kubernetes, bare metal, block and object storage, load balancers as a service, and more …&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://vexxhost.com/platform/" rel="noopener noreferrer"&gt;Atmosphere deployment tool | VEXXHOST&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Atmosphere runs OpenStack on top of Kubernetes. This architecture enables simple, straightforward deployments, upgrades, and health checks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://vexxhost.com/company/newsroom/vexxhost-launches-atmosphere-a-new-open-source-openstack-deployment-tool/" rel="noopener noreferrer"&gt;VEXXHOST Launches Atmosphere, a New Open Source, OpenStack Deployment Tool&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/wHrTGBdqmmI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/vexxhost/atmosphere" rel="noopener noreferrer"&gt;GitHub - vexxhost/atmosphere: Simple &amp;amp; easy private cloud platform featuring VMs, Kubernetes &amp;amp; bare-metal&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will simply follow the procedure recommended by VEXXHOST for a bare-metal server:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F583%2F0%2ATR5ELWDZxFDXLarG.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F583%2F0%2ATR5ELWDZxFDXLarG.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://vexxhost.github.io/atmosphere/quick-start.html" rel="noopener noreferrer"&gt;Quick Start - Atmosphere 1.10.4.post231.dev0+f2e3b279 documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://deep75.medium.com/openstack-sur-lxd-avec-juju-et-k3sup-dans-phoenixnap-e5867a487497" rel="noopener noreferrer"&gt;OpenStack sur LXD avec Juju et k3sup dans phoenixNAP …&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/deep75/openstack-on-lxd-et-k0s-dans-un-serveur-bare-metal-arm-64-bits-d-aws-5hce"&gt;“OpenStack on LXD” et “k0s” dans un serveur Bare Metal ARM 64 Bits d’AWS …&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/deep75/creer-simplement-un-cluster-k8s-dans-phoenixnap-avec-rancher-en-quelques-clics--3d6i"&gt;Créer simplement un cluster k8s dans PhoenixNAP avec Rancher en quelques clics ...&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To do so, I start from a PhoenixNAP server running Ubuntu 22.04 LTS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AUlAO_YM2O1eRwsUv6BYwoQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AUlAO_YM2O1eRwsUv6BYwoQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ArN5mS70yck-ErBC_Yp8isg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ArN5mS70yck-ErBC_Yp8isg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AZk3XoBUwg29VNcuNrXW0yQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AZk3XoBUwg29VNcuNrXW0yQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Update the server and install the packages required to launch the Atmosphere deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@atmosphere:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;git python3-pip &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;poetry

ubuntu@atmosphere:~&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/vexxhost/atmosphere.git
Cloning into &lt;span class="s1"&gt;'atmosphere'&lt;/span&gt;...
remote: Enumerating objects: 25111, &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
remote: Counting objects: 100% &lt;span class="o"&gt;(&lt;/span&gt;825/825&lt;span class="o"&gt;)&lt;/span&gt;, &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
remote: Compressing objects: 100% &lt;span class="o"&gt;(&lt;/span&gt;398/398&lt;span class="o"&gt;)&lt;/span&gt;, &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
remote: Total 25111 &lt;span class="o"&gt;(&lt;/span&gt;delta 377&lt;span class="o"&gt;)&lt;/span&gt;, reused 704 &lt;span class="o"&gt;(&lt;/span&gt;delta 306&lt;span class="o"&gt;)&lt;/span&gt;, pack-reused 24286
Receiving objects: 100% &lt;span class="o"&gt;(&lt;/span&gt;25111/25111&lt;span class="o"&gt;)&lt;/span&gt;, 10.78 MiB | 50.17 MiB/s, &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
Resolving deltas: 100% &lt;span class="o"&gt;(&lt;/span&gt;14299/14299&lt;span class="o"&gt;)&lt;/span&gt;, &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
ubuntu@atmosphere:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;atmosphere/
ubuntu@atmosphere:~/atmosphere&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls
&lt;/span&gt;CHANGELOG.md Jenkinsfile build doc galaxy.yml hack meta playbooks pyproject.toml test-requirements.txt tox.ini
Dockerfile README.md charts flake.lock go.mod images mkdocs.yml plugins release-please-config.json tests zuul.d
Earthfile atmosphere cmd flake.nix go.sum internal molecule poetry.lock roles tools

ubuntu@atmosphere:~/atmosphere&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;poetry &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--with&lt;/span&gt; dev
Creating virtualenv atmosphere-NEvTTHEY-py3.10 &lt;span class="k"&gt;in&lt;/span&gt; /root/.cache/pypoetry/virtualenvs
Installing dependencies from lock file

Package operations: 89 installs, 0 updates, 0 removals

  - Installing attrs &lt;span class="o"&gt;(&lt;/span&gt;23.2.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing pycparser &lt;span class="o"&gt;(&lt;/span&gt;2.22&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing rpds-py &lt;span class="o"&gt;(&lt;/span&gt;0.18.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing cffi &lt;span class="o"&gt;(&lt;/span&gt;1.16.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing markupsafe &lt;span class="o"&gt;(&lt;/span&gt;2.1.5&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing mdurl &lt;span class="o"&gt;(&lt;/span&gt;0.1.2&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing referencing &lt;span class="o"&gt;(&lt;/span&gt;0.35.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing cryptography &lt;span class="o"&gt;(&lt;/span&gt;42.0.5&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing jinja2 &lt;span class="o"&gt;(&lt;/span&gt;3.1.3&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing jsonschema-specifications &lt;span class="o"&gt;(&lt;/span&gt;2023.12.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing markdown-it-py &lt;span class="o"&gt;(&lt;/span&gt;3.0.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing packaging &lt;span class="o"&gt;(&lt;/span&gt;24.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing pbr &lt;span class="o"&gt;(&lt;/span&gt;6.0.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing pygments &lt;span class="o"&gt;(&lt;/span&gt;2.17.2&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing pyyaml &lt;span class="o"&gt;(&lt;/span&gt;6.0.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing resolvelib &lt;span class="o"&gt;(&lt;/span&gt;1.0.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing wrapt &lt;span class="o"&gt;(&lt;/span&gt;1.16.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing ansible-core &lt;span class="o"&gt;(&lt;/span&gt;2.16.6&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing bracex &lt;span class="o"&gt;(&lt;/span&gt;2.4&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing certifi &lt;span class="o"&gt;(&lt;/span&gt;2024.2.2&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing charset-normalizer &lt;span class="o"&gt;(&lt;/span&gt;3.3.2&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing click &lt;span class="o"&gt;(&lt;/span&gt;8.1.7&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing debtcollector &lt;span class="o"&gt;(&lt;/span&gt;3.0.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing idna &lt;span class="o"&gt;(&lt;/span&gt;3.7&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing iso8601 &lt;span class="o"&gt;(&lt;/span&gt;2.1.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing jsonschema &lt;span class="o"&gt;(&lt;/span&gt;4.21.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing netaddr &lt;span class="o"&gt;(&lt;/span&gt;0.8.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing netifaces &lt;span class="o"&gt;(&lt;/span&gt;0.11.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing oslo-i18n &lt;span class="o"&gt;(&lt;/span&gt;6.3.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing pyparsing &lt;span class="o"&gt;(&lt;/span&gt;3.1.2&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing rich &lt;span class="o"&gt;(&lt;/span&gt;13.7.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing subprocess-tee &lt;span class="o"&gt;(&lt;/span&gt;0.4.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing tzdata &lt;span class="o"&gt;(&lt;/span&gt;2024.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing urllib3 &lt;span class="o"&gt;(&lt;/span&gt;2.2.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing ansible-compat &lt;span class="o"&gt;(&lt;/span&gt;4.1.11&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing click-help-colors &lt;span class="o"&gt;(&lt;/span&gt;0.9.4&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing decorator &lt;span class="o"&gt;(&lt;/span&gt;5.1.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing distro &lt;span class="o"&gt;(&lt;/span&gt;1.9.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing enrich &lt;span class="o"&gt;(&lt;/span&gt;1.2.7&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing exceptiongroup &lt;span class="o"&gt;(&lt;/span&gt;1.2.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing iniconfig &lt;span class="o"&gt;(&lt;/span&gt;2.0.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing jsonpointer &lt;span class="o"&gt;(&lt;/span&gt;2.4&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing mccabe &lt;span class="o"&gt;(&lt;/span&gt;0.7.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing msgpack &lt;span class="o"&gt;(&lt;/span&gt;1.0.8&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing os-service-types &lt;span class="o"&gt;(&lt;/span&gt;1.7.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing oslo-utils &lt;span class="o"&gt;(&lt;/span&gt;7.1.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing pluggy &lt;span class="o"&gt;(&lt;/span&gt;1.5.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing pycodestyle &lt;span class="o"&gt;(&lt;/span&gt;2.9.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing pyflakes &lt;span class="o"&gt;(&lt;/span&gt;2.5.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing requests &lt;span class="o"&gt;(&lt;/span&gt;2.31.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing rfc3986 &lt;span class="o"&gt;(&lt;/span&gt;2.0.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing six &lt;span class="o"&gt;(&lt;/span&gt;1.16.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing stevedore &lt;span class="o"&gt;(&lt;/span&gt;5.2.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing tomli &lt;span class="o"&gt;(&lt;/span&gt;2.0.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing typing-extensions &lt;span class="o"&gt;(&lt;/span&gt;4.11.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing wcmatch &lt;span class="o"&gt;(&lt;/span&gt;8.5.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing appdirs &lt;span class="o"&gt;(&lt;/span&gt;1.4.4&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing coverage &lt;span class="o"&gt;(&lt;/span&gt;7.5.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing docker &lt;span class="o"&gt;(&lt;/span&gt;7.0.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing dogpile-cache &lt;span class="o"&gt;(&lt;/span&gt;1.3.2&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing execnet &lt;span class="o"&gt;(&lt;/span&gt;2.1.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing flake8 &lt;span class="o"&gt;(&lt;/span&gt;5.0.4&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing isort &lt;span class="o"&gt;(&lt;/span&gt;5.13.2&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing jmespath &lt;span class="o"&gt;(&lt;/span&gt;1.0.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing jsonpatch &lt;span class="o"&gt;(&lt;/span&gt;1.33&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing keystoneauth1 &lt;span class="o"&gt;(&lt;/span&gt;5.6.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing molecule &lt;span class="o"&gt;(&lt;/span&gt;6.0.3&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing munch &lt;span class="o"&gt;(&lt;/span&gt;4.0.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing oslo-config &lt;span class="o"&gt;(&lt;/span&gt;9.4.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing oslo-context &lt;span class="o"&gt;(&lt;/span&gt;5.5.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing oslo-serialization &lt;span class="o"&gt;(&lt;/span&gt;5.4.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing py &lt;span class="o"&gt;(&lt;/span&gt;1.11.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing pyinotify &lt;span class="o"&gt;(&lt;/span&gt;0.9.6&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing pytest &lt;span class="o"&gt;(&lt;/span&gt;7.4.4&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing python-dateutil &lt;span class="o"&gt;(&lt;/span&gt;2.9.0.post0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing regex &lt;span class="o"&gt;(&lt;/span&gt;2024.4.28&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing requestsexceptions &lt;span class="o"&gt;(&lt;/span&gt;1.4.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing selinux &lt;span class="o"&gt;(&lt;/span&gt;0.3.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing docker-image-py &lt;span class="o"&gt;(&lt;/span&gt;0.1.12&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing flake8-isort &lt;span class="o"&gt;(&lt;/span&gt;4.2.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing molecule-plugins &lt;span class="o"&gt;(&lt;/span&gt;23.5.3&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing openstacksdk &lt;span class="o"&gt;(&lt;/span&gt;0.62.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing oslo-log &lt;span class="o"&gt;(&lt;/span&gt;5.5.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing pytest-cov &lt;span class="o"&gt;(&lt;/span&gt;3.0.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing pytest-forked &lt;span class="o"&gt;(&lt;/span&gt;1.6.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing pytest-mock &lt;span class="o"&gt;(&lt;/span&gt;3.14.0&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing pytest-xdist &lt;span class="o"&gt;(&lt;/span&gt;3.6.1&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing rjsonnet &lt;span class="o"&gt;(&lt;/span&gt;0.5.4&lt;span class="o"&gt;)&lt;/span&gt;
  - Installing ruyaml &lt;span class="o"&gt;(&lt;/span&gt;0.91.0&lt;span class="o"&gt;)&lt;/span&gt;

Installing the current project: atmosphere &lt;span class="o"&gt;(&lt;/span&gt;1.10.4.post186.dev0+779cb921&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Right away, I launch the deployment, which takes a little under an hour here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@atmosphere:~/atmosphere&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;poetry run molecule converge &lt;span class="nt"&gt;-s&lt;/span&gt; aio
INFO aio scenario &lt;span class="nb"&gt;test &lt;/span&gt;matrix: dependency, create, prepare, converge
INFO Performing prerun with &lt;span class="nv"&gt;role_name_check&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0...
INFO Running aio &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; dependency
WARNING Skipping, missing the requirements file.
WARNING Skipping, missing the requirements file.
INFO Running aio &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; create

PLAY &lt;span class="o"&gt;[&lt;/span&gt;Wait &lt;span class="k"&gt;for &lt;/span&gt;user to &lt;span class="nb"&gt;read &lt;/span&gt;warning] &lt;span class="k"&gt;*******************************************&lt;/span&gt;

TASK &lt;span class="o"&gt;[&lt;/span&gt;Gathering Facts] &lt;span class="k"&gt;*********************************************************&lt;/span&gt;
Monday 03 June 2024 12:02:39 +0000 &lt;span class="o"&gt;(&lt;/span&gt;0:00:00.020&lt;span class="o"&gt;)&lt;/span&gt; 0:00:00.020 &lt;span class="k"&gt;***********&lt;/span&gt;
ok: &lt;span class="o"&gt;[&lt;/span&gt;localhost]
&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt;
PLAY &lt;span class="o"&gt;[&lt;/span&gt;Configure networking] &lt;span class="k"&gt;****************************************************&lt;/span&gt;

TASK &lt;span class="o"&gt;[&lt;/span&gt;Gathering Facts] &lt;span class="k"&gt;*********************************************************&lt;/span&gt;
Monday 03 June 2024 12:49:26 +0000 &lt;span class="o"&gt;(&lt;/span&gt;0:00:00.418&lt;span class="o"&gt;)&lt;/span&gt; 0:45:34.621 &lt;span class="k"&gt;***********&lt;/span&gt;
ok: &lt;span class="o"&gt;[&lt;/span&gt;instance]

TASK &lt;span class="o"&gt;[&lt;/span&gt;Add IP address to &lt;span class="s2"&gt;"br-ex"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="k"&gt;***********************************************&lt;/span&gt;
Monday 03 June 2024 12:49:28 +0000 &lt;span class="o"&gt;(&lt;/span&gt;0:00:02.159&lt;span class="o"&gt;)&lt;/span&gt; 0:45:36.781 &lt;span class="k"&gt;***********&lt;/span&gt;
ok: &lt;span class="o"&gt;[&lt;/span&gt;instance]

TASK &lt;span class="o"&gt;[&lt;/span&gt;Set &lt;span class="s2"&gt;"br-ex"&lt;/span&gt; interface to &lt;span class="s2"&gt;"up"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="k"&gt;*******************************************&lt;/span&gt;
Monday 03 June 2024 12:49:28 +0000 &lt;span class="o"&gt;(&lt;/span&gt;0:00:00.153&lt;span class="o"&gt;)&lt;/span&gt; 0:45:36.934 &lt;span class="k"&gt;***********&lt;/span&gt;
ok: &lt;span class="o"&gt;[&lt;/span&gt;instance]

PLAY RECAP &lt;span class="k"&gt;*********************************************************************&lt;/span&gt;
instance : &lt;span class="nv"&gt;ok&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;669 &lt;span class="nv"&gt;changed&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;267 &lt;span class="nv"&gt;unreachable&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nv"&gt;failed&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nv"&gt;skipped&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;246 &lt;span class="nv"&gt;rescued&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nv"&gt;ignored&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1   

Monday 03 June 2024 12:49:28 +0000 &lt;span class="o"&gt;(&lt;/span&gt;0:00:00.304&lt;span class="o"&gt;)&lt;/span&gt; 0:45:37.239 &lt;span class="k"&gt;***********&lt;/span&gt;
&lt;span class="o"&gt;===============================================================================&lt;/span&gt;
vexxhost.atmosphere.percona_xtradb_cluster : Apply Percona XtraDB cluster - 203.51s
vexxhost.atmosphere.cinder : Deploy Helm chart &lt;span class="nt"&gt;------------------------&lt;/span&gt; 157.84s
vexxhost.atmosphere.keycloak : Deploy Helm chart &lt;span class="nt"&gt;----------------------&lt;/span&gt; 156.65s
vexxhost.atmosphere.heat : Deploy Helm chart &lt;span class="nt"&gt;--------------------------&lt;/span&gt; 120.67s
vexxhost.atmosphere.manila : Deploy Helm chart &lt;span class="nt"&gt;------------------------&lt;/span&gt; 104.84s
vexxhost.atmosphere.nova : Deploy Helm chart &lt;span class="nt"&gt;--------------------------&lt;/span&gt; 100.67s
vexxhost.ceph.osd : Install OSDs &lt;span class="nt"&gt;---------------------------------------&lt;/span&gt; 89.67s
vexxhost.atmosphere.glance : Deploy Helm chart &lt;span class="nt"&gt;-------------------------&lt;/span&gt; 88.95s
vexxhost.atmosphere.magnum : Deploy Helm chart &lt;span class="nt"&gt;-------------------------&lt;/span&gt; 88.06s
vexxhost.atmosphere.octavia : Deploy Helm chart &lt;span class="nt"&gt;------------------------&lt;/span&gt; 83.51s
vexxhost.atmosphere.keystone : Deploy Helm chart &lt;span class="nt"&gt;-----------------------&lt;/span&gt; 80.67s
vexxhost.atmosphere.neutron : Deploy Helm chart &lt;span class="nt"&gt;------------------------&lt;/span&gt; 74.13s
vexxhost.atmosphere.barbican : Deploy Helm chart &lt;span class="nt"&gt;-----------------------&lt;/span&gt; 67.48s
vexxhost.ceph.mon : Run Bootstrap coomand &lt;span class="nt"&gt;------------------------------&lt;/span&gt; 62.21s
vexxhost.atmosphere.placement : Deploy Helm chart &lt;span class="nt"&gt;----------------------&lt;/span&gt; 58.82s
vexxhost.kubernetes.cluster_api : Set node selector &lt;span class="k"&gt;for &lt;/span&gt;Cluster API components &lt;span class="nt"&gt;--&lt;/span&gt; 57.15s
vexxhost.atmosphere.glance_image : Download image &lt;span class="nt"&gt;----------------------&lt;/span&gt; 53.93s
vexxhost.atmosphere.glance_image : Check &lt;span class="k"&gt;if &lt;/span&gt;image exists &lt;span class="nt"&gt;---------------&lt;/span&gt; 50.38s
vexxhost.atmosphere.neutron : Create networks &lt;span class="nt"&gt;--------------------------&lt;/span&gt; 36.62s
vexxhost.atmosphere.rabbitmq : Deploy cluster &lt;span class="nt"&gt;--------------------------&lt;/span&gt; 35.65s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The cluster is deployed, and I can check the endpoints available locally on this physical server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@instance:~# apt &lt;span class="nb"&gt;install &lt;/span&gt;python3-openstackclient &lt;span class="nt"&gt;-y&lt;/span&gt;

root@instance:~# &lt;span class="nb"&gt;source &lt;/span&gt;openrc
root@instance:~# openstack endpoint list
+----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------------------------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------------------------------------------------------+
| 0584539654844f7f956088e43836ed4c | RegionOne | swift | object-store | True | public | https://object-store.131-153-200-197.nip.io/swift/v1/%&lt;span class="o"&gt;(&lt;/span&gt;tenant_id&lt;span class="o"&gt;)&lt;/span&gt;s |
| 069df7eb37584d91a22a95e7d36493d3 | RegionOne | manila | share | True | internal | http://manila-api.openstack.svc.cluster.local:8786/v1 |
| 08b5e9b96ddf4ce79f270f582dc6bf7b | RegionOne | manilav2 | sharev2 | True | public | https://share.131-153-200-197.nip.io/v2 |
| 0b95720b8c0e4de692acf18b769ce8be | RegionOne | heat-cfn | cloudformation | True | admin | http://heat-cfn.openstack.svc.cluster.local:8000/v1 |
| 120958efeb924a0a98bf03ce807c0ace | RegionOne | cinderv3 | volumev3 | True | internal | http://cinder-api.openstack.svc.cluster.local:8776/v3/%&lt;span class="o"&gt;(&lt;/span&gt;tenant_id&lt;span class="o"&gt;)&lt;/span&gt;s |
| 1427e413681e4518ac03d768bb44cb60 | RegionOne | placement | placement | True | admin | http://placement-api.openstack.svc.cluster.local:8778/ |
| 1611e222d7b8447596aa44c0965aec1a | RegionOne | cinderv3 | volumev3 | True | public | https://volume.131-153-200-197.nip.io/v3/%&lt;span class="o"&gt;(&lt;/span&gt;tenant_id&lt;span class="o"&gt;)&lt;/span&gt;s |
| 19047158ec1446b5b746682bc8d9dd93 | RegionOne | heat-cfn | cloudformation | True | internal | http://heat-cfn.openstack.svc.cluster.local:8000/v1 |
| 1e04bacc8aae44b3818de40e386cf68e | RegionOne | barbican | key-manager | True | admin | http://barbican-api.openstack.svc.cluster.local:9311/ |
| 27d7b0d81a104b54a3dcb8632297707f | RegionOne | keystone | identity | True | admin | http://keystone-api.openstack.svc.cluster.local:5000/ |
| 2c4218a3da784f98890bcb6ac6cb20ae | RegionOne | heat | orchestration | True | admin | http://heat-api.openstack.svc.cluster.local:8004/v1/%&lt;span class="o"&gt;(&lt;/span&gt;project_id&lt;span class="o"&gt;)&lt;/span&gt;s |
| 2f19518bd66c421eb6a5e25f7c93c96e | RegionOne | manila | share | True | public | http://manila.openstack.svc.cluster.local/v1 |
| 3578d37453854cfbafdc18f2be4f0a62 | RegionOne | glance | image | True | public | https://image.131-153-200-197.nip.io/ |
| 3a2e106afdbf43689676c85718319f88 | RegionOne | glance | image | True | admin | http://glance-api.openstack.svc.cluster.local:9292/ |
| 3e3240df5df847bc84c204e0b18783f1 | RegionOne | glance | image | True | internal | http://glance-api.openstack.svc.cluster.local:9292/ |
| 3f51c888203c433ea31d2ca67cf3e359 | RegionOne | manilav2 | sharev2 | True | admin | http://manila-api.openstack.svc.cluster.local:8786/v2 |
| 44f08551b541410d8bb28b3a148dca0f | RegionOne | heat | orchestration | True | internal | http://heat-api.openstack.svc.cluster.local:8004/v1/%&lt;span class="o"&gt;(&lt;/span&gt;project_id&lt;span class="o"&gt;)&lt;/span&gt;s |
| 6501cca34cd5401d8561522f062ec126 | RegionOne | magnum | container-infra | True | internal | http://magnum-api.openstack.svc.cluster.local:9511/v1 |
| 65c4f687017d4ff4b774a61cd670de52 | RegionOne | barbican | key-manager | True | public | https://key-manager.131-153-200-197.nip.io/ |
| 695342e32bf84ae48cc0c873227ce1ce | RegionOne | heat | orchestration | True | public | https://orchestration.131-153-200-197.nip.io/v1/%&lt;span class="o"&gt;(&lt;/span&gt;project_id&lt;span class="o"&gt;)&lt;/span&gt;s |
| 71af00dd937e49c28827dadbe6d55bbe | RegionOne | swift | object-store | True | internal | http://rook-ceph-rgw-ceph.openstack.svc.cluster.local/swift/v1/%&lt;span class="o"&gt;(&lt;/span&gt;tenant_id&lt;span class="o"&gt;)&lt;/span&gt;s |
| 7ada3efd05b64af1ae726d495fe61a6e | RegionOne | cinderv3 | volumev3 | True | admin | http://cinder-api.openstack.svc.cluster.local:8776/v3/%&lt;span class="o"&gt;(&lt;/span&gt;tenant_id&lt;span class="o"&gt;)&lt;/span&gt;s |
| 87e74af69cbb4577b4dd29d106aed5fa | RegionOne | magnum | container-infra | True | public | https://container-infra.131-153-200-197.nip.io/v1 |
| 8dae31f3eed946eea30025e56fbfd83a | RegionOne | nova | compute | True | admin | http://nova-api.openstack.svc.cluster.local:8774/v2.1 |
| 8e562226dccc4df0af3c8cdfdec95084 | RegionOne | barbican | key-manager | True | internal | http://barbican-api.openstack.svc.cluster.local:9311/ |
| 8f629f7e748d4c3087bf17aaa0ce2e47 | RegionOne | manila | share | True | admin | http://manila-api.openstack.svc.cluster.local:8786/v1 |
| 91149164a0584145b6344ae1d525457b | RegionOne | keystone | identity | True | public | https://identity.131-153-200-197.nip.io/ |
| a936e3ca42a54b6e97672f20e17c095b | RegionOne | neutron | network | True | admin | http://neutron-server.openstack.svc.cluster.local:9696/ |
| b987ba9af66c4a14ba42d1fd8c1b2285 | RegionOne | placement | placement | True | public | https://placement.131-153-200-197.nip.io/ |
| b9e2e43e90d649d5b6089c7474ba30cd | RegionOne | octavia | load-balancer | True | internal | http://octavia-api.openstack.svc.cluster.local:9876/ |
| bf2839c099f946e39d21a0b6bcad0b87 | RegionOne | neutron | network | True | public | https://network.131-153-200-197.nip.io/ |
| bf959da766554e91972fe1da9cff6d8e | RegionOne | heat-cfn | cloudformation | True | public | https://cloudformation.131-153-200-197.nip.io/v1 |
| d027a62b34ca4eaabc508c04ad8f94e7 | RegionOne | magnum | container-infra | True | admin | http://magnum-api.openstack.svc.cluster.local:9511/v1 |
| d3e857109be14bfabf82e962068b54af | RegionOne | nova | compute | True | public | https://compute.131-153-200-197.nip.io/v2.1 |
| d6985c01aca54ec49e0ec1c728fc271b | RegionOne | nova | compute | True | internal | http://nova-api.openstack.svc.cluster.local:8774/v2.1 |
| d74feae56d034bd7ad421e4588f0731e | RegionOne | octavia | load-balancer | True | admin | http://octavia-api.openstack.svc.cluster.local:9876/ |
| e7bcb22c547e41489cc7d5732bca4a84 | RegionOne | manilav2 | sharev2 | True | internal | http://manila-api.openstack.svc.cluster.local:8786/v2 |
| f005b62e7769497eb54b0c0a3fa3c587 | RegionOne | octavia | load-balancer | True | public | https://load-balancer.131-153-200-197.nip.io/ |
| f40bb006a4984428b581042a78539ef8 | RegionOne | neutron | network | True | internal | http://neutron-server.openstack.svc.cluster.local:9696/ |
| f4d8af60368645c7a5f1d9605afb5494 | RegionOne | placement | placement | True | internal | http://placement-api.openstack.svc.cluster.local:8778/ |
| f5aa8f8ba519480199f224800480e8cb | RegionOne | keystone | identity | True | internal | http://keystone-api.openstack.svc.cluster.local:5000/ |
+----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
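For scripting rather than reading, the same catalog can be queried with the client's value formatter (standard openstack CLI flags), and the nip.io host embedded in each public URL can be extracted with plain shell. A minimal sketch, using a URL copied from the listing above:

```shell
# Machine-readable view of the public endpoints (no table borders):
#   openstack endpoint list --interface public -f value -c "Service Name" -c URL
# Each public URL embeds a nip.io wildcard host; parameter expansion pulls it out:
url='https://identity.131-153-200-197.nip.io/'
host="${url#https://}"   # strip the scheme
host="${host%%/*}"       # drop everything from the first slash
echo "$host"             # identity.131-153-200-197.nip.io
```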



&lt;p&gt;As well as the endpoints exposed through the Ingress Controller of the underlying Kubernetes cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@instance:~# kubectl cluster-info
Kubernetes control plane is running at https://10.96.240.10:6443
CoreDNS is running at https://10.96.240.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use &lt;span class="s1"&gt;'kubectl cluster-info dump'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
root@instance:~# kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
instance Ready control-plane 60m v1.28.4 131.153.200.197 &amp;lt;none&amp;gt; Ubuntu 22.04.4 LTS 5.15.0-105-generic containerd://1.7.9
root@instance:~# kubectl get ing &lt;span class="nt"&gt;-A&lt;/span&gt;
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
auth-system keycloak atmosphere keycloak.131-153-200-197.nip.io 10.98.36.135 80, 443 54m
monitoring kube-prometheus-stack-alertmanager atmosphere alertmanager.131-153-200-197.nip.io 10.98.36.135 80, 443 53m
monitoring kube-prometheus-stack-grafana atmosphere grafana.131-153-200-197.nip.io 10.98.36.135 80, 443 53m
monitoring kube-prometheus-stack-prometheus atmosphere prometheus.131-153-200-197.nip.io 10.98.36.135 80, 443 53m
openstack cloudformation atmosphere cloudformation.131-153-200-197.nip.io 10.98.36.135 80, 443 33m
openstack compute atmosphere compute.131-153-200-197.nip.io 10.98.36.135 80, 443 38m
openstack compute-novnc-proxy atmosphere vnc.131-153-200-197.nip.io 10.98.36.135 80, 443 38m
openstack container-infra atmosphere container-infra.131-153-200-197.nip.io 10.98.36.135 80, 443 27m
openstack container-infra-registry atmosphere container-infra-registry.131-153-200-197.nip.io 10.98.36.135 80, 443 27m
openstack dashboard atmosphere dashboard.131-153-200-197.nip.io 10.98.36.135 80, 443 23m
openstack identity atmosphere identity.131-153-200-197.nip.io 10.98.36.135 80, 443 51m
openstack image atmosphere image.131-153-200-197.nip.io 10.98.36.135 80, 443 46m
openstack key-manager atmosphere key-manager.131-153-200-197.nip.io 10.98.36.135 80, 443 49m
openstack load-balancer atmosphere load-balancer.131-153-200-197.nip.io 10.98.36.135 80, 443 30m
openstack network atmosphere network.131-153-200-197.nip.io 10.98.36.135 80, 443 36m
openstack orchestration atmosphere orchestration.131-153-200-197.nip.io 10.98.36.135 80, 443 33m
openstack placement atmosphere placement.131-153-200-197.nip.io 10.98.36.135 80, 443 41m
openstack rook-ceph-cluster atmosphere object-store.131-153-200-197.nip.io 10.98.36.135 80, 443 48m
openstack sharev2 atmosphere share.131-153-200-197.nip.io 10.98.36.135 80, 443 23m
openstack volumev3 atmosphere volume.131-153-200-197.nip.io 10.98.36.135 80, 443 42m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
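All of these hosts rely on nip.io wildcard DNS: any name of the form service.131-153-200-197.nip.io resolves to 131.153.200.197 with no DNS configuration at all. A quick sketch of that convention (the helper function is hypothetical, for illustration only):

```shell
# nip.io resolves service.DASHED-IP.nip.io back to the IP embedded in the name.
# Hypothetical helper rebuilding such a host from a service name and an IP:
nipio_host() {
  local service="$1" ip="$2"
  echo "${service}.${ip//./-}.nip.io"   # dots in the IP become dashes
}
nipio_host grafana 131.153.200.197      # grafana.131-153-200-197.nip.io
```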



&lt;p&gt;The wildcard domain used here gives me access to the Horizon dashboard, for simplicity’s sake:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ADgAT7LOW2SUDS1_Uuu4rfQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ADgAT7LOW2SUDS1_Uuu4rfQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AZDPH1m97k1ZLtooPAasgoA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AZDPH1m97k1ZLtooPAasgoA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AWERMZDhjH5dqW1vbWlADzg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AWERMZDhjH5dqW1vbWlADzg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I upload a local Ubuntu 24.04 LTS image into the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@instance:~# &lt;span class="nb"&gt;cat &lt;/span&gt;openrc 
&lt;span class="c"&gt;# Ansible managed: Do NOT edit this file manually!&lt;/span&gt;

&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_IDENTITY_API_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3

&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_AUTH_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://identity.131-153-200-197.nip.io/v3"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_AUTH_TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;password
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_REGION_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"RegionOne"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_USER_DOMAIN_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Default
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_USERNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"admin-RegionOne"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"lzi232PTaHpzoC2HjwSLKepZELQd6ENJ"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_PROJECT_DOMAIN_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Default
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_PROJECT_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;admin

&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OS_CACERT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/share/ca-certificates/atmosphere.crt

root@instance:~# wget &lt;span class="nt"&gt;-c&lt;/span&gt; https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img

Saving to: ‘noble-server-cloudimg-amd64.img’

noble-server-cloudimg-amd64.img 100%[&lt;span class="o"&gt;=====================================================================================================&amp;gt;]&lt;/span&gt; 454.00M 17.9MB/s &lt;span class="k"&gt;in &lt;/span&gt;30s     

2024-06-03 13:22:40 &lt;span class="o"&gt;(&lt;/span&gt;15.3 MB/s&lt;span class="o"&gt;)&lt;/span&gt; - ‘noble-server-cloudimg-amd64.img’ saved &lt;span class="o"&gt;[&lt;/span&gt;476053504/476053504]

root@instance:~# openstack image create &lt;span class="nt"&gt;--public&lt;/span&gt; &lt;span class="nt"&gt;--container-format&lt;/span&gt; bare &lt;span class="nt"&gt;--disk-format&lt;/span&gt; qcow2 &lt;span class="nt"&gt;--file&lt;/span&gt; ~/noble-server-cloudimg-amd64.img ubuntu-24.04
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| container_format | bare |
| created_at | 2024-06-03T13:23:12Z |
| disk_format | qcow2 |
| file | /v2/images/81d8eafa-4054-455c-9640-4e83c0566d21/file |
| &lt;span class="nb"&gt;id&lt;/span&gt; | 81d8eafa-4054-455c-9640-4e83c0566d21 |
| min_disk | 0 |
| min_ram | 0 |
| name | ubuntu-24.04 |
| owner | 43321f42e8434f8aa53531bd104e2809 |
| properties | &lt;span class="nv"&gt;locations&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'[]'&lt;/span&gt;, &lt;span class="nv"&gt;os_hidden&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'False'&lt;/span&gt;, owner_specified.openstack.md5&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;''&lt;/span&gt;, owner_specified.openstack.object&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'images/ubuntu-24.04'&lt;/span&gt;, owner_specified.openstack.sha256&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;''&lt;/span&gt; |
| protected | False |
| schema | /v2/schemas/image |
| status | queued |
| tags | |
| updated_at | 2024-06-03T13:23:12Z |
| visibility | public |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+

root@instance:~# openstack image list
+--------------------------------------+---------------------------+--------+
| ID | Name | Status |
+--------------------------------------+---------------------------+--------+
| 04e2f39a-e4ff-426b-aeee-b82acd3bf611 | amphora-x64-haproxy | active |
| 49a3b09d-c191-4c36-8541-31efaffb404d | cirros | active |
| b7e75cdb-37b0-4f2a-badf-cfdee7bca83d | manila-service-image | active |
| 3c5df46c-7015-411c-8009-afa6695672a6 | ubuntu-2204-kube-v1.27.8s | active |
| 81d8eafa-4054-455c-9640-4e83c0566d21 | ubuntu-24.04 | active |
+--------------------------------------+---------------------------+--------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A8u2DwINnORNfR-t1ewHMvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A8u2DwINnORNfR-t1ewHMvg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Setting up a network topology for a quick test:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AMpiV-Sd046CPZj3qgpIVBg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AMpiV-Sd046CPZj3qgpIVBg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the image loaded into Glance, I launch three small Ubuntu 24.04 LTS instances…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A0_x7xPMDO2Vyvho5RJ_PEQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A0_x7xPMDO2Vyvho5RJ_PEQ.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
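The instances were launched from Horizon; for reference, the CLI equivalent would look like the dry run below. The image, flavor, and network names come from the listings in this article, while the keypair name "mykey" is an assumption:

```shell
# Dry run: print the three "openstack server create" commands instead of
# running them. The keypair "mykey" is hypothetical; substitute a real one.
cmds=$(for i in 1 2 3; do
  echo "openstack server create --image ubuntu-24.04 --flavor m1.small" \
       "--network network1 --key-name mykey k0s-$i"
done)
echo "$cmds"
```

Dropping the outer `echo` would execute the commands for real against the cloud.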

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@instance:~# openstack server list
+--------------------------------------+-------+--------+--------------------------------------+--------------+----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------+--------+--------------------------------------+--------------+----------+
| 07d6bf6e-edff-4729-9133-31849ea6fe87 | k0s-3 | ACTIVE | &lt;span class="nv"&gt;network1&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.96.250.216, 11.12.13.185 | ubuntu-24.04 | m1.small |
| afd10cda-f99c-4877-9948-a7f25b5e756a | k0s-1 | ACTIVE | &lt;span class="nv"&gt;network1&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.96.250.211, 11.12.13.99 | ubuntu-24.04 | m1.small |
| d70230a1-9fdf-40d9-959d-c9960479b4a5 | k0s-2 | ACTIVE | &lt;span class="nv"&gt;network1&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.96.250.208, 11.12.13.80 | ubuntu-24.04 | m1.small |
+--------------------------------------+-------+--------+--------------------------------------+--------------+----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
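Each entry in the Networks column above packs the fixed and floating addresses into a single field; when feeding those IPs into tooling such as the k0sctl configuration that follows, they can be split with parameter expansion. A sketch on a value copied from the table:

```shell
# The Networks field has the form "network1=FIXED-IP, FLOATING-IP".
# Splitting one captured value (taken from the server list above):
nets='network1=10.96.250.211, 11.12.13.99'
fixed="${nets#*=}"       # drop the "network1=" prefix
fixed="${fixed%%,*}"     # keep only the part before the comma
floating="${nets#*, }"   # keep only the part after ", "
echo "$fixed / $floating"   # 10.96.250.211 / 11.12.13.99
```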



&lt;p&gt;A simple Kubernetes cluster can then be built on top of these three instances with k0sctl.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/k0sproject/k0sctl" rel="noopener noreferrer"&gt;GitHub - k0sproject/k0sctl: A bootstrapping and management tool for k0s clusters.&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@instance:~# wget -c https://github.com/k0sproject/k0sctl/releases/download/v0.17.8/k0sctl-linux-x64
root@instance:~# mv k0sctl-linux-x64 /usr/local/bin/k0sctl &amp;amp;&amp;amp; chmod +x /usr/local/bin/k0sctl 
root@instance:~# k0sctl
NAME:
   k0sctl - k0s cluster management tool

USAGE:
   k0sctl [global options] command [command options] 

COMMANDS:
   version Output k0sctl version
   apply Apply a k0sctl configuration
   kubeconfig Output the admin kubeconfig of the cluster
   init Create a configuration template
   reset Remove traces of k0s from all of the hosts
   backup Take backup of existing clusters state
   config Configuration related sub-commands
   completion  
   help, h Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug, -d Enable debug logging (default: false) [$DEBUG]
   --trace Enable trace logging (default: false) [$TRACE]
   --no-redact Do not hide sensitive information in the output (default: false)
   --help, -h show help

root@instance:~# k0sctl init --k0s &amp;gt; k0sctl.yaml

root@instance:~# cat k0sctl.yaml 
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 10.96.250.211
      user: ubuntu
      port: 22
      keyPath: /root/.ssh/id_rsa
    role: controller
  - ssh:
      address: 10.96.250.208
      user: ubuntu
      port: 22
      keyPath: /root/.ssh/id_rsa
    role: worker
  - ssh:
      address: 10.96.250.216
      user: ubuntu
      port: 22
      keyPath: /root/.ssh/id_rsa
    role: worker
  k0s:
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: Cluster
      metadata:
        name: k0s
      spec:
        api:
          k0sApiPort: 9443
          port: 6443
        installConfig:
          users:
            etcdUser: etcd
            kineUser: kube-apiserver
            konnectivityUser: konnectivity-server
            kubeAPIserverUser: kube-apiserver
            kubeSchedulerUser: kube-scheduler
        konnectivity:
          adminPort: 8133
          agentPort: 8132
        network:
          kubeProxy:
            disabled: false
            mode: iptables
          kuberouter:
            autoMTU: true
            mtu: 0
            peerRouterASNs: ""
            peerRouterIPs: ""
          podCIDR: 10.244.0.0/16
          provider: kuberouter
          serviceCIDR: 10.96.0.0/12
        podSecurityPolicy:
          defaultPolicy: 00-k0s-privileged
        storage:
          type: etcd
        telemetry:
          enabled: true

root@instance:~# k0sctl apply --config k0sctl.yaml 

⠀⣿⣿⡇⠀⠀⢀⣴⣾⣿⠟⠁⢸⣿⣿⣿⣿⣿⣿⣿⡿⠛⠁⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀█████████ █████████ ███
⠀⣿⣿⡇⣠⣶⣿⡿⠋⠀⠀⠀⢸⣿⡇⠀⠀⠀⣠⠀⠀⢀⣠⡆⢸⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀███ ███ ███
⠀⣿⣿⣿⣿⣟⠋⠀⠀⠀⠀⠀⢸⣿⡇⠀⢰⣾⣿⠀⠀⣿⣿⡇⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀███ ███ ███
⠀⣿⣿⡏⠻⣿⣷⣤⡀⠀⠀⠀⠸⠛⠁⠀⠸⠋⠁⠀⠀⣿⣿⡇⠈⠉⠉⠉⠉⠉⠉⠉⠉⢹⣿⣿⠀███ ███ ███
⠀⣿⣿⡇⠀⠀⠙⢿⣿⣦⣀⠀⠀⠀⣠⣶⣶⣶⣶⣶⣶⣿⣿⡇⢰⣶⣶⣶⣶⣶⣶⣶⣶⣾⣿⣿⠀█████████ ███ ██████████
k0sctl v0.17.8 Copyright 2023, k0sctl authors.
Anonymized telemetry of usage will be sent to the authors.
By continuing to use k0sctl you agree to these terms:
https://k0sproject.io/licenses/eula
INFO ==&amp;gt; Running phase: Set k0s version  
INFO Looking up latest stable k0s version         
INFO Using k0s version v1.30.1+k0s.0              
INFO ==&amp;gt; Running phase: Connect to hosts 
INFO [ssh] 10.96.250.216:22: connected            
INFO [ssh] 10.96.250.208:22: connected            
INFO [ssh] 10.96.250.211:22: connected            
INFO ==&amp;gt; Running phase: Detect host operating systems 
INFO [ssh] 10.96.250.216:22: is running Ubuntu 24.04 LTS 
INFO [ssh] 10.96.250.208:22: is running Ubuntu 24.04 LTS 
INFO [ssh] 10.96.250.211:22: is running Ubuntu 24.04 LTS 
INFO ==&amp;gt; Running phase: Acquire exclusive host lock 
INFO ==&amp;gt; Running phase: Prepare hosts    
INFO ==&amp;gt; Running phase: Gather host facts 
INFO [ssh] 10.96.250.216:22: using k0s-3 as hostname 
INFO [ssh] 10.96.250.211:22: using k0s-1 as hostname 
INFO [ssh] 10.96.250.208:22: using k0s-2 as hostname 
INFO [ssh] 10.96.250.211:22: discovered ens3 as private interface 
INFO [ssh] 10.96.250.208:22: discovered ens3 as private interface 
INFO [ssh] 10.96.250.216:22: discovered ens3 as private interface 
INFO [ssh] 10.96.250.211:22: discovered 11.12.13.99 as private address 
INFO [ssh] 10.96.250.208:22: discovered 11.12.13.80 as private address 
INFO [ssh] 10.96.250.216:22: discovered 11.12.13.185 as private address 
INFO ==&amp;gt; Running phase: Validate hosts   
INFO ==&amp;gt; Running phase: Validate facts   
INFO ==&amp;gt; Running phase: Download k0s on hosts 
INFO [ssh] 10.96.250.216:22: downloading k0s v1.30.1+k0s.0 
INFO [ssh] 10.96.250.211:22: downloading k0s v1.30.1+k0s.0 
INFO [ssh] 10.96.250.208:22: downloading k0s v1.30.1+k0s.0 
INFO ==&amp;gt; Running phase: Install k0s binaries on hosts 
INFO [ssh] 10.96.250.211:22: validating configuration 
INFO ==&amp;gt; Running phase: Configure k0s    
INFO [ssh] 10.96.250.211:22: installing new configuration 
INFO ==&amp;gt; Running phase: Initialize the k0s cluster 
INFO [ssh] 10.96.250.211:22: installing k0s controller 
INFO [ssh] 10.96.250.211:22: waiting for the k0s service to start 
INFO [ssh] 10.96.250.211:22: waiting for kubernetes api to respond 
INFO ==&amp;gt; Running phase: Install workers  
INFO [ssh] 10.96.250.216:22: validating api connection to https://11.12.13.99:6443 
INFO [ssh] 10.96.250.208:22: validating api connection to https://11.12.13.99:6443 
INFO [ssh] 10.96.250.211:22: generating a join token for worker 1 
INFO [ssh] 10.96.250.211:22: generating a join token for worker 2 
INFO [ssh] 10.96.250.216:22: writing join token   
INFO [ssh] 10.96.250.208:22: writing join token   
INFO [ssh] 10.96.250.216:22: installing k0s worker 
INFO [ssh] 10.96.250.208:22: installing k0s worker 
INFO [ssh] 10.96.250.216:22: starting service     
INFO [ssh] 10.96.250.216:22: waiting for node to become ready 
INFO [ssh] 10.96.250.208:22: starting service     
INFO [ssh] 10.96.250.208:22: waiting for node to become ready 
INFO ==&amp;gt; Running phase: Release exclusive host lock 
INFO ==&amp;gt; Running phase: Disconnect from hosts 
INFO ==&amp;gt; Finished in 42s                 
INFO k0s cluster version v1.30.1+k0s.0 is now installed 
INFO Tip: To access the cluster you can now fetch the admin kubeconfig using: 
INFO k0sctl kubeconfig  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kubernetes via k0s is now up and running, accessible through its dedicated kubeconfig file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@instance:~# k0sctl kubeconfig &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; kubeconfig
root@instance:~# kubectl &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kubeconfig cluster-info
Kubernetes control plane is running at https://10.96.250.211:6443
CoreDNS is running at https://10.96.250.211:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use &lt;span class="s1"&gt;'kubectl cluster-info dump'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
root@instance:~# kubectl &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kubeconfig get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k0s-2 Ready &amp;lt;none&amp;gt; 2m41s v1.30.1+k0s 11.12.13.80 &amp;lt;none&amp;gt; Ubuntu 24.04 LTS 6.8.0-31-generic containerd://1.7.17
k0s-3 Ready &amp;lt;none&amp;gt; 2m41s v1.30.1+k0s 11.12.13.185 &amp;lt;none&amp;gt; Ubuntu 24.04 LTS 6.8.0-31-generic containerd://1.7.17

root@instance:~# kubectl &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kubeconfig get po,svc &lt;span class="nt"&gt;-A&lt;/span&gt;
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-6997b8f8bd-957bd 1/1 Running 0 2m41s
kube-system pod/coredns-6997b8f8bd-ddr5t 1/1 Running 0 2m41s
kube-system pod/konnectivity-agent-2bxgl 1/1 Running 0 2m51s
kube-system pod/konnectivity-agent-4gfsw 1/1 Running 0 2m51s
kube-system pod/kube-proxy-2cq5w 1/1 Running 0 2m51s
kube-system pod/kube-proxy-m6rnv 1/1 Running 0 2m51s
kube-system pod/kube-router-p9s4t 1/1 Running 0 2m51s
kube-system pod/kube-router-qhcp4 1/1 Running 0 2m51s
kube-system pod/metrics-server-5cd4986bbc-rf4wc 1/1 Running 0 2m57s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
default service/kubernetes ClusterIP 10.96.0.1 &amp;lt;none&amp;gt; 443/TCP 3m15s
kube-system service/kube-dns ClusterIP 10.96.0.10 &amp;lt;none&amp;gt; 53/UDP,53/TCP,9153/TCP 3m5s
kube-system service/metrics-server ClusterIP 10.97.91.41 &amp;lt;none&amp;gt; 443/TCP 3m1s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
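Rather than repeating `--kubeconfig=kubeconfig` on every call, the file can be exported once for the shell session. A small convenience sketch, assuming the `kubeconfig` file sits in the current directory as in the session above:

```shell
# Point kubectl at the kubeconfig written by `k0sctl kubeconfig`
# for the rest of the shell session (assumes the file is in the
# current working directory, as in the transcript above):
export KUBECONFIG="$PWD/kubeconfig"
echo "KUBECONFIG=$KUBECONFIG"
```

After this, plain `kubectl get nodes` targets the k0s cluster without the extra flag.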



&lt;p&gt;The FC demonstrator, deployed in a single (by now familiar) command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;root@instance:~# cat test.yaml&lt;/span&gt; 
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fcdemo3&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fcdemo3&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fcdemo3&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fcdemo3&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fcdemo3&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mcas/franceconnect-demo2:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fcdemo-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fcdemo3&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;

&lt;span class="s"&gt;root@instance:~# kubectl --kubeconfig=kubeconfig apply -f test.yaml&lt;/span&gt;
&lt;span class="s"&gt;deployment.apps/fcdemo3 created&lt;/span&gt;
&lt;span class="s"&gt;service/fcdemo-service created&lt;/span&gt;

&lt;span class="s"&gt;root@instance:~# kubectl --kubeconfig=kubeconfig get po,svc&lt;/span&gt;
&lt;span class="s"&gt;NAME READY STATUS RESTARTS AGE&lt;/span&gt;
&lt;span class="s"&gt;pod/fcdemo3-85f6bd87c-7jvpk 1/1 Running 0 13s&lt;/span&gt;
&lt;span class="s"&gt;pod/fcdemo3-85f6bd87c-btv9m 1/1 Running 0 13s&lt;/span&gt;
&lt;span class="s"&gt;pod/fcdemo3-85f6bd87c-nlw9x 1/1 Running 0 13s&lt;/span&gt;
&lt;span class="s"&gt;pod/fcdemo3-85f6bd87c-v66bs 1/1 Running 0 13s&lt;/span&gt;

&lt;span class="s"&gt;NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE&lt;/span&gt;
&lt;span class="s"&gt;service/fcdemo-service ClusterIP 10.98.123.24 &amp;lt;none&amp;gt; 80/TCP 13s&lt;/span&gt;
&lt;span class="s"&gt;service/kubernetes ClusterIP 10.96.0.1 &amp;lt;none&amp;gt; 443/TCP 8m19s&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Port forwarding here for local access to the service …&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@instance:~# kubectl &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kubeconfig port-forward service/fcdemo-service 12222:80 &lt;span class="nt"&gt;--address&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'0.0.0.0'&lt;/span&gt;
Forwarding from 0.0.0.0:12222 -&amp;gt; 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
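`kubectl port-forward` only lives as long as the command itself, which is fine for this quick check. For a more persistent exposure on a demo cluster like this one, the Service above could be switched to `NodePort` instead. A sketch, reusing the manifest from `test.yaml`; the `nodePort` value is an arbitrary choice in the default 30000-32767 range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fcdemo-service
spec:
  type: NodePort
  selector:
    app: fcdemo3
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
    nodePort: 30080   # illustrative; any free port in 30000-32767
```

The application then answers on port 30080 of any node, without keeping a `port-forward` process alive.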



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ArSxXqKBliaa96EQ1njsfyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ArSxXqKBliaa96EQ1njsfyg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AIcCUKsL8xABvl4OtHE9AvQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AIcCUKsL8xABvl4OtHE9AvQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Amg3woS3pejwCghRtYJRxFg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Amg3woS3pejwCghRtYJRxFg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AxXzCk76pCz6SZub5y0IbHw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AxXzCk76pCz6SZub5y0IbHw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keeping in mind that Atmosphere already provided Magnum for deploying a built-in Kubernetes cluster …&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A4_SotIS5Dav1vtG8I3nFQw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A4_SotIS5Dav1vtG8I3nFQw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To conclude, Atmosphere is a project that simplifies the deployment of a containerized OpenStack cluster on top of a base Kubernetes cluster (building on OpenStack Helm), an approach also found in the new incarnation of Canonical MicroStack, for example …&lt;/p&gt;

&lt;p&gt;&lt;a href="https://microstack.run/" rel="noopener noreferrer"&gt;OpenStack for the edge, micro clouds and developers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F500%2F0%2ALn7ClaZW7nPlURrI.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F500%2F0%2ALn7ClaZW7nPlURrI.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To be continued!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>openstack</category>
      <category>docker</category>
      <category>phoenixnap</category>
    </item>
    <item>
      <title>Gérer des conteneurs qui ressemblent à des machines virtuelles avec Bootloose (successeur de…</title>
      <dc:creator>Karim</dc:creator>
      <pubDate>Sun, 31 Mar 2024 18:02:05 +0000</pubDate>
      <link>https://forem.com/deep75/gerer-des-conteneurs-qui-ressemblent-a-des-machines-virtuelles-avec-bootloose-successeur-de-p29</link>
      <guid>https://forem.com/deep75/gerer-des-conteneurs-qui-ressemblent-a-des-machines-virtuelles-avec-bootloose-successeur-de-p29</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6VXRiFt5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AmZMaAMzlDcpc4PXduM2mTQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6VXRiFt5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AmZMaAMzlDcpc4PXduM2mTQ.png" alt="" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In several previous articles, I described the use of Footloose, a binary written in Go that creates containers that look like virtual machines.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://deep75.medium.com/weave-ignite-et-weave-footloose-dans-scaleway-quand-les-machines-virtuelles-se-prennent-pour-des-e28f5312a59f"&gt;Weave Ignite et Weave Footloose dans Scaleway : quand les machines virtuelles se prennent pour des…&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://deep75.medium.com/des-clusters-kubernetes-imbriqu%C3%A9s-avec-ignite-firecracker-containerd-kind-et-rancher-34e5e92c257b"&gt;Des clusters Kubernetes imbriqués avec Ignite, Firecracker, Containerd, Kind et Rancher …&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;These containers run systemd as PID 1, along with an ssh daemon that can be used to log into the container. These “machines” behave like VMs; it is even possible to run dockerd inside them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://github.com/weaveworks/footloose"&gt;GitHub - weaveworks/footloose: Container Machines - Containers that look like Virtual Machines&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Footloose can be used for a variety of tasks, anywhere you would like virtual machines but want fast boot times or need many of them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now the team behind k0s has picked up the torch with Bootloose:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/k0sproject/bootloose"&gt;GitHub - k0sproject/bootloose: Manage containers that look like virtual machines&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Built on the same principle …&lt;/p&gt;

&lt;p&gt;An illustration, starting from an Ubuntu 22.04 LTS ARM64 instance launched in Hetzner Cloud:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4lVOShgs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AJm7aslQiOonyc2E6FW31CQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4lVOShgs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AJm7aslQiOonyc2E6FW31CQ.png" alt="" width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;in which I start by installing the Docker engine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@ubuntu-16gb-hel1-1:~# curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://get.docker.com | sh -
&lt;span class="c"&gt;# Executing docker install script, commit: e5543d473431b782227f8908005543bb4389b8de&lt;/span&gt;
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; apt-get update &lt;span class="nt"&gt;-qq&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nv"&gt;DEBIAN_FRONTEND&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;noninteractive apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;-qq&lt;/span&gt; apt-transport-https ca-certificates curl &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 0755 &lt;span class="nt"&gt;-d&lt;/span&gt; /etc/apt/keyrings
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; &lt;span class="s2"&gt;"https://download.docker.com/linux/ubuntu/gpg"&lt;/span&gt; | gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;--yes&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/docker.gpg
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nb"&gt;chmod &lt;/span&gt;a+r /etc/apt/keyrings/docker.gpg
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [arch=arm64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/apt/sources.list.d/docker.list
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; apt-get update &lt;span class="nt"&gt;-qq&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nv"&gt;DEBIAN_FRONTEND&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;noninteractive apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;-qq&lt;/span&gt; docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-ce-rootless-extras docker-buildx-plugin &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null
+ sh &lt;span class="nt"&gt;-c&lt;/span&gt; docker version
Client: Docker Engine - Community
 Version: 26.0.0
 API version: 1.45
 Go version: go1.21.8
 Git commit: 2ae903e
 Built: Wed Mar 20 15:18:14 2024
 OS/Arch: linux/arm64
 Context: default

Server: Docker Engine - Community
 Engine:
  Version: 26.0.0
  API version: 1.45 &lt;span class="o"&gt;(&lt;/span&gt;minimum version 1.24&lt;span class="o"&gt;)&lt;/span&gt;
  Go version: go1.21.8
  Git commit: 8b79278
  Built: Wed Mar 20 15:18:14 2024
  OS/Arch: linux/arm64
  Experimental: &lt;span class="nb"&gt;false
 &lt;/span&gt;containerd:
  Version: 1.6.28
  GitCommit: ae07eda36dd25f8a1b98dfbf587313b99c0190bb
 runc:
  Version: 1.1.12
  GitCommit: v1.1.12-0-g51d5e94
 docker-init:
  Version: 0.19.0
  GitCommit: de40ad0

&lt;span class="o"&gt;================================================================================&lt;/span&gt;

To run Docker as a non-privileged user, consider setting up the
Docker daemon &lt;span class="k"&gt;in &lt;/span&gt;rootless mode &lt;span class="k"&gt;for &lt;/span&gt;your user:

    dockerd-rootless-setuptool.sh &lt;span class="nb"&gt;install

&lt;/span&gt;Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.

To run the Docker daemon as a fully privileged service, but granting non-root
&lt;span class="nb"&gt;users &lt;/span&gt;access, refer to https://docs.docker.com/go/daemon-access/

WARNING: Access to the remote API on a privileged Docker daemon is equivalent
         to root access on the host. Refer to the &lt;span class="s1"&gt;'Docker daemon attack surface'&lt;/span&gt;
         documentation &lt;span class="k"&gt;for &lt;/span&gt;details: https://docs.docker.com/go/attack-surface/

&lt;span class="o"&gt;================================================================================&lt;/span&gt;

root@ubuntu-16gb-hel1-1:~# docker ps &lt;span class="nt"&gt;-a&lt;/span&gt;
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then fetching the Bootloose binary from GitHub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@ubuntu-16gb-hel1-1:~# wget &lt;span class="nt"&gt;-c&lt;/span&gt; https://github.com/k0sproject/bootloose/releases/download/v0.7.3/bootloose-linux-arm64
HTTP request sent, awaiting response... 200 OK
Length: 6291456 &lt;span class="o"&gt;(&lt;/span&gt;6.0M&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;application/octet-stream]
Saving to: ‘bootloose-linux-arm64’

bootloose-linux-arm64 100%[&lt;span class="o"&gt;=====================================================================================================&amp;gt;]&lt;/span&gt; 6.00M 21.8MB/s &lt;span class="k"&gt;in &lt;/span&gt;0.3s    

&lt;span class="o"&gt;(&lt;/span&gt;21.8 MB/s&lt;span class="o"&gt;)&lt;/span&gt; - ‘bootloose-linux-arm64’ saved &lt;span class="o"&gt;[&lt;/span&gt;6291456/6291456]

root@ubuntu-16gb-hel1-1:~# &lt;span class="nb"&gt;chmod&lt;/span&gt; +x bootloose-linux-arm64 
root@ubuntu-16gb-hel1-1:~# &lt;span class="nb"&gt;mv &lt;/span&gt;bootloose-linux-arm64 /usr/local/bin/bootloose
root@ubuntu-16gb-hel1-1:~# bootloose
bootloose - Container Machines

Usage:
  bootloose &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;command&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;

Available Commands:
  completion Generate the autocompletion script &lt;span class="k"&gt;for &lt;/span&gt;the specified shell
  config Manage cluster configuration
  create Create a cluster
  delete Delete a cluster
  &lt;span class="nb"&gt;help &lt;/span&gt;Help about any &lt;span class="nb"&gt;command
  &lt;/span&gt;show Show all running machines or a single machine with a given hostname.
  ssh SSH into a machine
  start Start cluster machines
  stop Stop cluster machines
  version Print bootloose version

Flags:
  &lt;span class="nt"&gt;-c&lt;/span&gt;, &lt;span class="nt"&gt;--config&lt;/span&gt; string Cluster configuration file &lt;span class="o"&gt;(&lt;/span&gt;default &lt;span class="s2"&gt;"bootloose.yaml"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="nt"&gt;-h&lt;/span&gt;, &lt;span class="nt"&gt;--help&lt;/span&gt; &lt;span class="nb"&gt;help &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;bootloose

Use &lt;span class="s2"&gt;"bootloose [command] --help"&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;more information about a command.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/k0sproject/bootloose/releases"&gt;Releases · k0sproject/bootloose&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And creating the YAML configuration file to spin up my instances with this image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sERJrHo4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AXGW6u6coU1MRcG_CdRQDjA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sERJrHo4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AXGW6u6coU1MRcG_CdRQDjA.png" alt="" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v3rcQxSx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/700/0%2AFi6gpjp-YY3DKdME.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v3rcQxSx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/700/0%2AFi6gpjp-YY3DKdME.jpg" alt="" width="700" height="394"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@ubuntu-16gb-hel1-1:~# bootloose config create &lt;span class="nt"&gt;--replicas&lt;/span&gt; 3 &lt;span class="nt"&gt;--image&lt;/span&gt; quay.io/k0sproject/bootloose-ubuntu22.04:latest

root@ubuntu-16gb-hel1-1:~# &lt;span class="nb"&gt;cat &lt;/span&gt;bootloose.yaml 
cluster:
  name: cluster
  privateKey: ~/.ssh/id_rsa
machines:
- count: 3
  spec:
    image: quay.io/k0sproject/bootloose-ubuntu22.04:latest
    name: node%d
    portMappings:
    - containerPort: 22
    privileged: &lt;span class="nb"&gt;true
    &lt;/span&gt;volumes:
    - &lt;span class="nb"&gt;type&lt;/span&gt;: volume
      destination: /var/lib/k0s

root@ubuntu-16gb-hel1-1:~# bootloose create
INFO[0000] Docker Image: quay.io/k0sproject/bootloose-ubuntu22.04:latest present locally 
INFO[0000] Creating machine: cluster-node0 ...          
INFO[0000] Creating machine: cluster-node1 ...          
INFO[0000] Creating machine: cluster-node2 ...  

root@ubuntu-16gb-hel1-1:~# docker ps &lt;span class="nt"&gt;-a&lt;/span&gt;
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f452da977dd6 quay.io/k0sproject/bootloose-ubuntu22.04:latest &lt;span class="s2"&gt;"/sbin/init"&lt;/span&gt; About a minute ago Up About a minute 0.0.0.0:32770-&amp;gt;22/tcp, :::32770-&amp;gt;22/tcp cluster-node2
790a6594d4a3 quay.io/k0sproject/bootloose-ubuntu22.04:latest &lt;span class="s2"&gt;"/sbin/init"&lt;/span&gt; About a minute ago Up About a minute 0.0.0.0:32769-&amp;gt;22/tcp, :::32769-&amp;gt;22/tcp cluster-node1
c674c7d3551f quay.io/k0sproject/bootloose-ubuntu22.04:latest &lt;span class="s2"&gt;"/sbin/init"&lt;/span&gt; About a minute ago Up About a minute 0.0.0.0:32768-&amp;gt;22/tcp, :::32768-&amp;gt;22/tcp cluster-node0

root@ubuntu-16gb-hel1-1:~# docker inspect &lt;span class="si"&gt;$(&lt;/span&gt;docker ps &lt;span class="nt"&gt;-aq&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;IPAddress
            &lt;span class="s2"&gt;"SecondaryIPAddresses"&lt;/span&gt;: null,
            &lt;span class="s2"&gt;"IPAddress"&lt;/span&gt;: &lt;span class="s2"&gt;"172.17.0.4"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"IPAddress"&lt;/span&gt;: &lt;span class="s2"&gt;"172.17.0.4"&lt;/span&gt;,
            &lt;span class="s2"&gt;"SecondaryIPAddresses"&lt;/span&gt;: null,
            &lt;span class="s2"&gt;"IPAddress"&lt;/span&gt;: &lt;span class="s2"&gt;"172.17.0.3"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"IPAddress"&lt;/span&gt;: &lt;span class="s2"&gt;"172.17.0.3"&lt;/span&gt;,
            &lt;span class="s2"&gt;"SecondaryIPAddresses"&lt;/span&gt;: null,
            &lt;span class="s2"&gt;"IPAddress"&lt;/span&gt;: &lt;span class="s2"&gt;"172.17.0.2"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"IPAddress"&lt;/span&gt;: &lt;span class="s2"&gt;"172.17.0.2"&lt;/span&gt;,

root@ubuntu-16gb-hel1-1:~# &lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;2..4&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;ssh root@172.17.0.&lt;span class="nv"&gt;$i&lt;/span&gt; &lt;span class="s1"&gt;'hostname'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done
&lt;/span&gt;Warning: Permanently added &lt;span class="s1"&gt;'172.17.0.2'&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;ED25519&lt;span class="o"&gt;)&lt;/span&gt; to the list of known hosts.
node0
Warning: Permanently added &lt;span class="s1"&gt;'172.17.0.3'&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;ED25519&lt;span class="o"&gt;)&lt;/span&gt; to the list of known hosts.
node1
Warning: Permanently added &lt;span class="s1"&gt;'172.17.0.4'&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;ED25519&lt;span class="o"&gt;)&lt;/span&gt; to the list of known hosts.
node2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
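The `docker inspect | grep IPAddress` detour works, but Bootloose also ships an `ssh` subcommand (listed in its help output above) that resolves machine names directly. A sketch of the same hostname check, using the machine names from `bootloose.yaml`; a stub is defined so the snippet can be exercised outside the Hetzner host:

```shell
# SSH into each bootloose machine by name instead of by container IP.
# Stub stands in for the real binary when running this snippet outside
# the host (assumption: machine names node0..node2 from bootloose.yaml):
command -v bootloose >/dev/null 2>&1 || bootloose() { shift; printf '%s\n' "${1#root@}"; }

for n in node0 node1 node2; do
  bootloose ssh "root@${n}" hostname
done
```

On the real host this avoids hardcoding the 172.17.0.x addresses entirely.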



&lt;p&gt;The instances are now reachable over SSH, and I will use k0sctl to build a Kubernetes cluster on them via k0s:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/k0sproject/k0sctl"&gt;GitHub - k0sproject/k0sctl: A bootstrapping and management tool for k0s clusters.&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@ubuntu-16gb-hel1-1:~# &lt;span class="nb"&gt;chmod&lt;/span&gt; +x k0sctl-linux-arm64 
root@ubuntu-16gb-hel1-1:~# &lt;span class="nb"&gt;mv &lt;/span&gt;k0sctl-linux-arm64 /usr/local/bin/k0sctl
root@ubuntu-16gb-hel1-1:~# k0sctl
NAME:
   k0sctl - k0s cluster management tool

USAGE:
   k0sctl &lt;span class="o"&gt;[&lt;/span&gt;global options] &lt;span class="nb"&gt;command&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;command &lt;/span&gt;options] 

COMMANDS:
   version Output k0sctl version
   apply Apply a k0sctl configuration
   kubeconfig Output the admin kubeconfig of the cluster
   init Create a configuration template
   reset Remove traces of k0s from all of the hosts
   backup Take backup of existing clusters state
   config Configuration related sub-commands
   completion  
   &lt;span class="nb"&gt;help&lt;/span&gt;, h Shows a list of commands or &lt;span class="nb"&gt;help &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;one &lt;span class="nb"&gt;command

&lt;/span&gt;GLOBAL OPTIONS:
   &lt;span class="nt"&gt;--debug&lt;/span&gt;, &lt;span class="nt"&gt;-d&lt;/span&gt; Enable debug logging &lt;span class="o"&gt;(&lt;/span&gt;default: &lt;span class="nb"&gt;false&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;$DEBUG&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
   &lt;span class="nt"&gt;--trace&lt;/span&gt; Enable trace logging &lt;span class="o"&gt;(&lt;/span&gt;default: &lt;span class="nb"&gt;false&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;$TRACE&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
   &lt;span class="nt"&gt;--no-redact&lt;/span&gt; Do not hide sensitive information &lt;span class="k"&gt;in &lt;/span&gt;the output &lt;span class="o"&gt;(&lt;/span&gt;default: &lt;span class="nb"&gt;false&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
   &lt;span class="nt"&gt;--help&lt;/span&gt;, &lt;span class="nt"&gt;-h&lt;/span&gt; show &lt;span class="nb"&gt;help&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initializing the cluster on the Bootloose machines with this YAML configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@ubuntu-16gb-hel1-1:~# k0sctl init &lt;span class="nt"&gt;--k0s&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; k0sctl.yaml

root@ubuntu-16gb-hel1-1:~# &lt;span class="nb"&gt;cat &lt;/span&gt;k0sctl.yaml 
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 172.17.0.2
      user: root
      port: 22
      keyPath: /root/.ssh/id_rsa
    role: controller
  - ssh:
      address: 172.17.0.3
      user: root
      port: 22
      keyPath: /root/.ssh/id_rsa
    role: worker
  - ssh:
      address: 172.17.0.4
      user: root
      port: 22
      keyPath: /root/.ssh/id_rsa
    role: worker
  k0s:
    version: null
    versionChannel: stable
    dynamicConfig: &lt;span class="nb"&gt;false
    &lt;/span&gt;config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: Cluster
      metadata:
        name: k0s
      spec:
        api:
          k0sApiPort: 9443
          port: 6443
        installConfig:
          &lt;span class="nb"&gt;users&lt;/span&gt;:
            etcdUser: etcd
            kineUser: kube-apiserver
            konnectivityUser: konnectivity-server
            kubeAPIserverUser: kube-apiserver
            kubeSchedulerUser: kube-scheduler
        konnectivity:
          adminPort: 8133
          agentPort: 8132
        network:
          kubeProxy:
            disabled: &lt;span class="nb"&gt;false
            &lt;/span&gt;mode: iptables
          kuberouter:
            autoMTU: &lt;span class="nb"&gt;true
            &lt;/span&gt;mtu: 0
            peerRouterASNs: &lt;span class="s2"&gt;""&lt;/span&gt;
            peerRouterIPs: &lt;span class="s2"&gt;""&lt;/span&gt;
          podCIDR: 10.244.0.0/16
          provider: kuberouter
          serviceCIDR: 10.96.0.0/12
        podSecurityPolicy:
          defaultPolicy: 00-k0s-privileged
        storage:
          &lt;span class="nb"&gt;type&lt;/span&gt;: etcd
        telemetry:
          enabled: &lt;span class="nb"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
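&lt;p&gt;Before applying, a quick sanity check on the generated file can be useful (a suggestion, not part of the original workflow): the configuration above declares one controller and two workers, which a simple grep can confirm.&lt;/p&gt;

```shell
# Count the declared roles in the generated k0sctl.yaml:
# one controller and two workers are expected here.
grep -c 'role: controller' k0sctl.yaml   # expect 1
grep -c 'role: worker' k0sctl.yaml       # expect 2
```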



&lt;p&gt;Launching the creation of the k0s cluster with this configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@ubuntu-16gb-hel1-1:~# k0sctl apply &lt;span class="nt"&gt;--config&lt;/span&gt; k0sctl.yaml 

⠀⣿⣿⡇⠀⠀⢀⣴⣾⣿⠟⠁⢸⣿⣿⣿⣿⣿⣿⣿⡿⠛⠁⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀█████████ █████████ ███
⠀⣿⣿⡇⣠⣶⣿⡿⠋⠀⠀⠀⢸⣿⡇⠀⠀⠀⣠⠀⠀⢀⣠⡆⢸⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀███ ███ ███
⠀⣿⣿⣿⣿⣟⠋⠀⠀⠀⠀⠀⢸⣿⡇⠀⢰⣾⣿⠀⠀⣿⣿⡇⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀███ ███ ███
⠀⣿⣿⡏⠻⣿⣷⣤⡀⠀⠀⠀⠸⠛⠁⠀⠸⠋⠁⠀⠀⣿⣿⡇⠈⠉⠉⠉⠉⠉⠉⠉⠉⢹⣿⣿⠀███ ███ ███
⠀⣿⣿⡇⠀⠀⠙⢿⣿⣦⣀⠀⠀⠀⣠⣶⣶⣶⣶⣶⣶⣿⣿⡇⢰⣶⣶⣶⣶⣶⣶⣶⣶⣾⣿⣿⠀█████████ ███ ██████████
k0sctl v0.17.5 Copyright 2023, k0sctl authors.
Anonymized telemetry of usage will be sent to the authors.
By continuing to use k0sctl you agree to these terms:
https://k0sproject.io/licenses/eula
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Set k0s version  
INFO Looking up latest stable k0s version         
INFO Using k0s version v1.29.2+k0s.0              
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Connect to hosts 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.4:22: connected               
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.3:22: connected               
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.2:22: connected               
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Detect host operating systems 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.4:22: is running Ubuntu 22.04.3 LTS 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.2:22: is running Ubuntu 22.04.3 LTS 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.3:22: is running Ubuntu 22.04.3 LTS 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Acquire exclusive host lock 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Prepare hosts    
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.2:22: is a container, applying a fix 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.3:22: is a container, applying a fix 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.4:22: is a container, applying a fix 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Gather host facts 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.4:22: using node2 as &lt;span class="nb"&gt;hostname 
&lt;/span&gt;INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.3:22: using node1 as &lt;span class="nb"&gt;hostname 
&lt;/span&gt;INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.2:22: using node0 as &lt;span class="nb"&gt;hostname 
&lt;/span&gt;INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.4:22: discovered eth0 as private interface 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.3:22: discovered eth0 as private interface 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.2:22: discovered eth0 as private interface 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Validate hosts   
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Validate facts   
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Download k0s on hosts 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.4:22: downloading k0s v1.29.2+k0s.0 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.3:22: downloading k0s v1.29.2+k0s.0 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.2:22: downloading k0s v1.29.2+k0s.0 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Install k0s binaries on hosts 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.2:22: validating configuration 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Configure k0s    
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.2:22: installing new configuration 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Initialize the k0s cluster 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.2:22: installing k0s controller 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.2:22: waiting &lt;span class="k"&gt;for &lt;/span&gt;the k0s service to start 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.2:22: waiting &lt;span class="k"&gt;for &lt;/span&gt;kubernetes api to respond 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Install workers  
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.3:22: validating api connection to https://172.17.0.2:6443 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.4:22: validating api connection to https://172.17.0.2:6443 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.2:22: generating a &lt;span class="nb"&gt;join &lt;/span&gt;token &lt;span class="k"&gt;for &lt;/span&gt;worker 1 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.2:22: generating a &lt;span class="nb"&gt;join &lt;/span&gt;token &lt;span class="k"&gt;for &lt;/span&gt;worker 2 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.4:22: writing &lt;span class="nb"&gt;join &lt;/span&gt;token      
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.3:22: writing &lt;span class="nb"&gt;join &lt;/span&gt;token      
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.3:22: installing k0s worker   
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.4:22: installing k0s worker   
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.4:22: starting service        
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.3:22: starting service        
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.4:22: waiting &lt;span class="k"&gt;for &lt;/span&gt;node to become ready 
INFO &lt;span class="o"&gt;[&lt;/span&gt;ssh] 172.17.0.3:22: waiting &lt;span class="k"&gt;for &lt;/span&gt;node to become ready 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Release exclusive host lock 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running phase: Disconnect from hosts 
INFO &lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Finished &lt;span class="k"&gt;in &lt;/span&gt;48s                 
INFO k0s cluster version v1.29.2+k0s.0 is now installed 
INFO Tip: To access the cluster you can now fetch the admin kubeconfig using: 
INFO k0sctl kubeconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I can then retrieve the kubeconfig file for local use with the kubectl client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@ubuntu-16gb-hel1-1:~# &lt;span class="nb"&gt;mkdir&lt;/span&gt; .kube
root@ubuntu-16gb-hel1-1:~# k0sctl kubeconfig &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; .kube/config
root@ubuntu-16gb-hel1-1:~# &lt;span class="nb"&gt;cat&lt;/span&gt; .kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: &lt;span class="nv"&gt;LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURBRENDQWVpZ0F3SUJBZ0lVSnhLUU9xRlVzZ0NnY0FiTWNDdEFSTXlmNDFVd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0dERVdNQlFHQTFVRUF4TU5hM1ZpWlhKdVpYUmxjeTFqWVRBZUZ3MHlOREF6TXpFeE5qQXhNREJhRncwegpOREF6TWpreE5qQXhNREJhTUJneEZqQVVCZ05WQkFNVERXdDFZbVZ5Ym1WMFpYTXRZMkV3Z2dFaU1BMEdDU3FHClNJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURXM3V4cVZtMUdMWWNJWlozMDIrajcwTXQ3Y1Y1Zkw3ZmYKcDNwaURhUVk1d1RGMHVmMEVWcDZqQXV5TVpNaGtWTzlSSmo2TGYyb0xNcVV1V0ZqVzFKdVZUVHY5U0JDc1FhVApudVpxVktyWnJ0Nkt5bm1zaVBuWTlMZW5wWklkdzA4NUZnOHdWdGFlclVmOUlaaUMrSEZPL1grdjlIOUFySlJJCjVNby80dzBaZEJjdXVUcEREVzZpcTNNMjF2b1pCR3Jwbk43TkcvWUpLcGNwM2xlbVV5d1JHL0dEMisyTlRML08KdEx5cmF4TDVWVkVYbENEbTFlTXJZOXBzSjNQcDJySmE0RmxyMGRVVFJRUHcycjhBUUxyTFhJUmFBYXpEV3lBLwpKR1NKZG81YUdrL3NxM2l3clRJb3c0UmlwRGRQWmV2UkdZSlRLaWZWOStRYk14bkVuUEZSQWdNQkFBR2pRakJBCk1BNEdBMVVkRHdFQi93UUVBd0lCQmpBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJScFl0V2MKMERrSlRaTDFxR0tkK284N1ZndTFBakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBbWtGVTBYam14ZWRPMDFtegozWU9QS3hBeTg2N1NpMlM0WTRMcDNqL3NCcW9TWVlkYldzS3c2Wmx1aTdra2VYUEMvM0pJRTd4YjNoL0p6WWxvCnRZMEliOEpnUnZvVk1sV0ZHNitxMnNnZEo3cmx6blg1QXBQZkRqUXY5RUJHR3VXM21IY1UxbXVCUStrUG9JK2cKaU5OZzFITjRoYllYMVJOMitOK3pKcVNVaWh1WFZDbVhFa2YwQ05WU3VyTEZ6cGNKWWJpNWowVVcwYmRhVWFpTwpJRDJOd3VZcUhDNS96RDM0RDB6T1RVMzRtUTVsZDRUYUx4ZmlwRzZrWDVVSEJtT0U5d3lIbVZuY2VzelJ0alZTCnlFblNJV0g1cmM5Sk84RVBXLzVPRTJsb2E3a240U05iQU5NbUNFaVZtZE9IdU5NSHFGaUQxR2oxOWI5MzVpSnkKUnpGS2hBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
    server: https://172.17.0.2:6443
  name: k0s-cluster
contexts:
- context:
    cluster: k0s-cluster
    user: admin
  name: k0s-cluster
current-context: k0s-cluster
kind: Config
preferences: &lt;span class="o"&gt;{}&lt;/span&gt;
&lt;span class="nb"&gt;users&lt;/span&gt;:
- name: admin
  user:
    client-certificate-data: &lt;span class="nv"&gt;LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURXVENDQWtHZ0F3SUJBZ0lVY3lhN3dQUGVIR0c1eUM3WWdUV1FCZnlNM2lJd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0dERVdNQlFHQTFVRUF4TU5hM1ZpWlhKdVpYUmxjeTFqWVRBZUZ3MHlOREF6TXpFeE5qQXhNREJhRncweQpOVEF6TXpFeE5qQXhNREJhTURReEZ6QVZCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVJrd0Z3WURWUVFECkV4QnJkV0psY201bGRHVnpMV0ZrYldsdU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0MKQVFFQTFrbjJONVorWEt0MW50N3A1QSt2YWQ5OU9DQ25DQ1FKN1JDeXUzTlRaZXlPWDlwK2lRNzBqaC9mWTZjagovNElFMDVDQlJNZzU1cGtVSWszc0k5anJqNFhBcmRaSnQ1QTFUbDZFY0tuWEQ2NzJyWFV4bVlsRG9FK3p6SitkClc1VnVkc0JmbXVqRnluTGJyT2xDYzZEMDlOWmo5b24zRVlGK0dhcXFBTG5wOU9mNENkaFdLL3hYcWVabFBPaFMKZ2RLdERYVzRXYjlEVEZpbytXQ2RObmR6NWd5SUF5OTM3akRzUG9ESEl2OXdEMkVkd2JSVzNMUERjRjhFaFpKWQpUcUV1Z2F0bzB6ek1vaVZzdkptNVZSU0ZjS2h6S1NkZW1obnFyaTBLWXFFTG9Oa2QwcnFkbFF2cnhCSUlvd1E1Cnl4TWhnOHhRb3pDSUtVZXY5SVhoVTlLT2VRSURBUUFCbzM4d2ZUQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWUQKVlIwbEJCWXdGQVlJS3dZQkJRVUhBd0VHQ0NzR0FRVUZCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwTwpCQllFRk5SbjV4ek1qVXZSeFVtRXdmbzBLVm45cFgzME1COEdBMVVkSXdRWU1CYUFGR2xpMVp6UU9RbE5rdldvCllwMzZqenRXQzdVQ01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjRqU29XNmpnZmRkMlFGM0FRNWxJckVSMFkKQVJSV1RtNTdtTkZPM041OE1PZXR1UDAzOWxvSnIrbDY4aXVQcEUyd2VoL1hTNGxrM3B6dSszSS9IVDNlNW1PcwpiU0pGMXZWbHlaZnZTd1BveEMxcVBPakR3R28xQW1xUUpYRzhjc1E1WWVPR3lVN3BjdjY0KzRrL2xXQmovSEliCmQ4ZnVkMzh4cUo5SUV5SXdBU0lVenhVbllPZ1ZyOWlzRGtxTlk4czBtc3loQXpFYkNQdDQ2NFhiU0pqWG40aSsKOTFVK2FFZG1BMHN5Mi9jc2h6TFZCYzlOanJER1Yya0ZTL2hQNU9YZ0ZycEpEcnNnRVVOcjRudzNRaEl0RkJvdAp4Q2VXYkxTdXFPWTJjcnVvU1hWdmMrdFkyV0xWZ0RQVDRkZ3E3UGhhZm5oTUJkNmxyME55WFNuVzdOWFQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
    client-key-data: &lt;span class="nv"&gt;LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBMWtuMk41WitYS3QxbnQ3cDVBK3ZhZDk5T0NDbkNDUUo3UkN5dTNOVFpleU9YOXArCmlRNzBqaC9mWTZjai80SUUwNUNCUk1nNTVwa1VJazNzSTlqcmo0WEFyZFpKdDVBMVRsNkVjS25YRDY3MnJYVXgKbVlsRG9FK3p6SitkVzVWdWRzQmZtdWpGeW5MYnJPbENjNkQwOU5aajlvbjNFWUYrR2FxcUFMbnA5T2Y0Q2RoVwpLL3hYcWVabFBPaFNnZEt0RFhXNFdiOURURmlvK1dDZE5uZHo1Z3lJQXk5MzdqRHNQb0RISXY5d0QyRWR3YlJXCjNMUERjRjhFaFpKWVRxRXVnYXRvMHp6TW9pVnN2Sm01VlJTRmNLaHpLU2RlbWhucXJpMEtZcUVMb05rZDBycWQKbFF2cnhCSUlvd1E1eXhNaGc4eFFvekNJS1VldjlJWGhVOUtPZVFJREFRQUJBb0lCQUFhR3dpVDNSR24ySHVMegp6eFBQRm55Vy9lMVRzVUtpTmxzdUF3T0tnNk83REtzR3NJdmtGTGF2YWRKVEtObURVRHBSVUY2VDZvK0hZZ0daCmRmT3hpNXNYYThMZm4rY2pVVHhOektMUnlXY0U0U1p2UjA5eHlzbDdJL0s3ZWNOc1RhejROdkUwM2JGSXhrQUIKNnJBeTJzTUtOSWt4c29DcC9Qa3pKWEpZTnpQcVBtL05Kb3hja1h5KzVXRUxiY25Od2lxUUZzU09zT21ldEZ3YQpYcTZrSkVYRzIwVGpvQjg0SlFsNldNN0M5TFp0TWUrcnpsRGdsZFl1ZU92M1FWSE9FU0d2T0VuTFh3TEVSdkg3CmpPbUs4L040dU53L0NpTXJlUjh0WWR2dVBQNVA2Z1V5OUdmdXc2bkM4SzlTOE9CSVZmOWpxRjlsdG9wZndhNVMKRmdCd1BnRUNnWUVBNEI4T0hVZmp1ajR1amNyU01RYm0xQzRhN0QxckJ3YmxuY2RuNFlSYndpKzhOdmhEcGt5SQowMjBod1czRWpUTGJTSTB5UGJvNnQ4MWJEZk1xcjdudm1CblBZTDN2T1lmY3hDRklMcWJwSHd3M1h3aTBsM1BuCmhKVkZyaGJZckthSlVNcDZXVW9zTGdpVDE3MWx2ZnRRZFd5WGpWaGhLQ25SVTJSTExIOEg5NUVDZ1lFQTlNVGgKQWMyaTVSTnRzWDFFOTBFQjZUK0Rxekd3WkQ5WG1YVFQ4M3hrSzVENWV5UzIyUFpGa0NVU3pTSUZtVkpHSzh1ZwpuL2tQQTVZbWMzQUNiWEptVHlDQStSTkxhMU5TTDhGNXN4RzJ3Ris4OXNLVDBhT3VnNE5Ld1RHYkd6QVFKSjN0CitndmcwYXhJQW04b2RrU1RMbTg3K2N0UDVWK2QyNGNHVmh4VHhHa0NnWUVBaE5uancvZVpSZzBXQzNidW9hRTEKc3hDaFpPZ0RTV2NOTlRtK21pK2JOTUNYRVA2Wkd6ckM0SkVRTVZpZjZoTDdhVVpKUWMzaWdKRjZLQXE4Z3UzMAoySFIwT1NSZGFmemZJR09hSmcxS290emE5YnB6VWxPaUtUVlMySjh5VVNWbXdEMUZ5U2Z1aUZzTlNCVTgraUMrCjBOeE4rYnNwM0dUdGNFRkRUbHorbkFFQ2dZQXBOTS9RZHViNmU2Zlczb2p5dXgzd1A0SVNHdjVnRWczVVJzZTcKMFBBb0tYTG1tVXF6QWRxNkpwT2d0eDZNTGo2ajl0Ym10NDRnZzNHYnMzcWxKRUkzQmZqUWRjQVhwR0pNcWR5cwpHY3BUWG9xNFhBOTRsbjYxb3krOWtIVlZRV1VtVlBRUVNWbWkwc2NZcWMvOUFSUnFGODNZQXJORG9USTVGK1VvCk1BS21L
UUtCZ1FDZ0tSOEpYMkpOakR6QjVWd3lwdmo3SDdFNFV4Q1FBVXdZZ1JVZ0VCUThZZElPUitSWTFuREEKUWRLd1NOaWxodWxrTjFERFVCZmZqUHhJWndTNExJZEkyN0lqTHlqMFUvbDFrMkludU94OFdHck1hUnp6WXRGcgpTTjhQV2x6eERZTHNHaVpyR2MzVXk0Znd6Ty9lV2xFUFpod0NRVk5lTFpHcFRCYVpvbUIxQkE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;

root@ubuntu-16gb-hel1-1:~# curl &lt;span class="nt"&gt;-LO&lt;/span&gt; https://dl.k8s.io/release/v1.29.2/bin/linux/arm64/kubectl &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chmod&lt;/span&gt; +x kubectl &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;mv &lt;/span&gt;kubectl /usr/local/bin/

root@ubuntu-16gb-hel1-1:~# &lt;span class="nb"&gt;chmod &lt;/span&gt;400 .kube/config
root@ubuntu-16gb-hel1-1:~# kubectl cluster-info
Kubernetes control plane is running at https://172.17.0.2:6443
CoreDNS is running at https://172.17.0.2:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use &lt;span class="s1"&gt;'kubectl cluster-info dump'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
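&lt;p&gt;As a side note, the embedded certificates can be inspected directly from the kubeconfig. A sketch, assuming openssl is installed, that decodes the CA certificate and prints its subject and validity period:&lt;/p&gt;

```shell
# Extract the base64-encoded CA certificate from .kube/config,
# decode it, and show who issued it and when it expires.
grep 'certificate-authority-data' .kube/config \
  | awk '{print $2}' \
  | base64 -d \
  | openssl x509 -noout -subject -dates
```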



&lt;p&gt;The cluster is then up and running with two worker nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@ubuntu-16gb-hel1-1:~# kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node1 Ready &amp;lt;none&amp;gt; 4m58s v1.29.2+k0s 172.17.0.3 &amp;lt;none&amp;gt; Ubuntu 22.04.3 LTS 5.15.0-100-generic containerd://1.7.13
node2 Ready &amp;lt;none&amp;gt; 4m58s v1.29.2+k0s 172.17.0.4 &amp;lt;none&amp;gt; Ubuntu 22.04.3 LTS 5.15.0-100-generic containerd://1.7.13

root@ubuntu-16gb-hel1-1:~# kubectl get po,svc &lt;span class="nt"&gt;-A&lt;/span&gt;
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-555d98c87b-7dxxh 1/1 Running 0 5m
kube-system pod/coredns-555d98c87b-9ngzm 1/1 Running 0 5m
kube-system pod/konnectivity-agent-4fb6m 1/1 Running 0 5m5s
kube-system pod/konnectivity-agent-kj99g 1/1 Running 0 5m5s
kube-system pod/kube-proxy-6pbnw 1/1 Running 0 5m5s
kube-system pod/kube-proxy-xg4mh 1/1 Running 0 5m5s
kube-system pod/kube-router-tvzsb 1/1 Running 0 5m5s
kube-system pod/kube-router-wt97k 1/1 Running 0 5m5s
kube-system pod/metrics-server-7556957bb7-qtgd2 1/1 Running 0 5m6s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
default service/kubernetes ClusterIP 10.96.0.1 &amp;lt;none&amp;gt; 443/TCP 5m23s
kube-system service/kube-dns ClusterIP 10.96.0.10 &amp;lt;none&amp;gt; 53/UDP,53/TCP,9153/TCP 5m14s
kube-system service/metrics-server ClusterIP 10.100.127.105 &amp;lt;none&amp;gt; 443/TCP 5m10s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I will use the Docker image from the Kasmweb project to get an online desktop inside this cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://hub.docker.com/r/kasmweb/ubuntu-jammy-dind"&gt;Docker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kasmweb.com/docs/latest/guide/custom_images.html#rolling-docker-images"&gt;Default Docker Images - Kasm 1.15.0 documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D8EMkyAQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ANS8iJRg5CP2j1kb2h_2g8A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D8EMkyAQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ANS8iJRg5CP2j1kb2h_2g8A.png" alt="" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To do this, the formula given on Docker Hub is converted into a docker-compose YAML file via Composerize:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.composerize.com/"&gt;Composerize&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Vzlu-uuv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AaZ-mIuYCwR7DN3lf-Emd7Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Vzlu-uuv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AaZ-mIuYCwR7DN3lf-Emd7Q.png" alt="" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then conversion into YAML manifests via Kompose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@ubuntu-16gb-hel1-1:~# &lt;span class="nb"&gt;cat &lt;/span&gt;docker-compose.yml 
name: ubuntu
services:
    ubuntu-jammy-dind:
        stdin_open: &lt;span class="nb"&gt;true
        tty&lt;/span&gt;: &lt;span class="nb"&gt;true
        &lt;/span&gt;shm_size: 512m
        ports:
            - 6901:6901
        environment:
            - &lt;span class="nv"&gt;VNC_PW&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;password
        image: kasmweb/ubuntu-jammy-dind:1.14.0-rolling

root@ubuntu-16gb-hel1-1:~# ./kompose convert
INFO Kubernetes file &lt;span class="s2"&gt;"ubuntu-jammy-dind-service.yaml"&lt;/span&gt; created 
INFO Kubernetes file &lt;span class="s2"&gt;"ubuntu-jammy-dind-deployment.yaml"&lt;/span&gt; created 

root@ubuntu-16gb-hel1-1:~# &lt;span class="nb"&gt;cat &lt;/span&gt;ubuntu-jammy-dind-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.32.0 &lt;span class="o"&gt;(&lt;/span&gt;765fde254&lt;span class="o"&gt;)&lt;/span&gt;
  labels:
    io.kompose.service: ubuntu-jammy-dind
  name: ubuntu-jammy-dind
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: ubuntu-jammy-dind
  template:
    metadata:
      annotations:
        kompose.cmd: ./kompose convert
        kompose.version: 1.32.0 &lt;span class="o"&gt;(&lt;/span&gt;765fde254&lt;span class="o"&gt;)&lt;/span&gt;
      labels:
        io.kompose.network/ubuntu-default: &lt;span class="s2"&gt;"true"&lt;/span&gt;
        io.kompose.service: ubuntu-jammy-dind
    spec:
      containers:
        - &lt;span class="nb"&gt;env&lt;/span&gt;:
            - name: VNC_PW
              value: password
          image: kasmweb/ubuntu-jammy-dind:1.14.0-rolling
          name: ubuntu-jammy-dind
          ports:
            - containerPort: 6901
              hostPort: 6901
              protocol: TCP
          stdin: &lt;span class="nb"&gt;true
          tty&lt;/span&gt;: &lt;span class="nb"&gt;true
      &lt;/span&gt;restartPolicy: Always

root@ubuntu-16gb-hel1-1:~# &lt;span class="nb"&gt;cat &lt;/span&gt;ubuntu-jammy-dind-service.yaml 
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.32.0 &lt;span class="o"&gt;(&lt;/span&gt;765fde254&lt;span class="o"&gt;)&lt;/span&gt;
  labels:
    io.kompose.service: ubuntu-jammy-dind
  name: ubuntu-jammy-dind
spec:
  ports:
    - name: &lt;span class="s2"&gt;"6901"&lt;/span&gt;
      port: 6901
      targetPort: 6901
  selector:
    io.kompose.service: ubuntu-jammy-dind
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
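&lt;p&gt;Note that Kompose carries the published port over as a &lt;code&gt;hostPort&lt;/code&gt;, which ties the pod to that port on its node. A more cluster-idiomatic variant, sketched here as a hypothetical alternative to the generated &lt;code&gt;ubuntu-jammy-dind-service.yaml&lt;/code&gt;, is to expose the desktop through a NodePort service instead:&lt;/p&gt;

```yaml
# Hypothetical NodePort variant of the Kompose-generated service:
# reachable on port 30901 of any node, no hostPort needed on the pod.
apiVersion: v1
kind: Service
metadata:
  name: ubuntu-jammy-dind
spec:
  type: NodePort
  ports:
    - name: "6901"
      port: 6901
      targetPort: 6901
      nodePort: 30901   # must fall in the default 30000-32767 range
  selector:
    io.kompose.service: ubuntu-jammy-dind
```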



&lt;p&gt;Deploying these manifests into the k0s cluster…&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
root@ubuntu-16gb-hel1-1:~# kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ubuntu-jammy-dind-deployment.yaml 
deployment.apps/ubuntu-jammy-dind created
root@ubuntu-16gb-hel1-1:~# kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ubuntu-jammy-dind-service.yaml 
service/ubuntu-jammy-dind created

root@ubuntu-16gb-hel1-1:~# kubectl get po,svc &lt;span class="nt"&gt;-A&lt;/span&gt;
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/ubuntu-jammy-dind-5d84c9847f-vhftx 1/1 Running 0 3m37s
kube-system pod/coredns-555d98c87b-7dxxh 1/1 Running 0 24m
kube-system pod/coredns-555d98c87b-9ngzm 1/1 Running 0 24m
kube-system pod/konnectivity-agent-4fb6m 1/1 Running 0 24m
kube-system pod/konnectivity-agent-kj99g 1/1 Running 0 24m
kube-system pod/kube-proxy-6pbnw 1/1 Running 0 24m
kube-system pod/kube-proxy-xg4mh 1/1 Running 0 24m
kube-system pod/kube-router-tvzsb 1/1 Running 0 24m
kube-system pod/kube-router-wt97k 1/1 Running 0 24m
kube-system pod/metrics-server-7556957bb7-qtgd2 1/1 Running 0 24m

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
default service/kubernetes ClusterIP 10.96.0.1 &amp;lt;none&amp;gt; 443/TCP 24m
default service/ubuntu-jammy-dind ClusterIP 10.97.43.198 &amp;lt;none&amp;gt; 6901/TCP 3m30s
kube-system service/kube-dns ClusterIP 10.96.0.10 &amp;lt;none&amp;gt; 53/UDP,53/TCP,9153/TCP 24m
kube-system service/metrics-server ClusterIP 10.100.127.105 &amp;lt;none&amp;gt; 443/TCP 24m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I expose the service via kubectl to access the desktop through NoVNC with the default credentials:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
root@ubuntu-16gb-hel1-1:~# kubectl port-forward svc/ubuntu-jammy-dind 6901:6901 --address='0.0.0.0'
Forwarding from 0.0.0.0:6901 -&amp;gt; 6901
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
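&lt;p&gt;An optional check from another terminal, assuming the port-forward above is still running: the Kasm image serves the desktop over HTTPS with a self-signed certificate, hence the &lt;code&gt;-k&lt;/code&gt; flag.&lt;/p&gt;

```shell
# Print only the HTTP status code returned by the forwarded endpoint;
# -k skips certificate verification for the self-signed cert.
curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:6901/
```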



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FStGDic5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2Ak1KxTss0lZSEc6ERX5Oeng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FStGDic5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2Ak1KxTss0lZSEc6ERX5Oeng.png" alt="" width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j6LqFzZo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ADlKO4urSYceyHDvkXeuSxA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j6LqFzZo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ADlKO4urSYceyHDvkXeuSxA.png" alt="" width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--l7xzQOad--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AbQoC1TdQ9uDZRok8WgsJSw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l7xzQOad--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AbQoC1TdQ9uDZRok8WgsJSw.png" alt="" width="800" height="687"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As in the previous article, fetching Ollama for testing…&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://ollama.com/install.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
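&lt;p&gt;Once installed, a model can also be queried outside the interactive prompt through Ollama's local REST API (a sketch; this assumes the ollama service is running on its default port 11434):&lt;/p&gt;

```shell
# Pull the model, then send a single non-streaming generation request
# to the local Ollama API.
ollama pull llama2
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Pourquoi le ciel est-il bleu ?", "stream": false}'
```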



&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/ollama/ollama"&gt;GitHub - ollama/ollama: Get up and running with Llama 2, Mistral, Gemma, and other large language models.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ollama.com/library?sort=newest"&gt;library&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zD7u_wD2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AgPAyw9iANitlQs4T2vnVxA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zD7u_wD2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AgPAyw9iANitlQs4T2vnVxA.png" alt="" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;with Llama2:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5o7LF6qA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A4jQc0QRgLm47GJsB7m8XGA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5o7LF6qA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A4jQc0QRgLm47GJsB7m8XGA.png" alt="" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_DdjkMCu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2Ai3GTmWgsdf_h1xFChh9sDg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_DdjkMCu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2Ai3GTmWgsdf_h1xFChh9sDg.png" alt="" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Other usage examples can be found in the GitHub repository, notably with Ansible or Ignite…&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/k0sproject/bootloose/tree/main/examples"&gt;bootloose/examples at main · k0sproject/bootloose&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/k0sproject/k0sctl/tree/main/examples/bootloose"&gt;k0sctl/examples/bootloose at main · k0sproject/k0sctl&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mlrYbwnH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/888/0%2AcLK9IPxsEhzb31H-.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mlrYbwnH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/888/0%2AcLK9IPxsEhzb31H-.jpg" alt="" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To be continued!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>kubernetes</category>
      <category>llm</category>
    </item>
    <item>
      <title>Installing Canonical Kubernetes in LXD with Juju…</title>
      <dc:creator>Karim</dc:creator>
      <pubDate>Sun, 31 Mar 2024 14:48:42 +0000</pubDate>
      <link>https://forem.com/deep75/installer-canonical-kubernetes-dans-lxd-avec-juju--3ncd</link>
      <guid>https://forem.com/deep75/installer-canonical-kubernetes-dans-lxd-avec-juju--3ncd</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P8hmgWsS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A8Yn7ODjvNPdTK9xvWjHCBw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P8hmgWsS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A8Yn7ODjvNPdTK9xvWjHCBw.png" alt="" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Canonical Kubernetes, which was covered in the previous article,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/deep75/k8s-snap-canonical-kubernetes-pour-un-deploiement-simple-et-rapide-dun-cluster-k8s--4hi5"&gt;k8s-snap (Canonical Kubernetes) pour un déploiement simple et rapide d’un cluster k8s …&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;can also be installed in LXD. It is an excellent way, for example, to test Canonical Kubernetes as a cluster without needing multiple physical hosts.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://ubuntu.com/blog/lxd-5-easy-pieces"&gt;LXD: five easy pieces | Ubuntu&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A0b8tK72--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AduocFt98fgPLpc0U.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A0b8tK72--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AduocFt98fgPLpc0U.png" alt="" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So I start from a simple Ubuntu 22.04 LTS instance on DigitalOcean:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s1"&gt;'Authorization: Bearer '&lt;/span&gt;&lt;span class="nv"&gt;$TOKEN&lt;/span&gt;&lt;span class="s1"&gt;''&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"name":"lxd",
        "size":"s-4vcpu-16gb-amd",
        "region":"fra1",
        "image":"ubuntu-22-04-x64",
        "vpc_uuid":"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"}'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"https://api.digitalocean.com/v2/droplets"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gyovk0lm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AP5E2FNUOZ2BGQ2LcczaK2w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gyovk0lm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AP5E2FNUOZ2BGQ2LcczaK2w.png" alt="" width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;where I install and initialize LXD along with the Juju orchestrator:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@lxd:~# snap remove lxd &lt;span class="nt"&gt;--purge&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; snap &lt;span class="nb"&gt;install &lt;/span&gt;lxd &lt;span class="nt"&gt;--candidate&lt;/span&gt;
lxd removed
lxd &lt;span class="o"&gt;(&lt;/span&gt;candidate&lt;span class="o"&gt;)&lt;/span&gt; 5.21.0-eb655b9 from Canonical✓ installed
root@lxd:~# snap list
Name Version Rev Tracking Publisher Notes
core20 20230207 1828 latest/stable canonical✓ base
core22 20240111 1122 latest/stable canonical✓ base
lxd 5.21.0-eb655b9 27957 latest/candidate canonical✓ -
snapd 2.58.2 18357 latest/stable canonical✓ snapd

root@lxd:~# lxd init
Would you like to use LXD clustering? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;no]: 
Do you want to configure a new storage pool? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;: 
Name of the new storage pool &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;default]: 
Name of the storage backend to use &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;dir&lt;/span&gt;, lvm, powerflex, zfs, btrfs, ceph&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;zfs]: &lt;span class="nb"&gt;dir
&lt;/span&gt;Would you like to connect to a MAAS server? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;no]: 
Would you like to create a new &lt;span class="nb"&gt;local &lt;/span&gt;network bridge? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;: 
What should the new bridge be called? &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;lxdbr0]: 
What IPv4 address should be used? &lt;span class="o"&gt;(&lt;/span&gt;CIDR subnet notation, “auto” or “none”&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;auto]: 
What IPv6 address should be used? &lt;span class="o"&gt;(&lt;/span&gt;CIDR subnet notation, “auto” or “none”&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;auto]: none
Would you like the LXD server to be available over the network? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;no]: 
Would you like stale cached images to be updated automatically? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;: 
Would you like a YAML &lt;span class="s2"&gt;"lxd init"&lt;/span&gt; preseed to be printed? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;no]: 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
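&lt;p&gt;The interactive dialogue above can also be replayed non-interactively: lxd init accepts a preseed YAML document on stdin via the --preseed flag. Below is a minimal sketch matching the answers given here (dir storage backend, lxdbr0 bridge with automatic IPv4 and no IPv6); the key names follow the LXD preseed format and should be checked against the LXD documentation before use:&lt;/p&gt;

```yaml
# Hypothetical preseed mirroring the interactive answers above;
# apply with: cat preseed.yaml | lxd init --preseed
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: none
storage_pools:
- name: default
  driver: dir
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
```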



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DiEnbYbD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AuFEnpXiimTcBHrHk9ntjpw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DiEnbYbD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AuFEnpXiimTcBHrHk9ntjpw.png" alt="" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://juju.is/"&gt;The simplest way to deploy and maintain applications in the cloud&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@lxd:~# snap &lt;span class="nb"&gt;install &lt;/span&gt;juju
juju &lt;span class="o"&gt;(&lt;/span&gt;3/stable&lt;span class="o"&gt;)&lt;/span&gt; 3.4.0 from Canonical✓ installed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Editing the default LXD profile so it can support Canonical Kubernetes, using this template…&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://documentation.ubuntu.com/canonical-kubernetes/main/howto/install/lxd/"&gt;Install Canonical Kubernetes in LXD&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;name: default
description: Default LXD profile
config:
  boot.autostart: &lt;span class="s2"&gt;"true"&lt;/span&gt;
  linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,iptable_raw,netlink_diag,nf_nat,overlay,br_netfilter,xt_socket
  raw.lxc: |
    lxc.apparmor.profile&lt;span class="o"&gt;=&lt;/span&gt;unconfined
    lxc.mount.auto&lt;span class="o"&gt;=&lt;/span&gt;proc:rw sys:rw cgroup:rw
    lxc.cgroup.devices.allow&lt;span class="o"&gt;=&lt;/span&gt;a
    lxc.cap.drop&lt;span class="o"&gt;=&lt;/span&gt;
  security.nesting: &lt;span class="s2"&gt;"true"&lt;/span&gt;
  security.privileged: &lt;span class="s2"&gt;"true"&lt;/span&gt;
devices:
  aadisable2:
    path: /dev/kmsg
    &lt;span class="nb"&gt;source&lt;/span&gt;: /dev/kmsg
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-char
  dev-loop-control:
    major: &lt;span class="s2"&gt;"10"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"237"&lt;/span&gt;
    path: /dev/loop-control
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-char
  dev-loop0:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"0"&lt;/span&gt;
    path: /dev/loop0
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop1:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"1"&lt;/span&gt;
    path: /dev/loop1
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop2:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"2"&lt;/span&gt;
    path: /dev/loop2
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop3:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"3"&lt;/span&gt;
    path: /dev/loop3
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop4:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"4"&lt;/span&gt;
    path: /dev/loop4
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop5:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"5"&lt;/span&gt;
    path: /dev/loop5
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop6:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"6"&lt;/span&gt;
    path: /dev/loop6
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop7:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    path: /dev/loop7
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop8:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"8"&lt;/span&gt;
    path: /dev/loop8
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop9:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"9"&lt;/span&gt;
    path: /dev/loop9
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop10:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"10"&lt;/span&gt;
    path: /dev/loop10
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop11:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"11"&lt;/span&gt;
    path: /dev/loop11
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop12:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"12"&lt;/span&gt;
    path: /dev/loop12
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop13:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"13"&lt;/span&gt;
    path: /dev/loop13
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop14:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"14"&lt;/span&gt;
    path: /dev/loop14
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop15:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"15"&lt;/span&gt;
    path: /dev/loop15
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop16:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"16"&lt;/span&gt;
    path: /dev/loop16
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  eth0:
    name: eth0
    network: lxdbr0
    &lt;span class="nb"&gt;type&lt;/span&gt;: nic
  root:
    path: /
    pool: default
    &lt;span class="nb"&gt;type&lt;/span&gt;: disk
used_by: &lt;span class="o"&gt;[]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
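&lt;p&gt;The seventeen dev-loopN device stanzas in this profile are repetitive; rather than writing them by hand, a small loop can generate them. A sketch, assuming the same fixed major number 7 used in the profile above:&lt;/p&gt;

```shell
# Print the dev-loop0..dev-loop16 device stanzas in LXD profile YAML form,
# matching the entries listed in the profile above.
gen_loop_devices() {
  for i in $(seq 0 16); do
    printf '  dev-loop%s:\n    major: "7"\n    minor: "%s"\n    path: /dev/loop%s\n    type: unix-block\n' "$i" "$i" "$i"
  done
}
gen_loop_devices
```

&lt;p&gt;The output can be pasted into the devices: section of the profile before feeding the file to lxc profile edit.&lt;/p&gt;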



&lt;p&gt;Applying this default profile in LXD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@lxd:~# lxc profile edit default &amp;lt; lxd-profile-default.yaml

root@lxd:~# lxc profile show default
name: default
description: Default LXD profile
config:
  boot.autostart: &lt;span class="s2"&gt;"true"&lt;/span&gt;
  linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,iptable_raw,netlink_diag,nf_nat,overlay,br_netfilter,xt_socket
  raw.lxc: |
    lxc.apparmor.profile&lt;span class="o"&gt;=&lt;/span&gt;unconfined
    lxc.mount.auto&lt;span class="o"&gt;=&lt;/span&gt;proc:rw sys:rw cgroup:rw
    lxc.cgroup.devices.allow&lt;span class="o"&gt;=&lt;/span&gt;a
    lxc.cap.drop&lt;span class="o"&gt;=&lt;/span&gt;
  security.nesting: &lt;span class="s2"&gt;"true"&lt;/span&gt;
  security.privileged: &lt;span class="s2"&gt;"true"&lt;/span&gt;
devices:
  aadisable2:
    path: /dev/kmsg
    &lt;span class="nb"&gt;source&lt;/span&gt;: /dev/kmsg
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-char
  dev-loop-control:
    major: &lt;span class="s2"&gt;"10"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"237"&lt;/span&gt;
    path: /dev/loop-control
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-char
  dev-loop0:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"0"&lt;/span&gt;
    path: /dev/loop0
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop1:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"1"&lt;/span&gt;
    path: /dev/loop1
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop2:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"2"&lt;/span&gt;
    path: /dev/loop2
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop3:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"3"&lt;/span&gt;
    path: /dev/loop3
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop4:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"4"&lt;/span&gt;
    path: /dev/loop4
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop5:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"5"&lt;/span&gt;
    path: /dev/loop5
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop6:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"6"&lt;/span&gt;
    path: /dev/loop6
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop7:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    path: /dev/loop7
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop8:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"8"&lt;/span&gt;
    path: /dev/loop8
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop9:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"9"&lt;/span&gt;
    path: /dev/loop9
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop10:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"10"&lt;/span&gt;
    path: /dev/loop10
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop11:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"11"&lt;/span&gt;
    path: /dev/loop11
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop12:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"12"&lt;/span&gt;
    path: /dev/loop12
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop13:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"13"&lt;/span&gt;
    path: /dev/loop13
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop14:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"14"&lt;/span&gt;
    path: /dev/loop14
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop15:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"15"&lt;/span&gt;
    path: /dev/loop15
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  dev-loop16:
    major: &lt;span class="s2"&gt;"7"&lt;/span&gt;
    minor: &lt;span class="s2"&gt;"16"&lt;/span&gt;
    path: /dev/loop16
    &lt;span class="nb"&gt;type&lt;/span&gt;: unix-block
  eth0:
    name: eth0
    network: lxdbr0
    &lt;span class="nb"&gt;type&lt;/span&gt;: nic
  root:
    path: /
    pool: default
    &lt;span class="nb"&gt;type&lt;/span&gt;: disk
used_by: &lt;span class="o"&gt;[]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Followed by launching a local Juju controller in LXD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@lxd:~# &lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; ~/.local/share/juju

root@lxd:~# juju bootstrap localhost lxd
Creating Juju controller &lt;span class="s2"&gt;"lxd"&lt;/span&gt; on localhost/localhost
Looking &lt;span class="k"&gt;for &lt;/span&gt;packaged Juju agent version 3.4.0 &lt;span class="k"&gt;for &lt;/span&gt;amd64
Located Juju agent version 3.4.0-ubuntu-amd64 at https://streams.canonical.com/juju/tools/agent/3.4.0/juju-3.4.0-linux-amd64.tgz
To configure your system to better support LXD containers, please see: https://documentation.ubuntu.com/lxd/en/latest/explanation/performance_tuning/
Launching controller instance&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt; on localhost/localhost...
 - juju-b6d031-0 &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;arch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;amd64&lt;span class="o"&gt;)&lt;/span&gt;                 
Installing Juju agent on bootstrap instance
Waiting &lt;span class="k"&gt;for &lt;/span&gt;address
Attempting to connect to 10.11.106.167:22
Connected to 10.11.106.167
Running machine configuration script...
Bootstrap agent now started
Contacting Juju controller at 10.11.106.167 to verify accessibility...

Bootstrap &lt;span class="nb"&gt;complete&lt;/span&gt;, controller &lt;span class="s2"&gt;"lxd"&lt;/span&gt; is now available
Controller machines are &lt;span class="k"&gt;in &lt;/span&gt;the &lt;span class="s2"&gt;"controller"&lt;/span&gt; model

Now you can run
        juju add-model &amp;lt;model-name&amp;gt;
to create a new model to deploy workloads.

root@lxd:~# juju controllers
Use &lt;span class="nt"&gt;--refresh&lt;/span&gt; option with this &lt;span class="nb"&gt;command &lt;/span&gt;to see the latest information.

Controller Model User Access Cloud/Region Models Nodes HA Version
lxd&lt;span class="k"&gt;*&lt;/span&gt; - admin superuser localhost/localhost 1 1 none 3.4.0  

root@lxd:~# lxc &lt;span class="nb"&gt;ls&lt;/span&gt;
+---------------+---------+----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+---------------+---------+----------------------+------+-----------+-----------+
| juju-b6d031-0 | RUNNING | 10.11.106.167 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt; | | CONTAINER | 0 |
+---------------+---------+----------------------+------+-----------+-----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Viewing the charms for Canonical Kubernetes before installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@lxd:~# juju info k8s
name: k8s
publisher: Canonical Kubernetes
summary: A machine charm &lt;span class="k"&gt;for &lt;/span&gt;K8s
description: &lt;span class="s2"&gt;"A machine charm which operates a complete Kubernetes cluster.&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;This
  charm installs and operates a Kubernetes cluster via the k8s snap. It exposes&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;relations
  to co-operate with other kubernetes components such as optional CNIs, &lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;optional
  cloud-providers, optional schedulers, external backing stores, and external&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;certificate
  storage.&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;This charm provides the following running components:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;* kube-apiserver&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;*
  kube-scheduler&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;* kube-controller-manager&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;* kube-proxy&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;* kubelet&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;* containerd&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;This
  charm can optionally disable the following components:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;* A Kubernetes Backing Store&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;*
  A Kubernetes CNI&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
store-url: https://charmhub.io/k8s
charm-id: 5GG4ydqob6yLrcZ3rrscqL0gVJvK53Ij
supports: ubuntu@20.04, ubuntu@22.04
subordinate: &lt;span class="nb"&gt;false
&lt;/span&gt;relations:
  provides:
    cos-agent: cos_agent
    cos-worker-tokens: cos-tokens
    k8s-cluster: k8s-cluster
  requires: &lt;span class="o"&gt;{}&lt;/span&gt;
channels: |
  latest/stable: –
  latest/candidate: –
  latest/beta: –
  latest/edge: 26 2024-03-29 &lt;span class="o"&gt;(&lt;/span&gt;26&lt;span class="o"&gt;)&lt;/span&gt; 3MB amd64 ubuntu@20.04, ubuntu@22.04

root@lxd:~# juju info k8s-worker
name: k8s-worker
publisher: Canonical Kubernetes
summary: A machine charm &lt;span class="k"&gt;for &lt;/span&gt;a K8s Worker
description: |
  A machine charm which operates a Kubernetes worker.

  This charm installs and operates a Kubernetes worker via the k8s snap. It exposes
  relations to co-operate with other kubernetes components

  This charm provides the following running components:
  &lt;span class="k"&gt;*&lt;/span&gt; kube-proxy
  &lt;span class="k"&gt;*&lt;/span&gt; kubelet
  &lt;span class="k"&gt;*&lt;/span&gt; containerd
store-url: https://charmhub.io/k8s-worker
charm-id: ZL6PvT8yleZOMih0oKRYPgQI4FntLB48
supports: ubuntu@20.04, ubuntu@22.04
subordinate: &lt;span class="nb"&gt;false
&lt;/span&gt;relations:
  provides:
    cos-agent: cos_agent
  requires:
    cluster: k8s-cluster
    cos-tokens: cos-tokens
channels: |
  latest/stable: –
  latest/candidate: –
  latest/beta: –
  latest/edge: 27 2024-03-29 &lt;span class="o"&gt;(&lt;/span&gt;27&lt;span class="o"&gt;)&lt;/span&gt; 6MB amd64 ubuntu@20.04, ubuntu@22.04
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These can be installed with Juju, starting here with the control-plane node of the future Kubernetes cluster…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://documentation.ubuntu.com/canonical-kubernetes/main/howto/install/charm/"&gt;Install Canonical Kubernetes from a charm&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@lxd:~# juju add-model k8s
Added &lt;span class="s1"&gt;'k8s'&lt;/span&gt; model on localhost/localhost with credential &lt;span class="s1"&gt;'localhost'&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;user &lt;span class="s1"&gt;'admin'&lt;/span&gt;
root@lxd:~# juju deploy k8s &lt;span class="nt"&gt;--channel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;latest/edge
Deployed &lt;span class="s2"&gt;"k8s"&lt;/span&gt; from charm-hub charm &lt;span class="s2"&gt;"k8s"&lt;/span&gt;, revision 26 &lt;span class="k"&gt;in &lt;/span&gt;channel latest/edge on ubuntu@22.04/stable

root@lxd:~# juju status
Model Controller Cloud/Region Version SLA Timestamp
k8s lxd localhost/localhost 3.4.0 unsupported 12:37:00Z

App Version Status Scale Charm Channel Rev Exposed Message
k8s waiting 1 k8s latest/edge 26 no Cluster not yet ready

Unit Workload Agent Machine Public address Ports Message
k8s/0&lt;span class="k"&gt;*&lt;/span&gt; waiting idle 0 10.11.106.116 Cluster not yet ready

Machine State Address Inst &lt;span class="nb"&gt;id &lt;/span&gt;Base AZ Message
0 started 10.11.106.116 juju-85fd7d-0 ubuntu@22.04 Running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploying two worker nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@lxd:~# juju deploy k8s-worker &lt;span class="nt"&gt;--channel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;latest/edge &lt;span class="nt"&gt;-n&lt;/span&gt; 2
Deployed &lt;span class="s2"&gt;"k8s-worker"&lt;/span&gt; from charm-hub charm &lt;span class="s2"&gt;"k8s-worker"&lt;/span&gt;, revision 27 &lt;span class="k"&gt;in &lt;/span&gt;channel latest/edge on ubuntu@22.04/stable
root@lxd:~# juju integrate k8s k8s-worker:cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
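&lt;p&gt;Right after juju integrate, the units cycle through waiting states before reaching active, so a polling loop is handy. A hedged sketch: fake_status is a stub standing in for juju status --format=short, and its sample lines are illustrative rather than captured output; on the host you would substitute the real command:&lt;/p&gt;

```shell
# Poll until every unit reports workload:active. fake_status is a stub
# standing in for: juju status --format=short  (sample lines are hypothetical)
fake_status() {
  printf '%s\n' \
    '- k8s/0: 10.11.106.116 (agent:idle, workload:active) 6443/tcp' \
    '- k8s-worker/0: 10.11.106.20 (agent:idle, workload:active)' \
    '- k8s-worker/1: 10.11.106.178 (agent:idle, workload:active)'
}
all_active() {
  # succeeds only when no line lacks "workload:active"
  ! fake_status | grep -v 'workload:active' | grep -q .
}
until all_active; do sleep 10; done
echo "cluster ready"
```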



&lt;p&gt;The Kubernetes cluster is then up and running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@lxd:~# juju status
Model Controller Cloud/Region Version SLA Timestamp
k8s lxd localhost/localhost 3.4.0 unsupported 12:39:53Z

App Version Status Scale Charm Channel Rev Exposed Message
k8s 1.29.3 active 1 k8s latest/edge 26 no Ready
k8s-worker 1.29.3 active 2 k8s-worker latest/edge 27 no Ready

Unit Workload Agent Machine Public address Ports Message
k8s-worker/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 1 10.11.106.20 Ready
k8s-worker/1 active idle 2 10.11.106.178 Ready
k8s/0&lt;span class="k"&gt;*&lt;/span&gt; active idle 0 10.11.106.116 6443/tcp Ready

Machine State Address Inst &lt;span class="nb"&gt;id &lt;/span&gt;Base AZ Message
0 started 10.11.106.116 juju-85fd7d-0 ubuntu@22.04 Running
1 started 10.11.106.20 juju-85fd7d-1 ubuntu@22.04 Running
2 started 10.11.106.178 juju-85fd7d-2 ubuntu@22.04 Running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--r0PqPswU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/535/1%2ARYXL6HDuXE32TYP7iijO2g.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--r0PqPswU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/535/1%2ARYXL6HDuXE32TYP7iijO2g.jpeg" alt="" width="535" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connecting to the cluster's control-plane node to check its status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@lxd:~# juju ssh k8s/0

The programs included with the Ubuntu system are free software&lt;span class="p"&gt;;&lt;/span&gt;
the exact distribution terms &lt;span class="k"&gt;for &lt;/span&gt;each program are described &lt;span class="k"&gt;in &lt;/span&gt;the
individual files &lt;span class="k"&gt;in&lt;/span&gt; /usr/share/doc/&lt;span class="k"&gt;*&lt;/span&gt;/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a &lt;span class="nb"&gt;command &lt;/span&gt;as administrator &lt;span class="o"&gt;(&lt;/span&gt;user &lt;span class="s2"&gt;"root"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;, use &lt;span class="s2"&gt;"sudo &amp;lt;command&amp;gt;"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
See &lt;span class="s2"&gt;"man sudo_root"&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;details.

ubuntu@juju-85fd7d-0:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;k8s status
status: ready
high-availability: no
datastore:
  voter-nodes:
    - 10.11.106.116:6400
  standby-nodes: none
  spare-nodes: none

network:
  enabled: &lt;span class="nb"&gt;true
&lt;/span&gt;dns:
  enabled: &lt;span class="nb"&gt;true
  &lt;/span&gt;cluster-domain: cluster.local
  service-ip: 10.152.183.28
  upstream-nameservers:
  - /etc/resolv.conf
metrics-server:
  enabled: &lt;span class="nb"&gt;true

&lt;/span&gt;ubuntu@juju-85fd7d-0:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;k8s kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/coredns:udp-53/proxy

To further debug and diagnose cluster problems, use &lt;span class="s1"&gt;'kubectl cluster-info dump'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
ubuntu@juju-85fd7d-0:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;k8s kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
juju-85fd7d-0 Ready control-plane,worker 7m10s v1.29.3 10.11.106.116 &amp;lt;none&amp;gt; Ubuntu 22.04.4 LTS 5.15.0-67-generic containerd://1.6.28
juju-85fd7d-1 Ready worker 3m23s v1.29.3 10.11.106.20 &amp;lt;none&amp;gt; Ubuntu 22.04.4 LTS 5.15.0-67-generic containerd://1.6.28
juju-85fd7d-2 Ready worker 3m10s v1.29.3 10.11.106.178 &amp;lt;none&amp;gt; Ubuntu 22.04.4 LTS 5.15.0-67-generic containerd://1.6.28
ubuntu@juju-85fd7d-0:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;k8s kubectl get po,svc &lt;span class="nt"&gt;-A&lt;/span&gt;
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/cilium-b5hkt 1/1 Running 0 7m13s
kube-system pod/cilium-gghfz 1/1 Running 0 3m17s
kube-system pod/cilium-kzwpz 1/1 Running 0 3m30s
kube-system pod/cilium-operator-597b55b886-jhqkq 1/1 Running 0 7m13s
kube-system pod/coredns-66579b5b88-x9dv2 1/1 Running 0 7m13s
kube-system pod/metrics-server-57db9dfb7b-lnllv 1/1 Running 0 7m13s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
default service/kubernetes ClusterIP 10.152.183.1 &amp;lt;none&amp;gt; 443/TCP 7m20s
kube-system service/coredns ClusterIP 10.152.183.28 &amp;lt;none&amp;gt; 53/UDP,53/TCP 7m21s
kube-system service/hubble-peer ClusterIP 10.152.183.184 &amp;lt;none&amp;gt; 443/TCP 7m21s
kube-system service/metrics-server ClusterIP 10.152.183.60 &amp;lt;none&amp;gt; 443/TCP 7m20s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I can retrieve the kubeconfig file with Juju:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@lxd:~# juju ssh k8s/0 &lt;span class="s1"&gt;'sudo k8s config'&lt;/span&gt;
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: &lt;span class="nv"&gt;LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURHekNDQWdPZ0F3SUJBZ0lRQ2RCNysvUWI1Ums4MzdmRGJhWlp2ekFOQmdrcWhraUc5dzBCQVFzRkFEQVkKTVJZd0ZBWURWUVFERXcxcmRXSmxjbTVsZEdWekxXTmhNQjRYRFRJME1ETXpNVEV5TXpVeE1Gb1hEVFEwTURNegpNVEV5TXpVeE1Gb3dHREVXTUJRR0ExVUVBeE1OYTNWaVpYSnVaWFJsY3kxallUQ0NBU0l3RFFZSktvWklodmNOCkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFNK1VRNllEVS9vaEVrWjhGcHlnQ0Y3N1JHaWdEMUdEZUR5ekdCeTIKZjhleVhZMGRUWlFJcmw0amtIL29SS2tiRnB4QnZVSVlzWFV5RlpkOUxKM3VFMWhPQTNDNXYzM0VOUVhrdU5pNQpELy9kRHNyZ0pHWitzTnJKSTIzSWZOV1VXanNLc1hzejFzZXN5cHRYb2dYbytkTkxxRitVTjFnUjVBOTZ6VVB0ClZOZFJLKzUyaTFxcWcxeUFhczcybVhwVkNQZUxxSi9adXhDSFhTcE8rcWt0Q2pSeUVTKzJmVFVTaUhtRlQ5clMKVG9vZndTZ0RkWHJlMmsvN3NFSjg5aGk2NFpQeVQyZDFPYzcxbzEyQUlyZ0VBYlorZnJ0NnFCSVhENkkxSEJzKwphZlh1UmVmTyswUEFVYzgxN25QaGhTWW1OcGJrdk5vdFF4WlRZRGRONEhEbEJUVUNBd0VBQWFOaE1GOHdEZ1lEClZSMFBBUUgvQkFRREFnS0VNQjBHQTFVZEpRUVdNQlFHQ0NzR0FRVUZCd01DQmdnckJnRUZCUWNEQVRBUEJnTlYKSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnFobUlNNzlRVnFYMks1YXVuVUdBMENYOTBkVEFOQmdrcQpoa2lHOXcwQkFRc0ZBQU9DQVFFQWNnaUJad0R3MTlVVlFaQVpHRCs5THFSOFBycXJyMlRqVFh0Q1RkbzFGWWZICnlla2IzdmV1YjJUblNOT2RUeEhjWmtlR2d3QXFYV216Z1MrSHl5TFZSTlY2WTNJK0ZKQmJpVkJJOTV0TlRmaG4KMEZvSEVzWUNSY3Q5L1JVQkVOTTFsZDY1bWFVNFc2TWFvdE5MOVpSczNGTDJ3SmRKTjdCVEJIamxYZzhrckkrOQpOdlcvTWZCS0FvcWRleXNBMHJHY3RlczVWZm1YUDRMa1hxNXhQNmV1MWhmSU1ObXM1UTk4QTdsck9TZG5BNzJYClFNKzBBc1ovckhZQ0hnRUFqNjdIK0drTFhQeEFzelZ4UTZpRHRJMmhyWVVKYmNFSTg1REF0RFNhc2xGS2JHLzMKNU53b1U0VTZjVXMxcHZiTjFsVFZvTGVFTHFoYU9EMkdiUnRYb0h4UnFBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
    server: https://10.11.106.116:6443
  name: k8s
contexts:
- context:
    cluster: k8s
    user: k8s-user
  name: k8s
current-context: k8s
kind: Config
preferences: &lt;span class="o"&gt;{}&lt;/span&gt;
&lt;span class="nb"&gt;users&lt;/span&gt;:
- name: k8s-user
  user:
    token: token::00669e4efab3674984256068943beee135d06d1a

Connection to 10.11.106.116 closed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
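&lt;p&gt;The server: field of this kubeconfig gives the cluster API endpoint, which can be extracted for scripting. A small sketch with awk; kubeconfig_line below is a sample copied from the output above rather than a re-run of the juju command:&lt;/p&gt;

```shell
# Extract the API server URL from a kubeconfig "server:" line.
# kubeconfig_line is a sample mirroring the output above; on the host you
# would read the real file instead, e.g. awk '/server:/ {print $2}' .kube/config
kubeconfig_line='    server: https://10.11.106.116:6443'
server=$(printf '%s\n' "$kubeconfig_line" | awk '/server:/ {print $2}')
echo "$server"
```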



&lt;p&gt;To use the kubectl client locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@lxd:~# juju expose k8s
root@lxd:~# snap &lt;span class="nb"&gt;install &lt;/span&gt;kubectl &lt;span class="nt"&gt;--classic&lt;/span&gt;
snap &lt;span class="s2"&gt;"kubectl"&lt;/span&gt; is already installed, see &lt;span class="s1"&gt;'snap help refresh'&lt;/span&gt;
root@lxd:~# &lt;span class="nb"&gt;mkdir&lt;/span&gt; .kube
root@lxd:~# juju ssh k8s/0 &lt;span class="s1"&gt;'sudo k8s config'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; .kube/config
Connection to 10.11.106.116 closed.
root@lxd:~# &lt;span class="nb"&gt;chmod &lt;/span&gt;400 .kube/config
root@lxd:~# kubectl cluster-info
Kubernetes control plane is running at https://10.11.106.116:6443
CoreDNS is running at https://10.11.106.116:6443/api/v1/namespaces/kube-system/services/coredns:udp-53/proxy

To further debug and diagnose cluster problems, use &lt;span class="s1"&gt;'kubectl cluster-info dump'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
root@lxd:~# kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
juju-85fd7d-0 Ready control-plane,worker 11m v1.29.3 10.11.106.116 &amp;lt;none&amp;gt; Ubuntu 22.04.4 LTS 5.15.0-67-generic containerd://1.6.28
juju-85fd7d-1 Ready worker 7m47s v1.29.3 10.11.106.20 &amp;lt;none&amp;gt; Ubuntu 22.04.4 LTS 5.15.0-67-generic containerd://1.6.28
juju-85fd7d-2 Ready worker 7m34s v1.29.3 10.11.106.178 &amp;lt;none&amp;gt; Ubuntu 22.04.4 LTS 5.15.0-67-generic containerd://1.6.28

root@lxd:~# lxc &lt;span class="nb"&gt;ls&lt;/span&gt;
+---------------+---------+--------------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+---------------+---------+--------------------------+------+-----------+-----------+
| juju-85fd7d-0 | RUNNING | 10.11.106.116 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt; | | CONTAINER | 0 |
| | | 10.1.0.231 &lt;span class="o"&gt;(&lt;/span&gt;cilium_host&lt;span class="o"&gt;)&lt;/span&gt; | | | |
+---------------+---------+--------------------------+------+-----------+-----------+
| juju-85fd7d-1 | RUNNING | 10.11.106.20 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt; | | CONTAINER | 0 |
| | | 10.1.1.16 &lt;span class="o"&gt;(&lt;/span&gt;cilium_host&lt;span class="o"&gt;)&lt;/span&gt; | | | |
+---------------+---------+--------------------------+------+-----------+-----------+
| juju-85fd7d-2 | RUNNING | 10.11.106.178 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt; | | CONTAINER | 0 |
| | | 10.1.2.20 &lt;span class="o"&gt;(&lt;/span&gt;cilium_host&lt;span class="o"&gt;)&lt;/span&gt; | | | |
+---------------+---------+--------------------------+------+-----------+-----------+
| juju-b6d031-0 | RUNNING | 10.11.106.167 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt; | | CONTAINER | 0 |
+---------------+---------+--------------------------+------+-----------+-----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ny-Ug0vl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/500/1%2AILSNou-WNpAWHzy3_8g9Wg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ny-Ug0vl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/500/1%2AILSNou-WNpAWHzy3_8g9Wg.jpeg" alt="" width="500" height="749"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As in the previous article, I can take advantage of Ollama and open LLMs via Helm:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ollama.com/"&gt;Ollama&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/otwld/ollama-helm"&gt;GitHub - otwld/ollama-helm: Helm chart for Ollama on Kubernetes&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@lxd:~# curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Downloading https://get.helm.sh/helm-v3.14.3-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to &lt;span class="nb"&gt;install &lt;/span&gt;helm into /usr/local/bin
helm installed into /usr/local/bin/helm

root@lxd:~# helm &lt;span class="nb"&gt;ls
&lt;/span&gt;NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION

root@lxd:~# kubectl create namespace ollama
namespace/ollama created

root@lxd:~# helm &lt;span class="nb"&gt;install &lt;/span&gt;ollama ollama-helm/ollama &lt;span class="nt"&gt;--namespace&lt;/span&gt; ollama
NAME: ollama
LAST DEPLOYED:
NAMESPACE: ollama
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;POD_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;--namespace&lt;/span&gt; ollama &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s2"&gt;"app.kubernetes.io/name=ollama,app.kubernetes.io/instance=ollama"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.items[0].metadata.name}"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CONTAINER_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get pod &lt;span class="nt"&gt;--namespace&lt;/span&gt; ollama &lt;span class="nv"&gt;$POD_NAME&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.spec.containers[0].ports[0].containerPort}"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Visit http://127.0.0.1:8080 to use your application"&lt;/span&gt;
  kubectl &lt;span class="nt"&gt;--namespace&lt;/span&gt; ollama port-forward &lt;span class="nv"&gt;$POD_NAME&lt;/span&gt; 8080:&lt;span class="nv"&gt;$CONTAINER_PORT&lt;/span&gt;

root@lxd:~# helm &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt; ollama
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
ollama ollama 1 2024-03-31 13:11:43.199405708 +0000 UTC deployed ollama-0.21.1 0.1.30    

root@lxd:~# kubectl get po,svc &lt;span class="nt"&gt;-n&lt;/span&gt; ollama
NAME READY STATUS RESTARTS AGE
pod/ollama-75c7b79658-rnf7h 1/1 Running 0 94s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
service/ollama ClusterIP 10.152.183.78 &amp;lt;none&amp;gt; 11434/TCP 94s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And I can expose Ollama’s RESTful API locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@lxd:~# kubectl port-forward service/ollama &lt;span class="nt"&gt;-n&lt;/span&gt; ollama 11434:11434 &lt;span class="nt"&gt;--address&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'0.0.0.0'&lt;/span&gt;
Forwarding from 0.0.0.0:11434 -&amp;gt; 11434

root@lxd:~# netstat &lt;span class="nt"&gt;-tunlp&lt;/span&gt;
Active Internet connections &lt;span class="o"&gt;(&lt;/span&gt;only servers&lt;span class="o"&gt;)&lt;/span&gt;
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name    
tcp 0 0 0.0.0.0:22 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 1399/sshd: /usr/sbi 
tcp 0 0 10.11.106.1:53 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 9929/dnsmasq        
tcp 0 0 0.0.0.0:11434 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 52634/kubectl       
tcp 0 0 127.0.0.53:53 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 623/systemd-resolve 
tcp6 0 0 :::8443 :::&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 9784/lxd            
tcp6 0 0 :::22 :::&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 1399/sshd: /usr/sbi 
udp 0 0 10.11.106.1:53 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; 9929/dnsmasq        
udp 0 0 127.0.0.53:53 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; 623/systemd-resolve 
udp 0 0 0.0.0.0:67 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; 9929/dnsmasq    
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
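&lt;p&gt;With the port-forward in place, the Ollama REST API can be queried directly over HTTP; a small sketch with curl, using Ollama's documented &lt;code&gt;/api/tags&lt;/code&gt; and &lt;code&gt;/api/generate&lt;/code&gt; endpoints (it assumes a model such as codellama has already been pulled):&lt;/p&gt;

```shell
# List the models available on the Ollama server
curl -s http://127.0.0.1:11434/api/tags

# One-shot, non-streaming completion request
curl -s http://127.0.0.1:11434/api/generate -d '{
  "model": "codellama",
  "prompt": "Explain what a Kubernetes CrashLoopBackOff means in one sentence.",
  "stream": false
}'
```

&lt;p&gt;The non-streaming response comes back as a single JSON object whose &lt;code&gt;response&lt;/code&gt; field holds the generated text.&lt;/p&gt;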



&lt;p&gt;Fetching the Ollama client from GitHub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yBcCavzb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2A9roIeJ1BGnfVeXfC" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yBcCavzb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2A9roIeJ1BGnfVeXfC" alt="" width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ollama/ollama/releases"&gt;Releases · ollama/ollama&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@lxd:~# wget &lt;span class="nt"&gt;-c&lt;/span&gt; https://github.com/ollama/ollama/releases/download/v0.1.30/ollama-linux-amd64 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chmod&lt;/span&gt; +x ollama-linux-amd64 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;mv &lt;/span&gt;ollama-linux-amd64 /usr/local/bin/ollama
Resolving objects.githubusercontent.com &lt;span class="o"&gt;(&lt;/span&gt;objects.githubusercontent.com&lt;span class="o"&gt;)&lt;/span&gt;... 185.199.111.133, 185.199.108.133, 185.199.109.133, ...
Connecting to objects.githubusercontent.com &lt;span class="o"&gt;(&lt;/span&gt;objects.githubusercontent.com&lt;span class="o"&gt;)&lt;/span&gt;|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 297108760 &lt;span class="o"&gt;(&lt;/span&gt;283M&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;application/octet-stream]
Saving to: ‘ollama-linux-amd64’

ollama-linux-amd64 100%[&lt;span class="o"&gt;=====================================================================================================&amp;gt;]&lt;/span&gt; 283.34M 351MB/s &lt;span class="k"&gt;in &lt;/span&gt;0.8s    

&lt;span class="o"&gt;(&lt;/span&gt;351 MB/s&lt;span class="o"&gt;)&lt;/span&gt; - ‘ollama-linux-amd64’ saved &lt;span class="o"&gt;[&lt;/span&gt;297108760/297108760]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run, for example, Code Llama, a family of code-oriented LLMs based on Llama 2:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/docs/transformers/model_doc/code_llama"&gt;CodeLlama&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/meta-llama/codellama"&gt;GitHub - meta-llama/codellama: Inference code for CodeLlama models&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@lxd:~# ollama run codellama
pulling manifest 
pulling 3a43f93b78ec... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 3.8 GB                         
pulling 8c17c2ebb0ea... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 7.0 KB                         
pulling 590d74a5569b... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 4.8 KB                         
pulling 2e0493f67d0c... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 59 B                         
pulling 7f6a57943a88... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 120 B                         
pulling 316526ac7323... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 529 B                         
verifying sha256 digest 
writing manifest 
removing any unused layers 
success 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
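&lt;p&gt;Besides the interactive REPL shown below, &lt;code&gt;ollama run&lt;/code&gt; also accepts a one-shot prompt as an argument, which is handy for scripting (this assumes the server and the codellama model above are in place):&lt;/p&gt;

```shell
# Non-interactive invocation; the model's answer is printed to stdout
ollama run codellama "Write a kubectl command that lists all pods that are not in the Running state."
```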



&lt;p&gt;With these small examples, still to be polished…&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; How deploy manifest with Juju inside Kubernetes cluster ?

To deploy a manifest file with Juju inside a Kubernetes cluster, you can follow these steps:

1. Install Juju on your &lt;span class="nb"&gt;local &lt;/span&gt;machine and connect it to the Kubernetes cluster that you want to deploy the manifest file to.
2. Create a Juju model &lt;span class="k"&gt;for &lt;/span&gt;the Kubernetes cluster by running the following &lt;span class="nb"&gt;command&lt;/span&gt;:

juju add-model &amp;lt;cluster-name&amp;gt;

Replace &lt;span class="sb"&gt;`&lt;/span&gt;&amp;lt;cluster-name&amp;gt;&lt;span class="sb"&gt;`&lt;/span&gt; with the name of your Kubernetes cluster.
3. Set the deployment environment to the Kubernetes cluster that you want to deploy the manifest file to by running the following &lt;span class="nb"&gt;command&lt;/span&gt;:

juju set-environment &amp;lt;cluster-name&amp;gt;

4. Create a new Juju application using the &lt;span class="sb"&gt;`&lt;/span&gt;juju add&lt;span class="sb"&gt;`&lt;/span&gt; &lt;span class="nb"&gt;command&lt;/span&gt;, passing &lt;span class="k"&gt;in &lt;/span&gt;the name of the application and the path to the manifest file as arguments. For example:

juju add my-application manifests/my-application.yaml

Replace &lt;span class="sb"&gt;`&lt;/span&gt;&amp;lt;my-application&amp;gt;&lt;span class="sb"&gt;`&lt;/span&gt; with the name of your application and &lt;span class="sb"&gt;`&lt;/span&gt;&amp;lt;manifests/my-application.yaml&amp;gt;&lt;span class="sb"&gt;`&lt;/span&gt; with the path to your manifest file.
5. Deploy the application by running the following &lt;span class="nb"&gt;command&lt;/span&gt;:

juju deploy my-application

This will deploy the application using the configuration specified &lt;span class="k"&gt;in &lt;/span&gt;the manifest file.
6. Once the deployment is &lt;span class="nb"&gt;complete&lt;/span&gt;, you can manage the application using standard Juju commands, such as &lt;span class="sb"&gt;`&lt;/span&gt;juju status&lt;span class="sb"&gt;`&lt;/span&gt; to view the status of the application and &lt;span class="sb"&gt;`&lt;/span&gt;juju logs&lt;span class="sb"&gt;`&lt;/span&gt; to view the
logs &lt;span class="k"&gt;for &lt;/span&gt;the application.
7. To remove the application from the Kubernetes cluster, you can use the &lt;span class="sb"&gt;`&lt;/span&gt;juju destroy&lt;span class="sb"&gt;`&lt;/span&gt; &lt;span class="nb"&gt;command&lt;/span&gt;:

juju destroy my-application

This will destroy the application and all its resources &lt;span class="k"&gt;in &lt;/span&gt;the Kubernetes cluster.

Note that this is a basic example of how to deploy a manifest file with Juju inside a Kubernetes cluster. There are many other options and features available &lt;span class="k"&gt;in &lt;/span&gt;Juju, so you can 
explore those as well.

&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; How deploy a Pod with Rust inside Kubernetes ?

To deploy a Pod with Rust inside a Kubernetes cluster, you can follow these steps:

1. Install the &lt;span class="sb"&gt;`&lt;/span&gt;cargo&lt;span class="sb"&gt;`&lt;/span&gt; package on your &lt;span class="nb"&gt;local &lt;/span&gt;machine by running the following &lt;span class="nb"&gt;command&lt;/span&gt;:

&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;cargo

2. Create a new Rust project using the &lt;span class="sb"&gt;`&lt;/span&gt;cargo new&lt;span class="sb"&gt;`&lt;/span&gt; &lt;span class="nb"&gt;command&lt;/span&gt;, passing &lt;span class="k"&gt;in &lt;/span&gt;the name of your project as an argument. For example:

cargo new my-rust-project

3. Change into the directory of your new project by running the following &lt;span class="nb"&gt;command&lt;/span&gt;:

&lt;span class="nb"&gt;cd &lt;/span&gt;my-rust-project

4. Add the &lt;span class="sb"&gt;`&lt;/span&gt;kubernetes&lt;span class="sb"&gt;`&lt;/span&gt; crate to your projects dependencies by adding the following line to your &lt;span class="sb"&gt;`&lt;/span&gt;Cargo.toml&lt;span class="sb"&gt;`&lt;/span&gt; file:

&lt;span class="o"&gt;[&lt;/span&gt;dependencies]
kubernetes &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.18"&lt;/span&gt;

5. Create a new Rust file &lt;span class="k"&gt;for &lt;/span&gt;your Kubernetes deployment, such as &lt;span class="sb"&gt;`&lt;/span&gt;deployment.rs&lt;span class="sb"&gt;`&lt;/span&gt;, and add the following code to it:

use kubernetes::api::Api&lt;span class="p"&gt;;&lt;/span&gt;
use kubernetes::config::Configuration&lt;span class="p"&gt;;&lt;/span&gt;
use kubernetes::apis::CoreApi&lt;span class="p"&gt;;&lt;/span&gt;

fn main&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;let &lt;/span&gt;config &lt;span class="o"&gt;=&lt;/span&gt; Configuration::new&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"https://&amp;lt;your-kubernetes-cluster&amp;gt;"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nb"&gt;let &lt;/span&gt;client &lt;span class="o"&gt;=&lt;/span&gt; Api::all&lt;span class="o"&gt;(&lt;/span&gt;config&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nb"&gt;let &lt;/span&gt;pod &lt;span class="o"&gt;=&lt;/span&gt; client.create_namespaced_pod&lt;span class="o"&gt;(&lt;/span&gt;
        &lt;span class="s2"&gt;"default"&lt;/span&gt;,
        &amp;amp;Api::Pod&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"my-rust-deployment"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;,
        None,
    &lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

Replace &lt;span class="sb"&gt;`&lt;/span&gt;&amp;lt;your-kubernetes-cluster&amp;gt;&lt;span class="sb"&gt;`&lt;/span&gt; with the URL of your Kubernetes cluster.
6. Compile your Rust code by running the following &lt;span class="nb"&gt;command&lt;/span&gt;:

cargo build &lt;span class="nt"&gt;--release&lt;/span&gt;
7. Run your Rust code by running the following &lt;span class="nb"&gt;command&lt;/span&gt;:

./target/release/deployment

This will deploy a new Pod with Rust inside the &lt;span class="sb"&gt;`&lt;/span&gt;default&lt;span class="sb"&gt;`&lt;/span&gt; namespace of your Kubernetes cluster. You can view the status of your Pod using the &lt;span class="sb"&gt;`&lt;/span&gt;kubectl get pods&lt;span class="sb"&gt;`&lt;/span&gt; command.

Note that this is a basic example of how to deploy a Pod with Rust inside a Kubernetes cluster. There are many other options and features available &lt;span class="k"&gt;in &lt;/span&gt;Rust, so you can explore those as
well

&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; Send a message &lt;span class="o"&gt;(&lt;/span&gt;/? &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;help&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Canonical Kubernetes can also be installed inside an LXD container using the profile described above, as this detailed documentation explains…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://documentation.ubuntu.com/canonical-kubernetes/main/howto/install/lxd/"&gt;Install Canonical Kubernetes in LXD&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not to mention that Ollama, together with a user-friendly graphical interface in Open WebUI, could be installed via Snap or Helm:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ubuntu.com//blog/deploying-open-language-models-on-ubuntu"&gt;Deploying Open Language Models on Ubuntu | Ubuntu&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.openwebui.com/getting-started/installation/"&gt;Alternative Installation | Open WebUI&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
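&lt;p&gt;A sketch of the Helm route, assuming the community chart repository referenced in the Open WebUI installation docs (repository URL and chart values should be verified there):&lt;/p&gt;

```shell
# Sketch: install Open WebUI via its community Helm chart
# (repository URL assumed; verify against the Open WebUI installation docs)
helm repo add open-webui https://helm.openwebui.com/
helm repo update
helm install open-webui open-webui/open-webui --namespace open-webui --create-namespace
# Pointing Open WebUI at the existing in-cluster Ollama service
# (ollama.ollama.svc:11434) is done through the chart's values; see its docs.
```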

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Zjpe26-D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/605/1%2AfHy6aM2InvWnyH3JwlZISA.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Zjpe26-D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/605/1%2AfHy6aM2InvWnyH3JwlZISA.jpeg" alt="" width="605" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To be continued!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>containers</category>
      <category>llm</category>
      <category>ai</category>
    </item>
    <item>
      <title>k8s-snap (Canonical Kubernetes) for a simple and fast deployment of a k8s cluster…</title>
      <dc:creator>Karim</dc:creator>
      <pubDate>Sun, 25 Feb 2024 21:45:59 +0000</pubDate>
      <link>https://forem.com/deep75/k8s-snap-canonical-kubernetes-pour-un-deploiement-simple-et-rapide-dun-cluster-k8s--4hi5</link>
      <guid>https://forem.com/deep75/k8s-snap-canonical-kubernetes-pour-un-deploiement-simple-et-rapide-dun-cluster-k8s--4hi5</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AoD8pTT1WTzCB0kQzeTuR_w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AoD8pTT1WTzCB0kQzeTuR_w.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A focus on Canonical Kubernetes, a minimalist Kubernetes distribution (in the vein of k0s/k3s) delivered in a few clicks via Snap. The emphasis is on simplified operations and a hardened security posture on any infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2A7ee6sReX_IYzffBr" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2A7ee6sReX_IYzffBr"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/canonical/k8s-snap" rel="noopener noreferrer"&gt;GitHub - canonical/k8s-snap: Canonical Kubernetes is an opinionated and CNCF conformant Kubernetes operated by Snaps and Charms, which come together to bring simplified operations and an enhanced security posture on any infrastructure.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://snapcraft.io/k8s" rel="noopener noreferrer"&gt;Install Canonical Kubernetes on Linux | Snap Store&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AfzQa2CSFE0X3AXHO.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AfzQa2CSFE0X3AXHO.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F500%2F0%2AM_kQ2Y-s8x1wcvml.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F500%2F0%2AM_kQ2Y-s8x1wcvml.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this, deployment on an Ubuntu 22.04 LTS instance at OVH, which allows nested virtualization:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AbDjr5fJfPwnxaLoo4GouMA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AbDjr5fJfPwnxaLoo4GouMA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I start by installing Multipass there to launch a group of three virtual machines:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://multipass.run/" rel="noopener noreferrer"&gt;Multipass orchestrates virtual Ubuntu instances&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A0zMjJHQdIbZz-nYuroJutg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A0zMjJHQdIbZz-nYuroJutg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then the three Ubuntu 22.04 LTS VMs are launched to form the backbone of the future k8s cluster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AvHYLpHsjeUyDllEXqyZZ6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AvHYLpHsjeUyDllEXqyZZ6g.png"&gt;&lt;/a&gt;&lt;/p&gt;
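&lt;p&gt;The launch step can be sketched as follows (VM names and sizes are illustrative; the &lt;code&gt;--memory&lt;/code&gt; flag applies to recent Multipass releases):&lt;/p&gt;

```shell
# Sketch: three Ubuntu 22.04 VMs as the future cluster nodes
# (names and resource sizes are illustrative choices, not from the article)
multipass launch 22.04 --name k8s-1 --cpus 2 --memory 4G --disk 20G
multipass launch 22.04 --name k8s-2 --cpus 2 --memory 4G --disk 20G
multipass launch 22.04 --name k8s-3 --cpus 2 --memory 4G --disk 20G
multipass list
```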

&lt;p&gt;In the first virtual machine, which will host the control plane, k8s-snap is installed quickly via Snap:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AixipmASq82vKfCIIL60Ugw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AixipmASq82vKfCIIL60Ugw.png"&gt;&lt;/a&gt;&lt;/p&gt;
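&lt;p&gt;A minimal sketch of the bootstrap, following the Canonical Kubernetes snap workflow (exact flags and channel may vary with the snap revision):&lt;/p&gt;

```shell
# Sketch: install and bootstrap Canonical Kubernetes on the control-plane VM
sudo snap install k8s --classic
sudo k8s bootstrap
sudo k8s status --wait-ready
```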

&lt;p&gt;The k8s cluster is initialized and ready to attach a worker node:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AcIyiP_B1WrqnxvtQdCv-dw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AcIyiP_B1WrqnxvtQdCv-dw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To do so, a token is generated:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Ax3TTSQqk3tSa91CSDQGR1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Ax3TTSQqk3tSa91CSDQGR1g.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The new worker node can then be attached to the cluster using this previously generated token:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A1RU96f0nODsKUVB1j8r-Pw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A1RU96f0nODsKUVB1j8r-Pw.png"&gt;&lt;/a&gt;&lt;/p&gt;
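&lt;p&gt;The token-generation and join steps above can be sketched as follows (node name illustrative; commands follow the Canonical Kubernetes documentation and may differ between snap revisions):&lt;/p&gt;

```shell
# Sketch: on the control-plane node, issue a join token for a worker
sudo k8s get-join-token k8s-2 --worker

# Sketch: on the worker VM, join the cluster with the token printed above
sudo k8s join-cluster "$TOKEN"
```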

&lt;p&gt;Attaching another worker node with the same token:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AbGt2I6O53u_DRct2yurbVQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AbGt2I6O53u_DRct2yurbVQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The worker nodes are then operational in the cluster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AZEy77_KktatLVUC1V0MpnQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AZEy77_KktatLVUC1V0MpnQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The kubeconfig file can be retrieved in order to use the kubectl client:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AtkgG2ODW02nnnpGCq61reg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AtkgG2ODW02nnnpGCq61reg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AGJd2BdQuYLgIXrS7r8XsSg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AGJd2BdQuYLgIXrS7r8XsSg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Quickly launching an Ubuntu Pod:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Akj5r3sB_gulqF5XY_lvSkg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Akj5r3sB_gulqF5XY_lvSkg.png"&gt;&lt;/a&gt;&lt;/p&gt;
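&lt;p&gt;A minimal sketch of such a throwaway pod (pod name and image tag are illustrative):&lt;/p&gt;

```shell
# Sketch: a disposable Ubuntu pod kept alive, then an interactive shell into it
kubectl run ubuntu --image=ubuntu:22.04 --restart=Never -- sleep infinity
kubectl exec -it ubuntu -- bash
```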

&lt;p&gt;which will be used here to launch Ollama:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ollama/ollama" rel="noopener noreferrer"&gt;GitHub - ollama/ollama: Get up and running with Llama 2, Mistral, Gemma, and other large language models.&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Ollama is a platform that lets users run large language models (LLMs) such as Mistral, Gemma, Llama 2, and others directly on their personal computers, rather than relying solely on cloud services. It makes it possible to download pre-trained models compatible with several operating systems, including macOS, Linux, and preview builds for Windows. Users can also customize these models and build their own applications on top of them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AJG7emGIeQDukzD7uGLdx_A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AJG7emGIeQDukzD7uGLdx_A.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A44_Mm1XHDdTa8q6J1-gmgQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A44_Mm1XHDdTa8q6J1-gmgQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here without a GPU, in CPU-only mode, using small LLMs such as the very recent Gemma:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.lemondeinformatique.fr/actualites/lire-google-devoile-gemma-et-ses-2-modeles-ia-ouverts-93031.html" rel="noopener noreferrer"&gt;Google dévoile Gemma et ses 2 modèles IA ouverts - Le Monde Informatique&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.google/technology/developers/gemma-open-models/" rel="noopener noreferrer"&gt;Gemma: Introducing new state-of-the-art open models&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ollama.com/library/gemma" rel="noopener noreferrer"&gt;gemma&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ARucrkioq8Ct7-fxX3qVB5Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ARucrkioq8Ct7-fxX3qVB5Q.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;with a quick test query:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AGHJ4Yaoc0goI96j5IUSmOA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AGHJ4Yaoc0goI96j5IUSmOA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AtRrNA4dtBlQLg0kmS8NpgA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AtRrNA4dtBlQLg0kmS8NpgA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The same thing with another LLM:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ollama.com/library/qwen" rel="noopener noreferrer"&gt;qwen&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/Qwen" rel="noopener noreferrer"&gt;Qwen (Qwen)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AgVl9ha5nsSFkq9gaPs_12A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AgVl9ha5nsSFkq9gaPs_12A.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Aca5b-kY_okmQAAZYWn8hjg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Aca5b-kY_okmQAAZYWn8hjg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;with low resource consumption…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ALGRnRRnYeYFmQw50Oe-eEg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ALGRnRRnYeYFmQw50Oe-eEg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Canonical Kubernetes is thus a very fast and simple way to deploy a fully conformant Kubernetes cluster.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This distribution adds the missing pieces (for example Ingress, DNS and networking) for a “zero-ops” experience. A single-node cluster can be deployed with two commands, and new nodes can be joined with just two more…&lt;/p&gt;
&lt;/blockquote&gt;
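&lt;p&gt;As an illustration of the “two commands” claim, here is a sketch assuming the MicroK8s snap, which is how Canonical packages this zero-ops experience (channel defaults and node addresses will vary):&lt;/p&gt;

```shell
# Single-node cluster in two commands:
#   sudo snap install microk8s --classic
#   sudo microk8s status --wait-ready
# Adding a node in two more: on the first machine,
#   sudo microk8s add-node      # prints a "microk8s join IP:25000/TOKEN" command
# then run the printed join command on the new node.
```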

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F642%2F0%2A1Wfz2CBsHS_aIaVn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F642%2F0%2A1Wfz2CBsHS_aIaVn.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To be continued!&lt;/p&gt;

</description>
      <category>llm</category>
      <category>kubernetes</category>
      <category>ubuntu</category>
      <category>ai</category>
    </item>
    <item>
      <title>Déployer un cluster Nomad et Consul très rapidement avec hashi-up …</title>
      <dc:creator>Karim</dc:creator>
      <pubDate>Sat, 11 Mar 2023 17:16:05 +0000</pubDate>
      <link>https://forem.com/deep75/deployer-un-cluster-nomad-et-consul-tres-rapidement-avec-hashi-up--3bn6</link>
      <guid>https://forem.com/deep75/deployer-un-cluster-nomad-et-consul-tres-rapidement-avec-hashi-up--3bn6</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gL4qVF2u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/855/0%2AqF6HFpBVho5EJxbq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gL4qVF2u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/855/0%2AqF6HFpBVho5EJxbq.png" alt="" width="855" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hashi-up is a lightweight utility that installs HashiCorp Consul, Nomad or Vault on any remote Linux host.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/deep75/load-balancing-avec-haproxy-nomad-et-consul--1ga4"&gt;Load Balancing avec HAProxy, Nomad et Consul …&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;All you need is SSH access and the hashi-up binary to build your cluster.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://github.com/jsiebens/hashi-up"&gt;GitHub - jsiebens/hashi-up: bootstrap HashiCorp Consul, Nomad, or Vault over SSH &amp;lt; 1 minute&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The tool is written in Go and is compiled for Linux, Windows, macOS and even the Raspberry Pi.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://johansiebens.dev/posts/2020/08/building-a-nomad-cluster-on-raspberry-pi-running-ubuntu-server/"&gt;Building a Nomad cluster on Raspberry Pi running Ubuntu server&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This project is heavily inspired by the work of Alex Ellis, who created k3sup, a tool to go from zero to KUBECONFIG with k3s.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://deep75.medium.com/openstack-sur-lxd-avec-juju-et-k3sup-dans-phoenixnap-e5867a487497"&gt;OpenStack sur LXD avec Juju et k3sup dans phoenixNAP …&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/deep75/acceder-a-lensemble-du-cluster-kubernetes-avec-tailscale--2eci"&gt;Accéder à l’ensemble du cluster Kubernetes avec Tailscale …&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/alexellis/k3sup"&gt;GitHub - alexellis/k3sup: bootstrap K3s over SSH in &amp;lt; 60s 🚀&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Implementation, starting with the launch of an Ubuntu 22.04 LTS server that allows nested virtualization on DigitalOcean:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;curl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-X&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;POST&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-H&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Content-Type:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;application/json'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;-H&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Authorization:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Bearer&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'$TOKEN''&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;-d&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"minione"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"size"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"m-4vcpu-32gb"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"region"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"fra1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"image"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"ubuntu-22-04-x64"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"monitoring"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"vpc_uuid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"8b8c0544-e7b6-4d0a-977d-4406ea518f7a"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"https://api.digitalocean.com/v2/droplets"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
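&lt;p&gt;The request body can be checked locally before posting; a small sketch (the vpc_uuid is specific to the account used here, and `$TOKEN` is your DigitalOcean API token):&lt;/p&gt;

```shell
# Write the droplet-creation payload to a file (values taken from the request above).
printf '%s' '{"name":"minione","size":"m-4vcpu-32gb","region":"fra1","image":"ubuntu-22-04-x64","monitoring":true,"vpc_uuid":"8b8c0544-e7b6-4d0a-977d-4406ea518f7a"}' > /tmp/droplet.json

# Validate it as JSON before sending.
python3 -m json.tool /tmp/droplet.json > /dev/null; echo "payload OK"

# It can then be sent with:
#   curl -X POST -H "Authorization: Bearer $TOKEN" -d @/tmp/droplet.json \
#     "https://api.digitalocean.com/v2/droplets"
```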



&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/blog/introducing-premium-cpu-optimized-droplets"&gt;Introducing Premium CPU-Optimized Droplets | DigitalOcean&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/"&gt;DigitalOcean | The Cloud for Builders&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LoGwzHbU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AvSkuuqMmE14Wi4Z3lAZOyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LoGwzHbU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AvSkuuqMmE14Wi4Z3lAZOyg.png" alt="" width="880" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am going to install OpenNebula on it via the miniONE script:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qmXaPFMm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A-5ao6A9ExiYDx4h_huzocA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qmXaPFMm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A-5ao6A9ExiYDx4h_huzocA.png" alt="" width="880" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://opennebula.io/"&gt;OpenNebula - Open Source Cloud &amp;amp; Edge Computing Platform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/OpenNebula/minione"&gt;GitHub - OpenNebula/minione: Easy to use deployment tool for an OpenNebula evaluation environment&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/deep75/opennebula-6-0-mutara-imbrique-dans-lxd-application-au-deploiement-d-un-cluster-microk8s-1-21-453m"&gt;OpenNebula 6.0 “Mutara” imbriqué dans LXD : application au déploiement d’un cluster MicroK8s 1.21 …&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@minione:~# wget &lt;span class="nt"&gt;-c&lt;/span&gt; https://raw.githubusercontent.com/OpenNebula/minione/master/minione
&lt;span class="nt"&gt;--2023-03-11&lt;/span&gt; 14:21:46-- https://raw.githubusercontent.com/OpenNebula/minione/master/minione
Resolving raw.githubusercontent.com &lt;span class="o"&gt;(&lt;/span&gt;raw.githubusercontent.com&lt;span class="o"&gt;)&lt;/span&gt;... 185.199.110.133, 185.199.109.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com &lt;span class="o"&gt;(&lt;/span&gt;raw.githubusercontent.com&lt;span class="o"&gt;)&lt;/span&gt;|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 51458 &lt;span class="o"&gt;(&lt;/span&gt;50K&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;text/plain]
Saving to: ‘minione’

minione 100%[&lt;span class="o"&gt;=============================================================================================&amp;gt;]&lt;/span&gt; 50.25K &lt;span class="nt"&gt;--&lt;/span&gt;.-KB/s &lt;span class="k"&gt;in &lt;/span&gt;0s      

2023-03-11 14:21:46 &lt;span class="o"&gt;(&lt;/span&gt;149 MB/s&lt;span class="o"&gt;)&lt;/span&gt; - ‘minione’ saved &lt;span class="o"&gt;[&lt;/span&gt;51458/51458]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A fully automated deployment, kicked off very quickly by running the script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@minione:~# bash minione

&lt;span class="c"&gt;### Checks &amp;amp; detection&lt;/span&gt;
Checking augeas is installed SKIP will try to &lt;span class="nb"&gt;install
&lt;/span&gt;Checking AppArmor SKIP will try to modify
Checking &lt;span class="k"&gt;for &lt;/span&gt;present ssh key SKIP
Checking &lt;span class="o"&gt;(&lt;/span&gt;iptables|netfilter&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nt"&gt;-persistent&lt;/span&gt; are installed SKIP will try to &lt;span class="nb"&gt;install
&lt;/span&gt;Checking docker is installed SKIP will try to &lt;span class="nb"&gt;install
&lt;/span&gt;Checking python3-pip is installed SKIP will try to &lt;span class="nb"&gt;install
&lt;/span&gt;Checking ansible SKIP will try to &lt;span class="nb"&gt;install
&lt;/span&gt;Checking terraform SKIP will try to &lt;span class="nb"&gt;install
&lt;/span&gt;Checking unzip is installed SKIP will try to &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="c"&gt;### Main deployment steps:&lt;/span&gt;
Install OpenNebula frontend version 6.6
Install Terraform
Install Docker
Configure bridge minionebr with IP 172.16.100.1/24
Enable NAT over eth0
Modify AppArmor
Install OpenNebula KVM node
Export appliance and update VM template
Install augeas-tools iptables-persistent netfilter-persistent python3-pip unzip
Install pip &lt;span class="s1"&gt;'ansible==2.9.9'&lt;/span&gt;

Do you agree? &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no]:
&lt;span class="nb"&gt;yes&lt;/span&gt;

&lt;span class="c"&gt;### Installation&lt;/span&gt;
Updating APT cache OK
Install augeas-tools iptables-persistent netfilter-persistent python3-pip unzip OK
Updating PIP OK
Install from PyPI &lt;span class="s1"&gt;'ansible==2.9.9'&lt;/span&gt; OK
Creating bridge interface minionebr OK
Bring bridge interfaces up OK
Enabling IPv4 forward OK
Persisting IPv4 forward OK
Configuring NAT using iptables OK
Saving iptables changes OK
Installing DNSMasq OK
Starting DNSMasq OK
Configuring repositories OK
Updating APT cache OK
Installing OpenNebula packages OK
Installing opennebula-provision package OK
Installing TerraForm OK
Create docker packages repository OK
Install docker OK
Start docker service OK
Enable docker service OK
Installing OpenNebula kvm node packages OK
Updating AppArmor OK
Disable default libvirtd networking OK
Restart libvirtd OK

&lt;span class="c"&gt;### Configuration&lt;/span&gt;
Generating ssh keypair &lt;span class="k"&gt;in&lt;/span&gt; /root/.ssh-oneprovision/id_rsa OK
Add oneadmin to docker group OK
Update network hooks OK
Switching OneGate endpoint &lt;span class="k"&gt;in &lt;/span&gt;oned.conf OK
Switching OneGate endpoint &lt;span class="k"&gt;in &lt;/span&gt;onegate-server.conf OK
Switching keep_empty_bridge on &lt;span class="k"&gt;in &lt;/span&gt;OpenNebulaNetwork.conf OK
Switching scheduler interval &lt;span class="k"&gt;in &lt;/span&gt;oned.conf OK
Setting initial password &lt;span class="k"&gt;for &lt;/span&gt;current user and oneadmin OK
Changing WebUI to listen on port 80 OK
Switching FireEdge public endpoint OK
Starting OpenNebula services OK
Enabling OpenNebula services OK
Add ssh key to oneadmin user OK
Update ssh configs to allow VM addresses reusing OK
Ensure own &lt;span class="nb"&gt;hostname &lt;/span&gt;is resolvable OK
Checking OpenNebula is working OK
Disabling ssh from virtual network OK
Adding localhost ssh key to known_hosts OK
Testing ssh connection to localhost OK
Updating datastores template OK
Creating KVM host OK
Restarting OpenNebula OK
Creating virtual network OK
Exporting &lt;span class="o"&gt;[&lt;/span&gt;Alpine Linux 3.17] from Marketplace to &lt;span class="nb"&gt;local &lt;/span&gt;datastore OK
Waiting &lt;span class="k"&gt;until &lt;/span&gt;the image is ready OK
Updating VM template OK

&lt;span class="c"&gt;### Report&lt;/span&gt;
OpenNebula 6.6 was installed
Sunstone is running on:
  http://164.90.215.231/
FireEdge is running on:
  http://164.90.215.231:2616/
Use following to login:
  user: oneadmin
  password: SyEnJiYuWC
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
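&lt;p&gt;A few post-install checks can be sketched with OpenNebula’s standard CLI, run on the frontend as the oneadmin user (the expected states follow from the report above):&lt;/p&gt;

```shell
# Sketch: verify the single-node cloud that miniONE just reported.
#   su - oneadmin -c "onehost list"    # the local KVM host should be in the "on" state
#   su - oneadmin -c "oneimage list"   # the exported Alpine Linux 3.17 image should be READY
#   su - oneadmin -c "onevnet list"    # the virtual network created by the installer
```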



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--59KqDYm_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AieCHtpYOQ1j0HlnLPlZVKw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--59KqDYm_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AieCHtpYOQ1j0HlnLPlZVKw.png" alt="" width="880" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Loading an Ubuntu 22.04 LTS image in order to create several virtual machines:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--krpplbcI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ADuJNpDTqzPe8ttSfyFPDGA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--krpplbcI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ADuJNpDTqzPe8ttSfyFPDGA.png" alt="" width="880" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---0ULMxqE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ABqIC-QChw2U4Q2S_fLhqTg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---0ULMxqE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ABqIC-QChw2U4Q2S_fLhqTg.png" alt="" width="880" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ti442phG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AL1XW6Qyqj7DPt3yaTzc2GA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ti442phG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AL1XW6Qyqj7DPt3yaTzc2GA.png" alt="" width="880" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Launching a first virtual machine running Ubuntu:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0wKuEZfy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A_DIouHvTZI60F2pNxtd05g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0wKuEZfy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A_DIouHvTZI60F2pNxtd05g.png" alt="" width="880" height="1608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eMgyh13o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AqsoCPa-ts5RFYcb6yPG35A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eMgyh13o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AqsoCPa-ts5RFYcb6yPG35A.png" alt="" width="880" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and installing hashi-up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@minione:~# curl &lt;span class="nt"&gt;-sLS&lt;/span&gt; https://get.hashi-up.dev | sh

Downloading package https://github.com/jsiebens/hashi-up/releases/download/v0.16.0/hashi-up as /tmp/hashi-up
Download complete.

Running with sufficient permissions to attempt to move hashi-up to /usr/local/bin
New version of hashi-up installed to /usr/local/bin
Version: 0.16.0
Git Commit: b062f5d

root@minione:~# hashi-up version
Version: 0.16.0
Git Commit: b062f5d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;in order to set up a Consul server:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z30xH9M0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/500/0%2A-lsWslDmXvzY6WOo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z30xH9M0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/500/0%2A-lsWslDmXvzY6WOo.jpg" alt="" width="500" height="667"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/jsiebens/hashi-up/blob/main/docs/consul.md"&gt;hashi-up/consul.md at main · jsiebens/hashi-up&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://developer.hashicorp.com/consul/downloads"&gt;Install | Consul | HashiCorp Developer&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@minione:~# &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;172.16.100.2

root@minione:~# hashi-up consul &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-addr&lt;/span&gt; &lt;span class="nv"&gt;$IP&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-user&lt;/span&gt; root &lt;span class="nt"&gt;--server&lt;/span&gt; &lt;span class="nt"&gt;--client-addr&lt;/span&gt; 0.0.0.0 &lt;span class="nt"&gt;--ssh-target-key&lt;/span&gt; .ssh/id_rsa &lt;span class="nt"&gt;--version&lt;/span&gt; 1.15.1

&lt;span class="o"&gt;[&lt;/span&gt;INFO] Uploading generated Consul configuration ...
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Installing Consul ...
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; User &lt;span class="s1"&gt;'consul'&lt;/span&gt; already exists, will not create again
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Copying configuration files
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Downloading consul_1.15.1_linux_amd64.zip
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Downloading consul_1.15.1_SHA256SUMS
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Verifying downloaded consul_1.15.1_linux_amd64.zip
consul_1.15.1_linux_amd64.zip: OK
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Unpacking consul_1.15.1_linux_amd64.zip
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Adding systemd service file /etc/systemd/system/consul.service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Enabling systemd service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Starting systemd service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
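&lt;p&gt;A quick sanity check of the new server can be sketched with standard Consul commands, run on the target VM (the IP is the one exported above):&lt;/p&gt;

```shell
# Sketch: confirm the agent is up and, as the only server, has elected itself leader.
#   consul members                                     # the node should be listed as "alive"
#   curl -s http://172.16.100.2:8500/v1/status/leader  # returns the leader address, e.g. "172.16.100.2:8300"
```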



&lt;p&gt;I use the “poor man’s VPN”, sshuttle, to connect to the Consul server dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://sshuttle.readthedocs.io/en/stable/usage.html"&gt;Usage - sshuttle 1.1.1 documentation&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;sshuttle &lt;span class="nt"&gt;--dns&lt;/span&gt; &lt;span class="nt"&gt;-NHr&lt;/span&gt; root@164.90.215.231 0.0.0.0/0  ✔  base   15:45:09  
c : Connected to server.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This lets me reach the Consul server directly…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WJqNXqyB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ardoxv5-AKu3HFKN4ZEHDnA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WJqNXqyB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ardoxv5-AKu3HFKN4ZEHDnA.png" alt="" width="880" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--87qcRIic--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A7v2kScc-Jm6Fc1mQmhCvZw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--87qcRIic--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A7v2kScc-Jm6Fc1mQmhCvZw.png" alt="" width="880" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KcjVEwMA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AAl7dC8S12877LYLHLYzLrA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KcjVEwMA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AAl7dC8S12877LYLHLYzLrA.png" alt="" width="880" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I can then attach a Nomad server to it with hashi-up:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.hashicorp.com/nomad/downloads"&gt;Install | Nomad | HashiCorp Developer&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@minione:~# hashi-up nomad &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-addr&lt;/span&gt; &lt;span class="nv"&gt;$IP&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-user&lt;/span&gt; root &lt;span class="nt"&gt;--server&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-key&lt;/span&gt; .ssh/id_rsa &lt;span class="nt"&gt;--version&lt;/span&gt; 1.5.0

&lt;span class="o"&gt;[&lt;/span&gt;INFO] Uploading generated Nomad configuration ...
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Installing Nomad ...
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Copying configuration files
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Downloading nomad_1.5.0_linux_amd64.zip
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Downloading nomad_1.5.0_SHA256SUMS
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Verifying downloaded nomad_1.5.0_linux_amd64.zip
nomad_1.5.0_linux_amd64.zip: OK
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Unpacking nomad_1.5.0_linux_amd64.zip
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Adding systemd service file /etc/systemd/system/nomad.service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Enabling systemd service
Created symlink /etc/systemd/system/multi-user.target.wants/nomad.service → /etc/systemd/system/nomad.service.
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Starting systemd service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
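&lt;p&gt;The same kind of check can be sketched for Nomad with its own CLI, pointed at the server address used above:&lt;/p&gt;

```shell
# Sketch: confirm the Nomad server is up.
#   nomad server members -address=http://172.16.100.2:4646   # the server should be listed as "alive"
#   nomad node status -address=http://172.16.100.2:4646      # empty until clients join
```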



&lt;p&gt;A graphical interface is also available for Nomad at this stage…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jzHfIBYt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AS66J6cN1oEpmXGdLaEQxPQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jzHfIBYt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AS66J6cN1oEpmXGdLaEQxPQ.png" alt="" width="880" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6Vq8VA6c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ALu2janbxBp8bzQoEau7KaA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6Vq8VA6c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ALu2janbxBp8bzQoEau7KaA.png" alt="" width="880" height="553"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Launching the client side of Consul and Nomad on three more Ubuntu 22.04 LTS virtual machines in OpenNebula:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UDozp9X4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ahq7dRU1vkhUN1uNjXoTbgg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UDozp9X4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ahq7dRU1vkhUN1uNjXoTbgg.png" alt="" width="880" height="1608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HIBlvJv7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AjlgPVjdvL4e8BIh2mTN-pw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HIBlvJv7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AjlgPVjdvL4e8BIh2mTN-pw.png" alt="" width="880" height="440"&gt;&lt;/a&gt;&lt;/p&gt;
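&lt;p&gt;For the client side, hashi-up offers the same one-liner installs; a sketch assuming three VMs at hypothetical addresses 172.16.100.3–5, joining the server installed above at 172.16.100.2:&lt;/p&gt;

```shell
# Sketch: enroll each VM as a Consul client (joining the server) and a Nomad client.
for ip in 172.16.100.3 172.16.100.4 172.16.100.5; do
  hashi-up consul install --ssh-target-addr "$ip" --ssh-target-user root \
    --ssh-target-key .ssh/id_rsa --retry-join 172.16.100.2 --version 1.15.1
  hashi-up nomad install --ssh-target-addr "$ip" --ssh-target-user root \
    --ssh-target-key .ssh/id_rsa --client --version 1.5.0
done
```

Without the `--server` flag (and with `--client` for Nomad), hashi-up configures the agents in client mode, matching the server installs shown earlier.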

&lt;p&gt;Customizing these three virtual machines with pyinfra, another Python-based alternative to Ansible, installed via pipx:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pyinfra.com/"&gt;pyinfra&lt;/a&gt;&lt;/p&gt;
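&lt;p&gt;pyinfra runs ad-hoc commands and deploys over plain SSH; a minimal invocation might look like this (the host IP and key path are the hypothetical values from this setup):&lt;/p&gt;

```shell
# Sketch: run an ad-hoc command on one of the VMs through pyinfra.
#   pyinfra 172.16.100.3 --ssh-user root --ssh-key .ssh/id_rsa exec -- uname -a
```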

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6H3XZVIT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AHDDZzTyXkF4wBXg79GqCYA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6H3XZVIT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AHDDZzTyXkF4wBXg79GqCYA.png" alt="" width="880" height="598"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@minione:~# pipx &lt;span class="nb"&gt;install &lt;/span&gt;pyinfra

  installed package pyinfra 2.6.2, installed using Python 3.10.6
  These apps are now globally available
    - pyinfra
⚠️ Note: &lt;span class="s1"&gt;'/root/.local/bin'&lt;/span&gt; is not on your PATH environment variable. These apps will not be globally accessible &lt;span class="k"&gt;until &lt;/span&gt;your PATH is updated. Run &lt;span class="sb"&gt;`&lt;/span&gt;pipx ensurepath&lt;span class="sb"&gt;`&lt;/span&gt; to
    automatically add it, or manually modify your PATH &lt;span class="k"&gt;in &lt;/span&gt;your shell config file &lt;span class="o"&gt;(&lt;/span&gt;i.e. ~/.bashrc&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt; ✨ 🌟 ✨

root@minione:~# pipx ensurepath

Success! Added /root/.local/bin to the PATH environment variable.

Consider adding shell completions &lt;span class="k"&gt;for &lt;/span&gt;pipx. Run &lt;span class="s1"&gt;'pipx completions'&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;instructions.

You will need to open a new terminal or re-login &lt;span class="k"&gt;for &lt;/span&gt;the PATH changes to take effect.

Otherwise pipx is ready to go! ✨ 🌟 ✨

root@minione:~# &lt;span class="nb"&gt;source&lt;/span&gt; .bashrc

root@minione:~# pyinfra &lt;span class="nt"&gt;--help&lt;/span&gt;

Usage: pyinfra &lt;span class="o"&gt;[&lt;/span&gt;OPTIONS] INVENTORY OPERATIONS...

  pyinfra manages the state of one or more servers. It can be used &lt;span class="k"&gt;for
  &lt;/span&gt;app/service deployment, config management and ad-hoc &lt;span class="nb"&gt;command &lt;/span&gt;execution.

  Documentation: pyinfra.readthedocs.io

  &lt;span class="c"&gt;# INVENTORY&lt;/span&gt;

  + a file &lt;span class="o"&gt;(&lt;/span&gt;inventory.py&lt;span class="o"&gt;)&lt;/span&gt;
  + &lt;span class="nb"&gt;hostname&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;host.net&lt;span class="o"&gt;)&lt;/span&gt;
  + Comma separated hostnames:
    host-1.net,host-2.net,@local

  &lt;span class="c"&gt;# OPERATIONS&lt;/span&gt;

  &lt;span class="c"&gt;# Run one or more deploys against the inventory&lt;/span&gt;
  pyinfra INVENTORY deploy_web.py &lt;span class="o"&gt;[&lt;/span&gt;deploy_db.py]...

  &lt;span class="c"&gt;# Run a single operation against the inventory&lt;/span&gt;
  pyinfra INVENTORY server.user pyinfra &lt;span class="nv"&gt;home&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/home/pyinfra

  &lt;span class="c"&gt;# Execute an arbitrary command against the inventory&lt;/span&gt;
  pyinfra INVENTORY &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"hello world"&lt;/span&gt;

  &lt;span class="c"&gt;# Run one or more facts against the inventory&lt;/span&gt;
  pyinfra INVENTORY fact server.LinuxName &lt;span class="o"&gt;[&lt;/span&gt;server.Users]...
  pyinfra INVENTORY fact files.File &lt;span class="nv"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/path/to/file...

  &lt;span class="c"&gt;# Debug the inventory hosts and data&lt;/span&gt;
  pyinfra INVENTORY debug-inventory

Options:
  &lt;span class="nt"&gt;-v&lt;/span&gt; Print meta &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nt"&gt;-v&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;, input &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nt"&gt;-vv&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; and output
                                  &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nt"&gt;-vvv&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
  &lt;span class="nt"&gt;--dry&lt;/span&gt; Don&lt;span class="s1"&gt;'t execute operations on the target
                                  hosts.
  --limit TEXT Restrict the target hosts by name and group
                                  name.
  --fail-percent INTEGER % of hosts that need to fail before exiting
                                  early.
  --data TEXT Override data values, format key=value.
  --group-data TEXT Paths to load additional group data from
                                  (overrides matching keys).
  --config TEXT Specify config file to use (default:
                                  config.py).
  --chdir TEXT Set the working directory before executing.
  --sudo Whether to execute operations with sudo.
  --sudo-user TEXT Which user to sudo when sudoing.
  --use-sudo-password Whether to use a password with sudo.
  --su-user TEXT Which user to su to.
  --shell-executable TEXT Shell to use (ex: "sh", "cmd", "ps").
  --parallel INTEGER Number of operations to run in parallel.
  --no-wait Don'&lt;/span&gt;t &lt;span class="nb"&gt;wait &lt;/span&gt;between operations &lt;span class="k"&gt;for &lt;/span&gt;hosts.
  &lt;span class="nt"&gt;--serial&lt;/span&gt; Run operations &lt;span class="k"&gt;in &lt;/span&gt;serial, host by host.
  &lt;span class="nt"&gt;--ssh-user&lt;/span&gt;, &lt;span class="nt"&gt;--user&lt;/span&gt; TEXT SSH user to connect as.
  &lt;span class="nt"&gt;--ssh-port&lt;/span&gt;, &lt;span class="nt"&gt;--port&lt;/span&gt; INTEGER SSH port to connect to.
  &lt;span class="nt"&gt;--ssh-key&lt;/span&gt;, &lt;span class="nt"&gt;--key&lt;/span&gt; PATH SSH Private key filename.
  &lt;span class="nt"&gt;--ssh-key-password&lt;/span&gt;, &lt;span class="nt"&gt;--key-password&lt;/span&gt; TEXT
                                  SSH Private key password.
  &lt;span class="nt"&gt;--ssh-password&lt;/span&gt;, &lt;span class="nt"&gt;--password&lt;/span&gt; TEXT
                                  SSH password.
  &lt;span class="nt"&gt;--winrm-username&lt;/span&gt; TEXT WINRM user to connect as.
  &lt;span class="nt"&gt;--winrm-password&lt;/span&gt; TEXT WINRM password.
  &lt;span class="nt"&gt;--winrm-port&lt;/span&gt; TEXT WINRM port to connect to.
  &lt;span class="nt"&gt;--winrm-transport&lt;/span&gt; TEXT WINRM transport &lt;span class="k"&gt;for &lt;/span&gt;use.
  &lt;span class="nt"&gt;--support&lt;/span&gt; Print useful information &lt;span class="k"&gt;for &lt;/span&gt;support and
                                  exit.
  &lt;span class="nt"&gt;--quiet&lt;/span&gt; Hide most pyinfra output.
  &lt;span class="nt"&gt;--debug&lt;/span&gt; Print debug info.
  &lt;span class="nt"&gt;--debug-facts&lt;/span&gt; Print facts after generating operations and
                                  exit.
  &lt;span class="nt"&gt;--debug-operations&lt;/span&gt; Print operations after generating and exit.
  &lt;span class="nt"&gt;--version&lt;/span&gt; Show the version and exit.
  &lt;span class="nt"&gt;--help&lt;/span&gt; Show this message and exit.

root@minione:~# pyinfra 172.16.100.4 &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; hostnamectl set-hostname client0

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Loading config...

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Loading inventory...

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Connecting to hosts...
    No host key &lt;span class="k"&gt;for &lt;/span&gt;172.16.100.4 found &lt;span class="k"&gt;in &lt;/span&gt;known_hosts
    &lt;span class="o"&gt;[&lt;/span&gt;172.16.100.4] Connected
    &lt;span class="o"&gt;[&lt;/span&gt;172.16.100.4] Ready: shell

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Proposed changes:
    Ungrouped:
    &lt;span class="o"&gt;[&lt;/span&gt;172.16.100.4] Operations: 1 Change: 1 No change: 0   

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Beginning operation run...
&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Starting operation: Server/Shell &lt;span class="o"&gt;(&lt;/span&gt;hostnamectl set-hostname client0&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;[&lt;/span&gt;172.16.100.4] Success

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Results:
    Ungrouped:
    &lt;span class="o"&gt;[&lt;/span&gt;172.16.100.4] Changed: 1 No change: 0 Errors: 0   


root@minione:~# pyinfra 172.16.100.3 &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; hostnamectl set-hostname client1

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Loading config...

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Loading inventory...

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Connecting to hosts...
    No host key &lt;span class="k"&gt;for &lt;/span&gt;172.16.100.3 found &lt;span class="k"&gt;in &lt;/span&gt;known_hosts
    &lt;span class="o"&gt;[&lt;/span&gt;172.16.100.3] Connected
    &lt;span class="o"&gt;[&lt;/span&gt;172.16.100.3] Ready: shell

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Proposed changes:
    Ungrouped:
    &lt;span class="o"&gt;[&lt;/span&gt;172.16.100.3] Operations: 1 Change: 1 No change: 0   

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Beginning operation run...
&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Starting operation: Server/Shell &lt;span class="o"&gt;(&lt;/span&gt;hostnamectl set-hostname client1&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;[&lt;/span&gt;172.16.100.3] Success

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Results:
    Ungrouped:
    &lt;span class="o"&gt;[&lt;/span&gt;172.16.100.3] Changed: 1 No change: 0 Errors: 0  

root@minione:~# pyinfra 172.16.100.5 &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; hostnamectl set-hostname client2

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Loading config...

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Loading inventory...

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Connecting to hosts...
    No host key &lt;span class="k"&gt;for &lt;/span&gt;172.16.100.5 found &lt;span class="k"&gt;in &lt;/span&gt;known_hosts
    &lt;span class="o"&gt;[&lt;/span&gt;172.16.100.5] Connected
    &lt;span class="o"&gt;[&lt;/span&gt;172.16.100.5] Ready: shell

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Proposed changes:
    Ungrouped:
    &lt;span class="o"&gt;[&lt;/span&gt;172.16.100.5] Operations: 1 Change: 1 No change: 0   

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Beginning operation run...
&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Starting operation: Server/Shell &lt;span class="o"&gt;(&lt;/span&gt;hostnamectl set-hostname client2&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;[&lt;/span&gt;172.16.100.5] Success

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Results:
    Ungrouped:
    &lt;span class="o"&gt;[&lt;/span&gt;172.16.100.5] Changed: 1 No change: 0 Errors: 0  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
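&lt;p&gt;The three ad-hoc hostname runs above all follow one pattern; as an illustration, a small helper (hypothetical, not part of pyinfra itself) can generate them from an IP-to-name map taken from this session:&lt;/p&gt;

```python
# Illustrative helper: build the `pyinfra ... exec` ad-hoc commands that
# assign each client VM its hostname, instead of typing them one by one.
CLIENTS = {
    "172.16.100.4": "client0",
    "172.16.100.3": "client1",
    "172.16.100.5": "client2",
}

def hostname_commands(clients):
    """Return one pyinfra ad-hoc command per (ip, hostname) pair."""
    return [
        f"pyinfra {ip} exec -- hostnamectl set-hostname {name}"
        for ip, name in clients.items()
    ]

for cmd in hostname_commands(CLIENTS):
    print(cmd)
```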



&lt;p&gt;Using the same approach, the Docker engine is installed on these virtual machines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@minione:~# cat inventory.py 

my_hosts = ["172.16.100.4", "172.16.100.3", "172.16.100.5"]

root@minione:~# pyinfra inventory.py exec -- "curl -fsSL https://get.docker.com | sh -"

--&amp;gt; Loading config...

--&amp;gt; Loading inventory...

--&amp;gt; Connecting to hosts...
    No host key for 172.16.100.5 found in known_hosts
    No host key for 172.16.100.3 found in known_hosts
    [172.16.100.5] Connected
    No host key for 172.16.100.4 found in known_hosts
    [172.16.100.4] Connected
    [172.16.100.3] Connected
    [172.16.100.4] Ready: shell
    [172.16.100.3] Ready: shell
    [172.16.100.5] Ready: shell

--&amp;gt; Proposed changes:
    Groups: inventory / my_hosts
    [172.16.100.4] Operations: 1 Change: 1 No change: 0   
    [172.16.100.3] Operations: 1 Change: 1 No change: 0   
    [172.16.100.5] Operations: 1 Change: 1 No change: 0   

--&amp;gt; Beginning operation run...
--&amp;gt; Starting operation: Server/Shell (curl -fsSL https://get.docker.com | sh -)
[172.16.100.5] # Executing docker install script, commit: 66474034547a96caa0a25be56051ff8b726a1b28
[172.16.100.5] + sh -c apt-get update -qq &amp;gt;/dev/null
[172.16.100.4] # Executing docker install script, commit: 66474034547a96caa0a25be56051ff8b726a1b28
[172.16.100.4] + sh -c apt-get update -qq &amp;gt;/dev/null
[172.16.100.3] # Executing docker install script, commit: 66474034547a96caa0a25be56051ff8b726a1b28
[172.16.100.3] + sh -c apt-get update -qq &amp;gt;/dev/null
[172.16.100.4] + sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl &amp;gt;/dev/null
[172.16.100.5] + sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl &amp;gt;/dev/null
[172.16.100.3] + sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl &amp;gt;/dev/null
[172.16.100.5] + sh -c mkdir -p /etc/apt/keyrings &amp;amp;&amp;amp; chmod -R 0755 /etc/apt/keyrings
[172.16.100.5] + sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | gpg --dearmor --yes -o /etc/apt/keyrings/docker.gpg
[172.16.100.5] + sh -c chmod a+r /etc/apt/keyrings/docker.gpg
[172.16.100.5] + sh -c echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable" &amp;gt; /etc/apt/sources.list.d/docker.list
[172.16.100.5] + sh -c apt-get update -qq &amp;gt;/dev/null
[172.16.100.4] + sh -c mkdir -p /etc/apt/keyrings &amp;amp;&amp;amp; chmod -R 0755 /etc/apt/keyrings
[172.16.100.3] + sh -c mkdir -p /etc/apt/keyrings &amp;amp;&amp;amp; chmod -R 0755 /etc/apt/keyrings
[172.16.100.4] + sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | gpg --dearmor --yes -o /etc/apt/keyrings/docker.gpg
[172.16.100.3] + sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | gpg --dearmor --yes -o /etc/apt/keyrings/docker.gpg
[172.16.100.3] + sh -c chmod a+r /etc/apt/keyrings/docker.gpg
[172.16.100.3] + sh -c echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable" &amp;gt; /etc/apt/sources.list.d/docker.list
[172.16.100.3] + sh -c apt-get update -qq &amp;gt;/dev/null
[172.16.100.4] + sh -c chmod a+r /etc/apt/keyrings/docker.gpg
[172.16.100.4] + sh -c echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable" &amp;gt; /etc/apt/sources.list.d/docker.list
[172.16.100.4] + sh -c apt-get update -qq &amp;gt;/dev/null
[172.16.100.3] + sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce docker-ce-cli containerd.io docker-scan-plugin docker-compose-plugin docker-ce-rootless-extras docker-buildx-plugin &amp;gt;/dev/null
[172.16.100.4] + sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce docker-ce-cli containerd.io docker-scan-plugin docker-compose-plugin docker-ce-rootless-extras docker-buildx-plugin &amp;gt;/dev/null
[172.16.100.5] + sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce docker-ce-cli containerd.io docker-scan-plugin docker-compose-plugin docker-ce-rootless-extras docker-buildx-plugin &amp;gt;/dev/null
[172.16.100.5] + sh -c docker version
[172.16.100.4] + sh -c docker version
[172.16.100.3] + sh -c docker version
[172.16.100.5] Client: Docker Engine - Community
[172.16.100.5] Version: 23.0.1
[172.16.100.5] API version: 1.42
[172.16.100.5] Go version: go1.19.5
[172.16.100.5] Git commit: a5ee5b1
[172.16.100.5] Built: Thu Feb 9 19:47:01 2023
[172.16.100.5] OS/Arch: linux/amd64
[172.16.100.5] Context: default
[172.16.100.5] 
[172.16.100.5] Server: Docker Engine - Community
[172.16.100.5] Engine:
[172.16.100.5] Version: 23.0.1
[172.16.100.5] API version: 1.42 (minimum version 1.12)
[172.16.100.5] Go version: go1.19.5
[172.16.100.5] Git commit: bc3805a
[172.16.100.5] Built: Thu Feb 9 19:47:01 2023
[172.16.100.5] OS/Arch: linux/amd64
[172.16.100.5] Experimental: false
[172.16.100.5] containerd:
[172.16.100.5] Version: 1.6.18
[172.16.100.5] GitCommit: 2456e983eb9e37e47538f59ea18f2043c9a73640
[172.16.100.5] runc:
[172.16.100.5] Version: 1.1.4
[172.16.100.5] GitCommit: v1.1.4-0-g5fd4c4d
[172.16.100.5] docker-init:
[172.16.100.5] Version: 0.19.0
[172.16.100.5] GitCommit: de40ad0
[172.16.100.5] 
[172.16.100.5] ================================================================================
[172.16.100.5] 
[172.16.100.5] To run Docker as a non-privileged user, consider setting up the
[172.16.100.5] Docker daemon in rootless mode for your user:
[172.16.100.5] 
[172.16.100.5] dockerd-rootless-setuptool.sh install
[172.16.100.5] 
[172.16.100.5] Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.
[172.16.100.5] 
[172.16.100.5] 
[172.16.100.5] To run the Docker daemon as a fully privileged service, but granting non-root
[172.16.100.5] users access, refer to https://docs.docker.com/go/daemon-access/
[172.16.100.5] 
[172.16.100.5] WARNING: Access to the remote API on a privileged Docker daemon is equivalent
[172.16.100.5] to root access on the host. Refer to the 'Docker daemon attack surface'
[172.16.100.5] documentation for details: https://docs.docker.com/go/attack-surface/
[172.16.100.5] 
[172.16.100.5] ================================================================================
[172.16.100.5] 
    [172.16.100.5] Success
[172.16.100.4] Client: Docker Engine - Community
[172.16.100.4] Version: 23.0.1
[172.16.100.4] API version: 1.42
[172.16.100.4] Go version: go1.19.5
[172.16.100.4] Git commit: a5ee5b1
[172.16.100.4] Built: Thu Feb 9 19:47:01 2023
[172.16.100.4] OS/Arch: linux/amd64
[172.16.100.4] Context: default
[172.16.100.4] 
[172.16.100.4] Server: Docker Engine - Community
[172.16.100.4] Engine:
[172.16.100.4] Version: 23.0.1
[172.16.100.4] API version: 1.42 (minimum version 1.12)
[172.16.100.4] Go version: go1.19.5
[172.16.100.4] Git commit: bc3805a
[172.16.100.4] Built: Thu Feb 9 19:47:01 2023
[172.16.100.4] OS/Arch: linux/amd64
[172.16.100.4] Experimental: false
[172.16.100.4] containerd:
[172.16.100.4] Version: 1.6.18
[172.16.100.4] GitCommit: 2456e983eb9e37e47538f59ea18f2043c9a73640
[172.16.100.4] runc:
[172.16.100.4] Version: 1.1.4
[172.16.100.4] GitCommit: v1.1.4-0-g5fd4c4d
[172.16.100.4] docker-init:
[172.16.100.4] Version: 0.19.0
[172.16.100.4] GitCommit: de40ad0
[172.16.100.4] 
[172.16.100.4] ================================================================================
[172.16.100.4] 
[172.16.100.4] To run Docker as a non-privileged user, consider setting up the
[172.16.100.4] Docker daemon in rootless mode for your user:
[172.16.100.4] 
[172.16.100.4] dockerd-rootless-setuptool.sh install
[172.16.100.4] 
[172.16.100.4] Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.
[172.16.100.4] 
[172.16.100.4] 
[172.16.100.4] To run the Docker daemon as a fully privileged service, but granting non-root
[172.16.100.4] users access, refer to https://docs.docker.com/go/daemon-access/
[172.16.100.4] 
[172.16.100.4] WARNING: Access to the remote API on a privileged Docker daemon is equivalent
[172.16.100.4] to root access on the host. Refer to the 'Docker daemon attack surface'
[172.16.100.4] documentation for details: https://docs.docker.com/go/attack-surface/
[172.16.100.4] 
[172.16.100.4] ================================================================================
[172.16.100.4] 
    [172.16.100.4] Success
[172.16.100.3] Client: Docker Engine - Community
[172.16.100.3] Version: 23.0.1
[172.16.100.3] API version: 1.42
[172.16.100.3] Go version: go1.19.5
[172.16.100.3] Git commit: a5ee5b1
[172.16.100.3] Built: Thu Feb 9 19:47:01 2023
[172.16.100.3] OS/Arch: linux/amd64
[172.16.100.3] Context: default
[172.16.100.3] 
[172.16.100.3] Server: Docker Engine - Community
[172.16.100.3] Engine:
[172.16.100.3] Version: 23.0.1
[172.16.100.3] API version: 1.42 (minimum version 1.12)
[172.16.100.3] Go version: go1.19.5
[172.16.100.3] Git commit: bc3805a
[172.16.100.3] Built: Thu Feb 9 19:47:01 2023
[172.16.100.3] OS/Arch: linux/amd64
[172.16.100.3] Experimental: false
[172.16.100.3] containerd:
[172.16.100.3] Version: 1.6.18
[172.16.100.3] GitCommit: 2456e983eb9e37e47538f59ea18f2043c9a73640
[172.16.100.3] runc:
[172.16.100.3] Version: 1.1.4
[172.16.100.3] GitCommit: v1.1.4-0-g5fd4c4d
[172.16.100.3] docker-init:
[172.16.100.3] Version: 0.19.0
[172.16.100.3] GitCommit: de40ad0
[172.16.100.3] 
[172.16.100.3] ================================================================================
[172.16.100.3] 
[172.16.100.3] To run Docker as a non-privileged user, consider setting up the
[172.16.100.3] Docker daemon in rootless mode for your user:
[172.16.100.3] 
[172.16.100.3] dockerd-rootless-setuptool.sh install
[172.16.100.3] 
[172.16.100.3] Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.
[172.16.100.3] 
[172.16.100.3] 
[172.16.100.3] To run the Docker daemon as a fully privileged service, but granting non-root
[172.16.100.3] users access, refer to https://docs.docker.com/go/daemon-access/
[172.16.100.3] 
[172.16.100.3] WARNING: Access to the remote API on a privileged Docker daemon is equivalent
[172.16.100.3] to root access on the host. Refer to the 'Docker daemon attack surface'
[172.16.100.3] documentation for details: https://docs.docker.com/go/attack-surface/
[172.16.100.3] 
[172.16.100.3] ================================================================================
[172.16.100.3] 
    [172.16.100.3] Success

--&amp;gt; Results:
    Groups: inventory / my_hosts
    [172.16.100.4] Changed: 1 No change: 0 Errors: 0   
    [172.16.100.3] Changed: 1 No change: 0 Errors: 0   
    [172.16.100.5] Changed: 1 No change: 0 Errors: 0 

root@minione:~# pyinfra inventory.py exec -- "docker ps -a"

--&amp;gt; Loading config...

--&amp;gt; Loading inventory...

--&amp;gt; Connecting to hosts...
    No host key for 172.16.100.5 found in known_hosts
    No host key for 172.16.100.3 found in known_hosts
    No host key for 172.16.100.4 found in known_hosts
    [172.16.100.5] Connected
    [172.16.100.3] Connected
    [172.16.100.4] Connected
    [172.16.100.5] Ready: shell
    [172.16.100.3] Ready: shell
    [172.16.100.4] Ready: shell

--&amp;gt; Proposed changes:
    Groups: inventory / my_hosts
    [172.16.100.5] Operations: 1 Change: 1 No change: 0   
    [172.16.100.3] Operations: 1 Change: 1 No change: 0   
    [172.16.100.4] Operations: 1 Change: 1 No change: 0   

--&amp;gt; Beginning operation run...
--&amp;gt; Starting operation: Server/Shell (docker ps -a)
[172.16.100.5] CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    [172.16.100.5] Success
[172.16.100.4] CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    [172.16.100.4] Success
[172.16.100.3] CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    [172.16.100.3] Success

--&amp;gt; Results:
    Groups: inventory / my_hosts
    [172.16.100.5] Changed: 1 No change: 0 Errors: 0   
    [172.16.100.3] Changed: 1 No change: 0 Errors: 0   
    [172.16.100.4] Changed: 1 No change: 0 Errors: 0 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
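&lt;p&gt;The ad-hoc &lt;code&gt;curl | sh&lt;/code&gt; run gets the job done, but pyinfra's idiomatic form for repeatable changes is a deploy file. A minimal sketch (assuming pyinfra 2.x; the filename &lt;code&gt;deploy_docker.py&lt;/code&gt; is illustrative) might look like:&lt;/p&gt;

```python
# deploy_docker.py -- hypothetical pyinfra deploy file; it would be run with:
#   pyinfra inventory.py deploy_docker.py
from pyinfra.operations import server

# Re-using Docker's convenience script, but as a declared operation so the
# run appears in pyinfra's proposed-changes and results summaries.
server.shell(
    name="Install the Docker engine via get.docker.com",
    commands=["curl -fsSL https://get.docker.com | sh -"],
)
```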



&lt;p&gt;Next, the Consul and Nomad clients are deployed, again with hashi-up:&lt;br&gt;
&lt;/p&gt;
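&lt;p&gt;The six hashi-up invocations that follow differ only in target address and product. As a sketch (the generator function is illustrative, mirroring the Consul 1.15.1 / Nomad 1.5.0 versions used in this session), they can be derived from the agent list:&lt;/p&gt;

```python
# Illustrative: build the hashi-up install commands for every agent,
# matching the Consul 1.15.1 / Nomad 1.5.0 client setup of this session.
SERVER_IP = "172.16.100.2"
AGENTS = ["172.16.100.4", "172.16.100.3", "172.16.100.5"]

def hashi_up_commands(agents, server_ip):
    """One consul and one nomad client install command per agent."""
    cmds = []
    for ip in agents:
        common = (
            f"--ssh-target-addr {ip} --ssh-target-user root "
            f"--retry-join {server_ip} --ssh-target-key .ssh/id_rsa"
        )
        cmds.append(f"hashi-up consul install {common} --version 1.15.1")
        cmds.append(f"hashi-up nomad install {common} --client --version 1.5.0")
    return cmds

for cmd in hashi_up_commands(AGENTS, SERVER_IP):
    print(cmd)
```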

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@minione:~# &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SERVER_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;172.16.100.2
root@minione:~# &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AGENT_1_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;172.16.100.4
root@minione:~# &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AGENT_2_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;172.16.100.3
root@minione:~# &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AGENT_3_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;172.16.100.5

root@minione:~# hashi-up consul &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-addr&lt;/span&gt; &lt;span class="nv"&gt;$AGENT_1_IP&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-user&lt;/span&gt; root &lt;span class="nt"&gt;--retry-join&lt;/span&gt; &lt;span class="nv"&gt;$SERVER_IP&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-key&lt;/span&gt; .ssh/id_rsa &lt;span class="nt"&gt;--version&lt;/span&gt; 1.15.1

&lt;span class="o"&gt;[&lt;/span&gt;INFO] Uploading generated Consul configuration ...
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Installing Consul ...
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Creating user named &lt;span class="s1"&gt;'consul'&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Copying configuration files
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Downloading consul_1.15.1_linux_amd64.zip
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Downloading consul_1.15.1_SHA256SUMS
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Verifying downloaded consul_1.15.1_linux_amd64.zip
consul_1.15.1_linux_amd64.zip: OK
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Unpacking consul_1.15.1_linux_amd64.zip
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Adding systemd service file /etc/systemd/system/consul.service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Enabling systemd service
Created symlink /etc/systemd/system/multi-user.target.wants/consul.service → /etc/systemd/system/consul.service.
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Starting systemd service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Done.

root@minione:~# hashi-up consul &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-addr&lt;/span&gt; &lt;span class="nv"&gt;$AGENT_2_IP&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-user&lt;/span&gt; root &lt;span class="nt"&gt;--retry-join&lt;/span&gt; &lt;span class="nv"&gt;$SERVER_IP&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-key&lt;/span&gt; .ssh/id_rsa &lt;span class="nt"&gt;--version&lt;/span&gt; 1.15.1

&lt;span class="o"&gt;[&lt;/span&gt;INFO] Uploading generated Consul configuration ...
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Installing Consul ...
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Creating user named &lt;span class="s1"&gt;'consul'&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Copying configuration files
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Downloading consul_1.15.1_linux_amd64.zip
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Downloading consul_1.15.1_SHA256SUMS
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Verifying downloaded consul_1.15.1_linux_amd64.zip
consul_1.15.1_linux_amd64.zip: OK
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Unpacking consul_1.15.1_linux_amd64.zip
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Adding systemd service file /etc/systemd/system/consul.service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Enabling systemd service
Created symlink /etc/systemd/system/multi-user.target.wants/consul.service → /etc/systemd/system/consul.service.
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Starting systemd service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Done.

root@minione:~# hashi-up consul &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-addr&lt;/span&gt; &lt;span class="nv"&gt;$AGENT_3_IP&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-user&lt;/span&gt; root &lt;span class="nt"&gt;--retry-join&lt;/span&gt; &lt;span class="nv"&gt;$SERVER_IP&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-key&lt;/span&gt; .ssh/id_rsa &lt;span class="nt"&gt;--version&lt;/span&gt; 1.15.1

&lt;span class="o"&gt;[&lt;/span&gt;INFO] Uploading generated Consul configuration ...
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Installing Consul ...
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Creating user named &lt;span class="s1"&gt;'consul'&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Copying configuration files
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Downloading consul_1.15.1_linux_amd64.zip
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Downloading consul_1.15.1_SHA256SUMS
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Verifying downloaded consul_1.15.1_linux_amd64.zip
consul_1.15.1_linux_amd64.zip: OK
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Unpacking consul_1.15.1_linux_amd64.zip
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Adding systemd service file /etc/systemd/system/consul.service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Enabling systemd service
Created symlink /etc/systemd/system/multi-user.target.wants/consul.service → /etc/systemd/system/consul.service.
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Starting systemd service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Done.

root@minione:~# hashi-up nomad &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-addr&lt;/span&gt; &lt;span class="nv"&gt;$AGENT_1_IP&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-user&lt;/span&gt; root &lt;span class="nt"&gt;--client&lt;/span&gt; &lt;span class="nt"&gt;--retry-join&lt;/span&gt; &lt;span class="nv"&gt;$SERVER_IP&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-key&lt;/span&gt; .ssh/id_rsa &lt;span class="nt"&gt;--version&lt;/span&gt; 1.5.0

&lt;span class="o"&gt;[&lt;/span&gt;INFO] Uploading generated Nomad configuration ...
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Installing Nomad ...
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Copying configuration files
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Downloading nomad_1.5.0_linux_amd64.zip
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Downloading nomad_1.5.0_SHA256SUMS
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Verifying downloaded nomad_1.5.0_linux_amd64.zip
nomad_1.5.0_linux_amd64.zip: OK
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Unpacking nomad_1.5.0_linux_amd64.zip
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Adding systemd service file /etc/systemd/system/nomad.service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Enabling systemd service
Created symlink /etc/systemd/system/multi-user.target.wants/nomad.service → /etc/systemd/system/nomad.service.
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Starting systemd service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Done.

root@minione:~# hashi-up nomad &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-addr&lt;/span&gt; &lt;span class="nv"&gt;$AGENT_2_IP&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-user&lt;/span&gt; root &lt;span class="nt"&gt;--client&lt;/span&gt; &lt;span class="nt"&gt;--retry-join&lt;/span&gt; &lt;span class="nv"&gt;$SERVER_IP&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-key&lt;/span&gt; .ssh/id_rsa &lt;span class="nt"&gt;--version&lt;/span&gt; 1.5.0

&lt;span class="o"&gt;[&lt;/span&gt;INFO] Uploading generated Nomad configuration ...
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Installing Nomad ...
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Copying configuration files
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Downloading nomad_1.5.0_linux_amd64.zip
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Downloading nomad_1.5.0_SHA256SUMS
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Verifying downloaded nomad_1.5.0_linux_amd64.zip
nomad_1.5.0_linux_amd64.zip: OK
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Unpacking nomad_1.5.0_linux_amd64.zip
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Adding systemd service file /etc/systemd/system/nomad.service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Enabling systemd service
Created symlink /etc/systemd/system/multi-user.target.wants/nomad.service → /etc/systemd/system/nomad.service.
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Starting systemd service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Done.

root@minione:~# hashi-up nomad &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-addr&lt;/span&gt; &lt;span class="nv"&gt;$AGENT_3_IP&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-user&lt;/span&gt; root &lt;span class="nt"&gt;--client&lt;/span&gt; &lt;span class="nt"&gt;--retry-join&lt;/span&gt; &lt;span class="nv"&gt;$SERVER_IP&lt;/span&gt; &lt;span class="nt"&gt;--ssh-target-key&lt;/span&gt; .ssh/id_rsa &lt;span class="nt"&gt;--version&lt;/span&gt; 1.5.0

&lt;span class="o"&gt;[&lt;/span&gt;INFO] Uploading generated Nomad configuration ...
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Installing Nomad ...
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Copying configuration files
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Downloading nomad_1.5.0_linux_amd64.zip
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Downloading nomad_1.5.0_SHA256SUMS
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Verifying downloaded nomad_1.5.0_linux_amd64.zip
nomad_1.5.0_linux_amd64.zip: OK
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Unpacking nomad_1.5.0_linux_amd64.zip
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Adding systemd service file /etc/systemd/system/nomad.service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Enabling systemd service
Created symlink /etc/systemd/system/multi-user.target.wants/nomad.service → /etc/systemd/system/nomad.service.
&lt;span class="o"&gt;[&lt;/span&gt;INFO] -&amp;gt; Starting systemd service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Nomad clients, with Consul alongside them, are now in place:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QYJzLqnL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ALb1qHrmbg9AQvfTBGxQR5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QYJzLqnL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ALb1qHrmbg9AQvfTBGxQR5w.png" alt="" width="880" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9KnyRkjy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ADiXt7v-O7E7tUeRcF9MOKQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9KnyRkjy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ADiXt7v-O7E7tUeRcF9MOKQ.png" alt="" width="880" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XoFiF3zx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AgeUuFaQQYeBtVNcbc4QJzg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XoFiF3zx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AgeUuFaQQYeBtVNcbc4QJzg.png" alt="" width="880" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w7vSeWAk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AAySYSsKhLSQidsWP19Ef9A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w7vSeWAk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AAySYSsKhLSQidsWP19Ef9A.png" alt="" width="880" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now to launch Fabio, which integrates natively with Consul and provides an optional web interface for visualizing routing. Fabio’s main use case is distributing incoming HTTP(S) and TCP requests from the Internet to the frontend services that can handle them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.hashicorp.com/nomad/tutorials/load-balancing/load-balancing-fabio"&gt;Load Balancing with Fabio | Nomad | HashiCorp Developer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, I run Fabio as a system job so that it can route incoming traffic evenly to a group of servers, regardless of which client nodes it runs on. All the client nodes can therefore sit behind a load balancer, giving the end user a single point of access.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;job "fabio" {
  datacenters = ["dc1"]
  type        = "system"

  group "fabio" {
    network {
      port "lb" {
        static = 9999
      }
      port "ui" {
        static = 9998
      }
    }
    task "fabio" {
      driver = "docker"
      config {
        image        = "fabiolb/fabio"
        network_mode = "host"
        ports        = ["lb", "ui"]
      }

      resources {
        cpu    = 200
        memory = 128
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
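&lt;p&gt;Assuming the job above is saved as &lt;code&gt;fabio.nomad&lt;/code&gt; (a hypothetical filename), it can be submitted and checked from the server node with the standard Nomad CLI:&lt;/p&gt;

```shell
# Submit the system job; Nomad schedules one Fabio
# allocation per eligible client node
nomad job run fabio.nomad

# Verify that an allocation is running on each of the three clients
nomad job status fabio
```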



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5H3kkH_4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ADG56Gy2WrOLwKk4Ei5Skig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5H3kkH_4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ADG56Gy2WrOLwKk4Ei5Skig.png" alt="" width="880" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j7Vr5HZQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AdqOSg4ROo1lOdnKqUMNuPw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j7Vr5HZQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AdqOSg4ROo1lOdnKqUMNuPw.png" alt="" width="880" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7uqtw5SL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ashf6tdjR4d1I4ZfBotxEkA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7uqtw5SL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ashf6tdjR4d1I4ZfBotxEkA.png" alt="" width="880" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lJTNK17Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A7TbzxpH79k1pWQmsncQbXA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lJTNK17Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A7TbzxpH79k1pWQmsncQbXA.png" alt="" width="880" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fabio’s web UI is available on each of the three client nodes, on TCP port 9998:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kb0atae0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AkeaehZGrgTUf9GK7XdmbPw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kb0atae0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AkeaehZGrgTUf9GK7XdmbPw.png" alt="" width="880" height="527"&gt;&lt;/a&gt;&lt;/p&gt;
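&lt;p&gt;Besides the UI, Fabio also serves its routing table as JSON on the same port; as far as I recall this lives under &lt;code&gt;/api/routes&lt;/code&gt;, so it can be scripted against (treat the path, and the client IP below, as assumptions to check against your Fabio version):&lt;/p&gt;

```shell
# Fetch the current routing table from one of the client nodes
# (/api/routes is assumed; consult the Fabio docs for your version)
curl -s http://172.16.100.5:9998/api/routes
```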

&lt;p&gt;Here is an example with this job, which uses a base Nginx image to spin up three companion web servers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;job "webserver" {
  datacenters = ["dc1"]
  type        = "service"

  group "webserver" {
    count = 3
    network {
      port "http" {
        to = 8080
      }
    }

    service {
      name = "nginx-webserver"
      tags = ["urlprefix-/"]
      port = "http"
      check {
        name     = "alive"
        type     = "http"
        path     = "/"
        interval = "10s"
        timeout  = "2s"
      }
    }

    restart {
      attempts = 2
      interval = "30m"
      delay    = "15s"
      mode     = "fail"
    }

    task "nginx" {
      driver = "docker"
      config {
        image = "bitnami/nginx:latest"
        ports = ["http"]
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
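&lt;p&gt;With the &lt;code&gt;urlprefix-/&lt;/code&gt; tag registered in Consul, Fabio should route any request arriving on port 9999 of a client node to one of the three Nginx allocations. A quick way to exercise the load balancing (the IP below is an assumption based on this cluster’s addressing):&lt;/p&gt;

```shell
# Each request passes through Fabio on port 9999; successive requests
# should be spread across the three Nginx instances
for i in 1 2 3; do
  curl -s -o /dev/null -w "%{http_code}\n" http://172.16.100.5:9999/
done
```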



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--klpl3Yt4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A31VpfVOOhm9VDqBQ2awkgg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--klpl3Yt4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A31VpfVOOhm9VDqBQ2awkgg.png" alt="" width="880" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zANhzLm---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A5zuLmiNFinr6ZnjFX0rBfw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zANhzLm---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A5zuLmiNFinr6ZnjFX0rBfw.png" alt="" width="880" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XwGQ4L85--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AelVeN-hc6aW9lhwFqFTPbg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XwGQ4L85--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AelVeN-hc6aW9lhwFqFTPbg.png" alt="" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ppepYSFC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A5_KgBEHs5UvUwgCPS-ZezA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ppepYSFC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A5_KgBEHs5UvUwgCPS-ZezA.png" alt="" width="880" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qAo1crYG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AnRbaioUFYNaSvHaz4K13cA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qAo1crYG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AnRbaioUFYNaSvHaz4K13cA.png" alt="" width="880" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fabio has detected the deployment of these three Nginx servers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jcYxr4Qo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AsSEKwVlfdTbbOKMwdfcznw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jcYxr4Qo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AsSEKwVlfdTbbOKMwdfcznw.png" alt="" width="880" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and provides access ports to reach them:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J0UWkdwo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ACNF076uc1Y2S0y0luIBuHw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J0UWkdwo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ACNF076uc1Y2S0y0luIBuHw.png" alt="" width="880" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9Lmjqb_F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AgOWriF4NzdeUBZSkD-_5uA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9Lmjqb_F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AgOWriF4NzdeUBZSkD-_5uA.png" alt="" width="880" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Oc9i8SG3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ACTvlh_7PaiMxDNtPOm9YSA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Oc9i8SG3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ACTvlh_7PaiMxDNtPOm9YSA.png" alt="" width="880" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, a test of an Alpine Linux instance running an OpenSSH server, which mimics the behaviour of a virtual machine inside Nomad, using this Docker image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hub.docker.com/r/mcas/alpine-sshd"&gt;Docker&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;job "ssh" {
  datacenters = ["dc1"]
  type        = "service"

  group "ssh" {
    count = 1
    network {
      port "ssh" {
        to = 22
      }
    }

    service {
      name = "sshd"
      port = "ssh"
    }

    task "openssh" {
      driver = "docker"
      config {
        image = "mcas/alpine-sshd:latest"
        ports = ["ssh"]
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
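&lt;p&gt;Since the &lt;code&gt;ssh&lt;/code&gt; port has no &lt;code&gt;static&lt;/code&gt; value, Nomad allocates it dynamically. Instead of reading it off the Consul UI, the port can also be looked up from the command line through Consul’s DNS interface or its HTTP catalog (assuming the default ports 8600 and 8500 on the server node):&lt;/p&gt;

```shell
# The SRV record carries the dynamically allocated port
dig @127.0.0.1 -p 8600 sshd.service.consul SRV +short

# Same information from the HTTP catalog API
curl -s http://127.0.0.1:8500/v1/catalog/service/sshd | jq '.[0].ServicePort'
```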



&lt;p&gt;Running the job from the Nomad server’s dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hBnERiAe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AIl7nhifyaDTS_FfZH96E7w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hBnERiAe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AIl7nhifyaDTS_FfZH96E7w.png" alt="" width="880" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OuHZoyd8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ak91zDJ1s3EJetXF730AXfw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OuHZoyd8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ak91zDJ1s3EJetXF730AXfw.png" alt="" width="880" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t39HdIDm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AB8he3YlXW7HFVEUUYiiNpA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t39HdIDm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AB8he3YlXW7HFVEUUYiiNpA.png" alt="" width="880" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Consul sees all of the instances deployed in Nomad:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OwFblxF1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ard9VZ6K0CJX6HpljFFbruw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OwFblxF1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ard9VZ6K0CJX6HpljFFbruw.png" alt="" width="880" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and gives me the access port for this Alpine Linux instance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uv0NM9ZK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Amow4a4H2thytRuVoV-Kg3w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uv0NM9ZK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Amow4a4H2thytRuVoV-Kg3w.png" alt="" width="880" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Local SSH connection to this instance, using that access port:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@minione:~# ssh &lt;span class="nt"&gt;-p&lt;/span&gt; 26053 alpine@172.16.100.5
Warning: Permanently added &lt;span class="s1"&gt;'[172.16.100.5]:26053'&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;ED25519&lt;span class="o"&gt;)&lt;/span&gt; to the list of known hosts.
alpine@172.16.100.5's password: 
Welcome to Alpine!

The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See &amp;lt;https://wiki.alpinelinux.org/&amp;gt;.

You can setup the system with the &lt;span class="nb"&gt;command&lt;/span&gt;: setup-alpine

You may change this message by editing /etc/motd.

1900f3e8b78a:~&lt;span class="nv"&gt;$ &lt;/span&gt;ps aux
PID USER TIME COMMAND
    1 root 0:00 sshd: /usr/sbin/sshd &lt;span class="nt"&gt;-D&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;listener] 0 of 10-100 startups
   10 root 0:00 sshd: alpine &lt;span class="o"&gt;[&lt;/span&gt;priv]
   12 alpine 0:00 sshd: alpine@pts/0
   13 alpine 0:00 &lt;span class="nt"&gt;-bash&lt;/span&gt;
   14 alpine 0:00 ps aux
1900f3e8b78a:~&lt;span class="nv"&gt;$ &lt;/span&gt;free &lt;span class="nt"&gt;-m&lt;/span&gt;
              total used free shared buff/cache available
Mem: 5934 348 4982 1 604 5346
Swap: 0 0 0
1900f3e8b78a:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;df&lt;/span&gt; &lt;span class="nt"&gt;-h&lt;/span&gt;
Filesystem Size Used Available Use% Mounted on
overlay 19.2G 2.3G 16.9G 12% /
tmpfs 64.0M 0 64.0M 0% /dev
shm 64.0M 0 64.0M 0% /dev/shm
/dev/vda1 19.2G 2.3G 16.9G 12% /alloc
/dev/vda1 19.2G 2.3G 16.9G 12% /local
tmpfs 1.0M 0 1.0M 0% /secrets
/dev/vda1 19.2G 2.3G 16.9G 12% /etc/resolv.conf
/dev/vda1 19.2G 2.3G 16.9G 12% /etc/hostname
/dev/vda1 19.2G 2.3G 16.9G 12% /etc/hosts
tmpfs 2.9G 0 2.9G 0% /proc/acpi
tmpfs 64.0M 0 64.0M 0% /proc/kcore
tmpfs 64.0M 0 64.0M 0% /proc/keys
tmpfs 64.0M 0 64.0M 0% /proc/timer_list
tmpfs 2.9G 0 2.9G 0% /proc/scsi
tmpfs 2.9G 0 2.9G 0% /sys/firmware
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The inevitable FranceConnect (FC) demonstrator runs locally on it as if it were a virtual machine:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/france-connect/service-provider-example"&gt;GitHub - france-connect/service-provider-example: An implementation example of the FranceConnect button on a service provider's website.&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;1900f3e8b78a:~/fcdemo3&lt;span class="nv"&gt;$ &lt;/span&gt;pm2 start start.sh 

                        &lt;span class="nt"&gt;-------------&lt;/span&gt;

__/&lt;span class="se"&gt;\\\\\\\\\\\\\_&lt;/span&gt;___ /&lt;span class="se"&gt;\\\\&lt;/span&gt; ____________/&lt;span class="se"&gt;\\\\&lt;/span&gt;____ /&lt;span class="se"&gt;\\\\\\\\\ &lt;/span&gt;_____
 _&lt;span class="se"&gt;\/\\\/&lt;/span&gt;////////&lt;span class="se"&gt;\\\_\/\\\\\\&lt;/span&gt; ________/&lt;span class="se"&gt;\\\\\\&lt;/span&gt;__ /&lt;span class="se"&gt;\\\/&lt;/span&gt;//////&lt;span class="se"&gt;\\\_&lt;/span&gt;__
  _&lt;span class="se"&gt;\/\\\ &lt;/span&gt;_______&lt;span class="se"&gt;\/\\\_\/\\\/&lt;/span&gt;/&lt;span class="se"&gt;\\\_&lt;/span&gt;___ /&lt;span class="se"&gt;\\\/&lt;/span&gt;/&lt;span class="se"&gt;\\\_\/&lt;/span&gt;// ______&lt;span class="se"&gt;\/&lt;/span&gt;/&lt;span class="se"&gt;\\\_&lt;/span&gt;_
   _&lt;span class="se"&gt;\/\\\\\\\\\\\\\/&lt;/span&gt; __&lt;span class="se"&gt;\/\\\\&lt;/span&gt;///&lt;span class="se"&gt;\\\/\\\/&lt;/span&gt;_&lt;span class="se"&gt;\/\\\_&lt;/span&gt;__________ /&lt;span class="se"&gt;\\\/&lt;/span&gt;___
    _&lt;span class="se"&gt;\/\\\/&lt;/span&gt;//////// ____&lt;span class="se"&gt;\/\\\_&lt;/span&gt;_ &lt;span class="se"&gt;\/&lt;/span&gt;//&lt;span class="se"&gt;\\\/&lt;/span&gt; ___&lt;span class="se"&gt;\/\\\_&lt;/span&gt;_______ /&lt;span class="se"&gt;\\\/&lt;/span&gt;/ _____
     _&lt;span class="se"&gt;\/\\\ &lt;/span&gt;_____________&lt;span class="se"&gt;\/\\\_&lt;/span&gt;___ &lt;span class="se"&gt;\/&lt;/span&gt;// _____&lt;span class="se"&gt;\/\\\_&lt;/span&gt;____ /&lt;span class="se"&gt;\\\/&lt;/span&gt;/ ________
      _&lt;span class="se"&gt;\/\\\ &lt;/span&gt;_____________&lt;span class="se"&gt;\/\\\_&lt;/span&gt;____________ &lt;span class="se"&gt;\/\\\ &lt;/span&gt;___/&lt;span class="se"&gt;\\\/&lt;/span&gt;___________
       _&lt;span class="se"&gt;\/\\\ &lt;/span&gt;_____________&lt;span class="se"&gt;\/\\\_&lt;/span&gt;____________ &lt;span class="se"&gt;\/\\\_&lt;/span&gt;_/&lt;span class="se"&gt;\\\\\\\\\\\\\\\_&lt;/span&gt;
        _&lt;span class="se"&gt;\/&lt;/span&gt;// ______________&lt;span class="se"&gt;\/&lt;/span&gt;//______________ &lt;span class="se"&gt;\/&lt;/span&gt;// __&lt;span class="se"&gt;\/&lt;/span&gt;//////////////__

                          Runtime Edition

        PM2 is a Production Process Manager &lt;span class="k"&gt;for &lt;/span&gt;Node.js applications
                     with a built-in Load Balancer.

                Start and Daemonize any application:
                &lt;span class="nv"&gt;$ &lt;/span&gt;pm2 start app.js

                Load Balance 4 instances of api.js:
                &lt;span class="nv"&gt;$ &lt;/span&gt;pm2 start api.js &lt;span class="nt"&gt;-i&lt;/span&gt; 4

                Monitor &lt;span class="k"&gt;in &lt;/span&gt;production:
                &lt;span class="nv"&gt;$ &lt;/span&gt;pm2 monitor

                Make pm2 auto-boot at server restart:
                &lt;span class="nv"&gt;$ &lt;/span&gt;pm2 startup

                To go further checkout:
                http://pm2.io/

                        &lt;span class="nt"&gt;-------------&lt;/span&gt;

&lt;span class="o"&gt;[&lt;/span&gt;PM2] Spawning PM2 daemon with &lt;span class="nv"&gt;pm2_home&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/home/alpine/.pm2
&lt;span class="o"&gt;[&lt;/span&gt;PM2] PM2 Successfully daemonized
&lt;span class="o"&gt;[&lt;/span&gt;PM2] Starting /home/alpine/fcdemo3/start.sh &lt;span class="k"&gt;in &lt;/span&gt;fork_mode &lt;span class="o"&gt;(&lt;/span&gt;1 instance&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;PM2] Done.
┌─────┬──────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ &lt;span class="nb"&gt;id&lt;/span&gt; │ name │ namespace │ version │ mode │ pid │ &lt;span class="nb"&gt;uptime&lt;/span&gt; │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼──────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ start │ default │ 0.0.0 │ fork │ 466 │ 0s │ 0 │ online │ 0% │ 1.7mb │ alpine │ disabled │
└─────┴──────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘

1900f3e8b78a:~/fcdemo3&lt;span class="nv"&gt;$ &lt;/span&gt;pm2 log start

&lt;span class="o"&gt;[&lt;/span&gt;TAILING] Tailing last 15 lines &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;start] process &lt;span class="o"&gt;(&lt;/span&gt;change the value with &lt;span class="nt"&gt;--lines&lt;/span&gt; option&lt;span class="o"&gt;)&lt;/span&gt;
/home/alpine/.pm2/logs/start-error.log last 15 lines:
/home/alpine/.pm2/logs/start-out.log last 15 lines:
0|start | 
0|start | &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; service-provider-mock@0.0.0 start
0|start | &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; node ./bin/www
0|start | 
0|start | Server listening on http://0.0.0.0:3000

1900f3e8b78a:~&lt;span class="nv"&gt;$ &lt;/span&gt;curl http://localhost:3000
&amp;lt;&lt;span class="o"&gt;!&lt;/span&gt;doctype html&amp;gt;
&amp;lt;html &lt;span class="nv"&gt;lang&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"en"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&amp;lt;&lt;span class="nb"&gt;head&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;meta &lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"UTF-8"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;meta &lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"viewport"&lt;/span&gt;
          &lt;span class="nv"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;meta http-equiv&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"X-UA-Compatible"&lt;/span&gt; &lt;span class="nv"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ie=edge"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;&lt;span class="nb"&gt;link &lt;/span&gt;&lt;span class="nv"&gt;rel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"stylesheet"&lt;/span&gt; &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.1/css/bulma.min.css"&lt;/span&gt; &lt;span class="nv"&gt;integrity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"sha256-zIG416V1ynj3Wgju/scU80KAEWOsO5rRLfVyRDuOv7Q="&lt;/span&gt; &lt;span class="nv"&gt;crossorigin&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"anonymous"&lt;/span&gt; /&amp;gt;
    &amp;lt;title&amp;gt;Démonstrateur Fournisseur de Service&amp;lt;/title&amp;gt;
&amp;lt;/head&amp;gt;

&amp;lt;body&amp;gt;
&amp;lt;nav &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"navbar"&lt;/span&gt; &lt;span class="nv"&gt;role&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"navigation"&lt;/span&gt; aria-label&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"main navigation"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"navbar-start"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"navbar-brand"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
            &amp;lt;a &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"navbar-item"&lt;/span&gt; &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
                &amp;lt;img &lt;span class="nv"&gt;src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/img/fc_logo_v2.png"&lt;/span&gt; &lt;span class="nv"&gt;alt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Démonstrateur Fournisseur de Service"&lt;/span&gt; &lt;span class="nv"&gt;height&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"28"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
            &amp;lt;/a&amp;gt;
        &amp;lt;/div&amp;gt;
        &amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/"&lt;/span&gt; &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"navbar-item"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
            Home
        &amp;lt;/a&amp;gt;
    &amp;lt;/div&amp;gt;
    &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"navbar-end"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"navbar-item"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;

                &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"buttons"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
                    &amp;lt;a &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"button is-light"&lt;/span&gt; &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/login"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;Se connecter&amp;lt;/a&amp;gt;
                &amp;lt;/div&amp;gt;

        &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
&amp;lt;/nav&amp;gt;

&amp;lt;section &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"hero is-info is-medium"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"hero-body"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"container"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
            &amp;lt;h1 &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"title"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
                Bienvenue sur le démonstrateur de fournisseur de service
            &amp;lt;/h1&amp;gt;
            &amp;lt;h2 &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"subtitle"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
                Cliquez sur &lt;span class="s2"&gt;"se connecter"&lt;/span&gt; pour vous connecter via &amp;lt;strong&amp;gt;FranceConnect&amp;lt;/strong&amp;gt;
            &amp;lt;/h2&amp;gt;
        &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
&amp;lt;/section&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These articles describe how hashi-up can be used to provision a cluster with Vault, or Nomad and Consul clusters in HA mode:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CbBydCZB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/874/0%2AuGGpp8Io-iuLF-Xq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CbBydCZB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/874/0%2AuGGpp8Io-iuLF-Xq.png" alt="" width="874" height="617"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://johansiebens.dev/posts/2020/07/deploying-a-highly-available-nomad-cluster-with-hashi-up/"&gt;Deploying a highly-available Nomad cluster with hashi-up!&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://johansiebens.dev/posts/2020/12/installing-hashicorp-vault-on-digitalocean-with-hashi-up/"&gt;Installing HashiCorp Vault on DigitalOcean with hashi-up&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
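For a rough idea of what that provisioning looks like, here is a sketch of a 3-node HA setup with hashi-up. The server IPs are placeholders, and the flag names are taken from the hashi-up README as I recall them, so check `hashi-up consul install --help` against your version before relying on them:

```shell
# Sketch only: SERVER1..SERVER3 are placeholder IPs reachable over SSH as root.
SERVER1=10.0.0.1 ; SERVER2=10.0.0.2 ; SERVER3=10.0.0.3

for ip in $SERVER1 $SERVER2 $SERVER3; do
  # One Consul server per machine, expecting a 3-node quorum
  hashi-up consul install \
    --ssh-target-addr "$ip" --ssh-target-user root \
    --server --bootstrap-expect 3 \
    --retry-join "$SERVER1" --retry-join "$SERVER2" --retry-join "$SERVER3"

  # Co-located Nomad servers on the same machines
  hashi-up nomad install \
    --ssh-target-addr "$ip" --ssh-target-user root \
    --server --bootstrap-expect 3
done
```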

&lt;blockquote&gt;
&lt;p&gt;This shared Nomad and Consul cluster could have been used with Waypoint to provision its jobs, as in this example HCL file…&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://developer.hashicorp.com/waypoint"&gt;Waypoint | HashiCorp Developer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iSPfJB0G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ADFD-Qcr6yk8QOzmFsdUFdw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iSPfJB0G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ADFD-Qcr6yk8QOzmFsdUFdw.png" alt="" width="880" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.hashicorp.com/waypoint/tutorials/get-started-nomad/get-started-nomad"&gt;Deploy an Application to Nomad | Waypoint | HashiCorp Developer&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Copyright&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;(c)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;HashiCorp,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Inc.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;SPDX-License-Identifier:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;MPL&lt;/span&gt;&lt;span class="mf"&gt;-2.0&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;project&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"nomad-nodejs"&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;app&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"nomad-nodejs-web"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;

  &lt;/span&gt;&lt;span class="err"&gt;build&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;use&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"pack"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;registry&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;use&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"docker"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;image&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"nomad-nodejs-web"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;tag&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;local&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

  &lt;/span&gt;&lt;span class="err"&gt;deploy&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;use&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"nomad"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;these&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;options&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;both&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;default&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;values&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;shown&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;but&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;are&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;left&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;here&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;show&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;they&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;are&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;configurable&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;datacenter&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"dc1"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;namespace&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
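Assuming a Waypoint server is already installed and bootstrapped against the cluster, a file like the one above is driven with Waypoint's standard two-step workflow (the project directory location is an assumption):

```shell
# From the directory containing the waypoint.hcl above
PROJECT_DIR="."            # assumption: waypoint.hcl sits in the current directory
cd "$PROJECT_DIR"
waypoint init              # validates the configuration and registers the project
waypoint up                # build (pack), push (docker), deploy (nomad) in one run
```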



&lt;p&gt;Or, going much further, with this Blue/Green deployment example…&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.redhat.com/en/topics/devops/what-is-blue-green-deployment"&gt;What is blue green deployment?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.infralovers.com/en/articles/2022/10/05/bluegreen-deployments-with-nomad-and-waypoint/"&gt;Blue/Green Deployments with HashiCorp Nomad and Waypoint&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
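On the Nomad side, the mechanics behind such a blue/green rollout rest on the job's `update` stanza: with `canary` equal to `count`, Nomad stands up a full "green" fleet next to the "blue" one and only replaces it after promotion. A minimal sketch, reusing the `nomad-nodejs-web` name from the example above:

```hcl
job "nomad-nodejs-web" {
  datacenters = ["dc1"]

  group "web" {
    count = 3

    update {
      max_parallel = 3
      canary       = 3      # as many canaries as instances = a full green copy
      auto_promote = false  # promote by hand: nomad job promote nomad-nodejs-web
    }

    task "server" {
      driver = "docker"
      config {
        image = "nomad-nodejs-web:1"
      }
    }
  }
}
```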

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SFKQYwyf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AYsnvV7z3Js-4uZOT-0S7bg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SFKQYwyf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AYsnvV7z3Js-4uZOT-0S7bg.png" alt="" width="880" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To be continued!&lt;/p&gt;

</description>
      <category>consul</category>
      <category>nomad</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Effectuer des opérations de base autour des machines virtuelles avec kubevirt-manager …</title>
      <dc:creator>Karim</dc:creator>
      <pubDate>Thu, 09 Mar 2023 14:35:11 +0000</pubDate>
      <link>https://forem.com/deep75/effectuer-des-operations-de-base-autour-des-machines-virtuelles-avec-kubevirt-manager--25oo</link>
      <guid>https://forem.com/deep75/effectuer-des-operations-de-base-autour-des-machines-virtuelles-avec-kubevirt-manager--25oo</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xraSNX-V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AK3Zu0cib5b0B9D2j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xraSNX-V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AK3Zu0cib5b0B9D2j.png" alt="" width="880" height="648"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As covered in several previous articles, KubeVirt provides a unified development platform where developers can build, modify, and deploy applications residing in both application containers and virtual machines, in a shared, common environment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubevirt.io/"&gt;KubeVirt.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://deep75.medium.com/deployer-kubevirt-en-utilisant-hyperconverged-cluster-operator-hco-et-g%C3%A9rer-ses-vms-dans-8fcb128be39d"&gt;Deployer KubeVirt en utilisant Hyperconverged Cluster Operator (HCO) et gérer ses VMs dans…&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AewwDkCB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/900/0%2AvFvPPdQjh-kabZDM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AewwDkCB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/900/0%2AvFvPPdQjh-kabZDM.png" alt="" width="880" height="425"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;a href="https://kubevirt.io/2020/KubeVirt_deep_dive-virtualized_gpu_workloads.html"&gt;https://kubevirt.io/2020/KubeVirt_deep_dive-virtualized_gpu_workloads.html&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Even though other, alternative solutions for creating and using virtual machines exist for Kubernetes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/deep75/neonvm-api-de-virtualisation-legere-basee-sur-qemu-pour-kubernetes--3nnd"&gt;NeonVM : API de virtualisation légère basée sur QEMU pour Kubernetes …&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/deep75/kubefire-creer-et-gerer-des-clusters-kubernetes-en-utilisant-des-microvms-avec-firecracker--2jge"&gt;KubeFire : Créer et gèrer des clusters Kubernetes en utilisant des microVMs avec Firecracker …&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/deep75/virtink-un-module-complementaire-de-virtualisation-legere-pour-kubernetes--2fk6"&gt;Virtink : un module complémentaire de virtualisation légère pour Kubernetes …&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;A solution already existed for managing virtual machines graphically in Kubernetes and KubeVirt, with the OpenShift Web Console:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubevirt.io/2020/OKD-web-console-install.html"&gt;Managing KubeVirt with OpenShift Web Console | KubeVirt.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.okd.io/latest/virt/virtual_machines/virt-accessing-vm-consoles.html"&gt;Accessing virtual machine consoles - Virtual machines | Virtualization | OKD 4&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/deep75/apercu-de-microshift-une-implementation-legere-d-openshift-4jim"&gt;Aperçu de Microshift, une implémentation légère d’OpenShift …&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hyfApHqP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/500/0%2A8hmr_O55uoiBH_Pj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hyfApHqP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/500/0%2A8hmr_O55uoiBH_Pj.jpg" alt="" width="500" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here the focus is on KubeVirt Manager, a recent project that provides a simple Angular frontend for operating KubeVirt.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This tool lets you perform basic operations on virtual machines, virtual machine instances, and disks:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://github.com/kubevirt-manager/kubevirt-manager"&gt;GitHub - kubevirt-manager/kubevirt-manager: Kubevirt Web UI / Web Interface Manager&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A quick hands-on: deploying an AKS cluster in Azure with a Scale Set using Spot instances, with these parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-cli"&gt;Azure deployment templates with Azure CLI - Azure Resource Manager - Azure Resource Manager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://learn.microsoft.com/en-us/azure/aks/spot-node-pool"&gt;Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster - Azure Kubernetes Service&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"$schema"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"contentVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"parameters"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"resourceName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AKS"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"location"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"francecentral"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"dnsPrefix"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AKS-dns"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"kubernetesVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.26.0"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"networkPlugin"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"kubenet"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"enableRBAC"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"nodeResourceGroup"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MC_RG-AKS_AKS_francecentral"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"upgradeChannel"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"none"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"adminGroupObjectIDs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"disableLocalAccounts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"azureRbac"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"enablePrivateCluster"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"enableHttpApplicationRouting"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"enableAzurePolicy"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"enableSecretStoreCSIDriver"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"vmssNodePool"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://deep75.medium.com/mise-en-oeuvre-dun-cluster-kubernetes-1-14-avec-des-noeuds-windows-via-aks-engine-8c7400a7037c"&gt;Mise en oeuvre d’un cluster Kubernetes 1.14 avec des noeuds Windows via AKS-Engine …&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--karrghu_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ApshgUiqxLac4_mO8Az4yBg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--karrghu_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ApshgUiqxLac4_mO8Az4yBg.png" alt="" width="880" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The AKS cluster is deployed and its credentials are retrieved:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;az aks get-credentials &lt;span class="nt"&gt;--resource-group&lt;/span&gt; RG-AKS &lt;span class="nt"&gt;--name&lt;/span&gt; AKS
Merged &lt;span class="s2"&gt;"AKS"&lt;/span&gt; as current context &lt;span class="k"&gt;in&lt;/span&gt; /home/cert/.kube/config

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl cluster-info
Kubernetes control plane is running at https://aks-dns-bn9a54nt.hcp.francecentral.azmk8s.io:443
CoreDNS is running at https://aks-dns-bn9a54nt.hcp.francecentral.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://aks-dns-bn9a54nt.hcp.francecentral.azmk8s.io:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use &lt;span class="s1"&gt;'kubectl cluster-info dump'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po,svc &lt;span class="nt"&gt;-A&lt;/span&gt;
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/azure-ip-masq-agent-6g72l 1/1 Running 0 6m18s
kube-system pod/azure-ip-masq-agent-nzbf5 1/1 Running 0 6m20s
kube-system pod/azure-ip-masq-agent-rj6mj 1/1 Running 0 6m6s
kube-system pod/cloud-node-manager-rfpcf 1/1 Running 0 6m18s
kube-system pod/cloud-node-manager-rg5jm 1/1 Running 0 6m6s
kube-system pod/cloud-node-manager-zpsgs 1/1 Running 0 6m20s
kube-system pod/coredns-78669d9946-txcm8 1/1 Running 0 4m26s
kube-system pod/coredns-78669d9946-zt9b5 1/1 Running 0 6m30s
kube-system pod/coredns-autoscaler-5b5c4f5b4f-zl2hq 1/1 Running 0 6m30s
kube-system pod/csi-azuredisk-node-44ffr 3/3 Running 0 6m20s
kube-system pod/csi-azuredisk-node-m2s5r 3/3 Running 0 6m18s
kube-system pod/csi-azuredisk-node-v4dc8 3/3 Running 0 6m6s
kube-system pod/csi-azurefile-node-5pzl4 3/3 Running 0 6m18s
kube-system pod/csi-azurefile-node-mqxw4 3/3 Running 0 6m6s
kube-system pod/csi-azurefile-node-nfvct 3/3 Running 0 6m20s
kube-system pod/konnectivity-agent-f94c65c6f-bjcwq 1/1 Running 0 6m29s
kube-system pod/konnectivity-agent-f94c65c6f-rdbfl 1/1 Running 0 6m29s
kube-system pod/kube-proxy-52svq 1/1 Running 0 6m18s
kube-system pod/kube-proxy-cnplk 1/1 Running 0 6m6s
kube-system pod/kube-proxy-dnjjp 1/1 Running 0 6m20s
kube-system pod/metrics-server-5f6654d4df-nqj2w 2/2 Running 0 4m20s
kube-system pod/metrics-server-5f6654d4df-qk22x 2/2 Running 0 4m20s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
default service/kubernetes ClusterIP 10.0.0.1 &amp;lt;none&amp;gt; 443/TCP 7m4s
kube-system service/kube-dns ClusterIP 10.0.0.10 &amp;lt;none&amp;gt; 53/UDP,53/TCP 6m31s
kube-system service/metrics-server ClusterIP 10.0.139.110 &amp;lt;none&amp;gt; 443/TCP 6m30s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
aks-agentpool-16963728-vmss000000 Ready agent 6m15s v1.26.0 10.224.0.5 &amp;lt;none&amp;gt; Ubuntu 22.04.1 LTS 5.15.0-1033-azure containerd://1.6.17+azure-1
aks-userpool-16963728-vmss000000 Ready agent 6m27s v1.26.0 10.224.0.4 20.111.27.134 Ubuntu 22.04.1 LTS 5.15.0-1033-azure containerd://1.6.17+azure-1
aks-userpool-16963728-vmss000001 Ready agent 6m29s v1.26.0 10.224.0.6 20.111.27.202 Ubuntu 22.04.1 LTS 5.15.0-1033-azure containerd://1.6.17+azure-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this experiment, I use a Spot node pool with Esv5 instances, which support nested virtualization:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/virtual-machines/ev5-esv5-series"&gt;Ev5 and Esv5-series - Azure Virtual Machines&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1rQg5Dab--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/952/1%2Avc3ahIgRfWN5axw_YH3NEA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1rQg5Dab--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/952/1%2Avc3ahIgRfWN5axw_YH3NEA.png" alt="" width="880" height="596"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This makes it possible to deploy KubeVirt via HCO (the Hyperconverged Cluster Operator):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kubevirt/hyperconverged-cluster-operator"&gt;GitHub - kubevirt/hyperconverged-cluster-operator: Operator pattern for managing multi-operator products&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s2tub3D6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/831/0%2A7xvtc18DqlXZFKKM.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s2tub3D6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/831/0%2A7xvtc18DqlXZFKKM.jpg" alt="" width="831" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;using this unified deployment script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/deploy.sh | bash

&lt;span class="nv"&gt;$ &lt;/span&gt;curl https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/deploy.sh | bash

+ &lt;span class="nv"&gt;hco_namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kubevirt-hyperconverged
+ &lt;span class="nv"&gt;IS_OPENSHIFT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;
+ kubectl api-resources
+ &lt;span class="nb"&gt;grep &lt;/span&gt;clusterversions
+ &lt;span class="nb"&gt;grep &lt;/span&gt;config.openshift.io
+ kubectl create ns kubevirt-hyperconverged &lt;span class="nt"&gt;--dry-run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
+ kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; -
W0309 13:38:03.250568 3638 helpers.go:677] &lt;span class="nt"&gt;--dry-run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true &lt;/span&gt;is deprecated &lt;span class="o"&gt;(&lt;/span&gt;boolean value&lt;span class="o"&gt;)&lt;/span&gt; and can be replaced with &lt;span class="nt"&gt;--dry-run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;client.
namespace/kubevirt-hyperconverged created
+ &lt;span class="nv"&gt;namespaces&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;&lt;span class="s2"&gt;"openshift"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
+ &lt;span class="k"&gt;for &lt;/span&gt;namespace &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;namespaces&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
++ kubectl get ns openshift
Error from server &lt;span class="o"&gt;(&lt;/span&gt;NotFound&lt;span class="o"&gt;)&lt;/span&gt;: namespaces &lt;span class="s2"&gt;"openshift"&lt;/span&gt; not found
+ &lt;span class="o"&gt;[[&lt;/span&gt;&lt;span class="s1"&gt;''&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt;&lt;span class="o"&gt;]]&lt;/span&gt;
+ kubectl create ns openshift &lt;span class="nt"&gt;--dry-run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
+ kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; -
W0309 13:38:04.072929 3667 helpers.go:677] &lt;span class="nt"&gt;--dry-run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true &lt;/span&gt;is deprecated &lt;span class="o"&gt;(&lt;/span&gt;boolean value&lt;span class="o"&gt;)&lt;/span&gt; and can be replaced with &lt;span class="nt"&gt;--dry-run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;client.
namespace/openshift created
+ &lt;span class="nv"&gt;LABEL_SELECTOR_ARG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
+ &lt;span class="s1"&gt;'['&lt;/span&gt; &lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="s1"&gt;'!='&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="s1"&gt;']'&lt;/span&gt;
+ &lt;span class="nv"&gt;LABEL_SELECTOR_ARG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'-l name!=ssp-operator,name!=hyperconverged-cluster-cli-download'&lt;/span&gt;
+ kubectl apply &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'name!=ssp-operator,name!=hyperconverged-cluster-cli-download'&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/cluster-network-addons00.crd.yaml
customresourcedefinition.apiextensions.k8s.io/networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io created
+ kubectl apply &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'name!=ssp-operator,name!=hyperconverged-cluster-cli-download'&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/containerized-data-importer00.crd.yaml
customresourcedefinition.apiextensions.k8s.io/cdis.cdi.kubevirt.io created
+ kubectl apply &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'name!=ssp-operator,name!=hyperconverged-cluster-cli-download'&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/hco00.crd.yaml
customresourcedefinition.apiextensions.k8s.io/hyperconvergeds.hco.kubevirt.io created
+ kubectl apply &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'name!=ssp-operator,name!=hyperconverged-cluster-cli-download'&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/kubevirt00.crd.yaml
customresourcedefinition.apiextensions.k8s.io/kubevirts.kubevirt.io created
+ kubectl apply &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'name!=ssp-operator,name!=hyperconverged-cluster-cli-download'&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/hostpath-provisioner00.crd.yaml
customresourcedefinition.apiextensions.k8s.io/hostpathprovisioners.hostpathprovisioner.kubevirt.io created
+ kubectl apply &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'name!=ssp-operator,name!=hyperconverged-cluster-cli-download'&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/scheduling-scale-performance00.crd.yaml
customresourcedefinition.apiextensions.k8s.io/ssps.ssp.kubevirt.io created
+ kubectl apply &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'name!=ssp-operator,name!=hyperconverged-cluster-cli-download'&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/tekton-tasks-operator00.crd.yaml
customresourcedefinition.apiextensions.k8s.io/tektontasks.tektontasks.kubevirt.io created
+ kubectl apply &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'name!=ssp-operator,name!=hyperconverged-cluster-cli-download'&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/cert-manager.yaml
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
namespace/cert-manager created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
+ kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; cert-manager &lt;span class="nb"&gt;wait &lt;/span&gt;deployment/cert-manager &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Available &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;300s
deployment.apps/cert-manager condition met
+ kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; cert-manager &lt;span class="nb"&gt;wait &lt;/span&gt;deployment/cert-manager-webhook &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Available &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;300s
deployment.apps/cert-manager-webhook condition met
+ kubectl apply &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'name!=ssp-operator,name!=hyperconverged-cluster-cli-download'&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; kubevirt-hyperconverged &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/cluster_role.yaml
role.rbac.authorization.k8s.io/cluster-network-addons-operator created
role.rbac.authorization.k8s.io/kubevirt-operator created
role.rbac.authorization.k8s.io/tekton-tasks-operator created
role.rbac.authorization.k8s.io/cdi-operator created
role.rbac.authorization.k8s.io/hostpath-provisioner-operator created
clusterrole.rbac.authorization.k8s.io/hyperconverged-cluster-operator created
clusterrole.rbac.authorization.k8s.io/cluster-network-addons-operator created
clusterrole.rbac.authorization.k8s.io/kubevirt-operator created
clusterrole.rbac.authorization.k8s.io/tekton-tasks-operator created
clusterrole.rbac.authorization.k8s.io/cdi-operator created
clusterrole.rbac.authorization.k8s.io/hostpath-provisioner-operator created
+ kubectl apply &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'name!=ssp-operator,name!=hyperconverged-cluster-cli-download'&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; kubevirt-hyperconverged &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/service_account.yaml
serviceaccount/cdi-operator created
serviceaccount/cluster-network-addons-operator created
serviceaccount/hostpath-provisioner-operator created
serviceaccount/hyperconverged-cluster-operator created
serviceaccount/kubevirt-operator created
serviceaccount/tekton-tasks-operator created
+ kubectl apply &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'name!=ssp-operator,name!=hyperconverged-cluster-cli-download'&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; kubevirt-hyperconverged &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/cluster_role_binding.yaml
rolebinding.rbac.authorization.k8s.io/cluster-network-addons-operator created
rolebinding.rbac.authorization.k8s.io/kubevirt-operator created
rolebinding.rbac.authorization.k8s.io/tekton-tasks-operator created
rolebinding.rbac.authorization.k8s.io/cdi-operator created
rolebinding.rbac.authorization.k8s.io/hostpath-provisioner-operator created
clusterrolebinding.rbac.authorization.k8s.io/hyperconverged-cluster-operator created
clusterrolebinding.rbac.authorization.k8s.io/cluster-network-addons-operator created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-operator created
clusterrolebinding.rbac.authorization.k8s.io/tekton-tasks-operator created
clusterrolebinding.rbac.authorization.k8s.io/cdi-operator created
clusterrolebinding.rbac.authorization.k8s.io/hostpath-provisioner-operator created
+ kubectl apply &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'name!=ssp-operator,name!=hyperconverged-cluster-cli-download'&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; kubevirt-hyperconverged &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/webhooks.yaml
issuer.cert-manager.io/selfsigned created
certificate.cert-manager.io/hyperconverged-cluster-webhook-service-cert created
validatingwebhookconfiguration.admissionregistration.k8s.io/validate-hco.kubevirt.io created
certificate.cert-manager.io/node-maintenance-operator-service-cert created
validatingwebhookconfiguration.admissionregistration.k8s.io/nodemaintenance-validation.kubevirt.io created
certificate.cert-manager.io/hostpath-provisioner-operator-webhook-service-cert created
validatingwebhookconfiguration.admissionregistration.k8s.io/hostpathprovisioner.kubevirt.io created
mutatingwebhookconfiguration.admissionregistration.k8s.io/mutate-hco.kubevirt.io created
+ kubectl apply &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'name!=ssp-operator,name!=hyperconverged-cluster-cli-download'&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; kubevirt-hyperconverged &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/operator.yaml
deployment.apps/hyperconverged-cluster-operator created
deployment.apps/hyperconverged-cluster-webhook created
deployment.apps/cluster-network-addons-operator created
deployment.apps/virt-operator created
deployment.apps/tekton-tasks-operator created
deployment.apps/cdi-operator created
deployment.apps/hostpath-provisioner-operator created
service/hyperconverged-cluster-webhook-service created
service/hostpath-provisioner-operator-service created
+ kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kubevirt-hyperconverged &lt;span class="nb"&gt;wait &lt;/span&gt;deployment/hyperconverged-cluster-webhook &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Available &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;300s
deployment.apps/hyperconverged-cluster-webhook condition met
+ kubectl apply &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'name!=ssp-operator,name!=hyperconverged-cluster-cli-download'&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; kubevirt-hyperconverged &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/hco.cr.yaml

hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
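&lt;p&gt;Before moving on, it is worth waiting for the stack to finish reconciling. A possible check, assuming the usual HCO resource and condition names (adjust if your version differs):&lt;/p&gt;

```shell
# Wait for the HyperConverged CR created by the deploy script to report
# Available (the operator reconciles all sub-components behind it).
kubectl -n kubevirt-hyperconverged wait hyperconverged/kubevirt-hyperconverged \
  --for=condition=Available --timeout=600s

# KubeVirt itself exposes a phase that should reach "Deployed".
kubectl -n kubevirt-hyperconverged get kubevirt \
  -o jsonpath='{.items[0].status.phase}'
```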



&lt;p&gt;KubeVirt is then up and running in the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po,svc &lt;span class="nt"&gt;-n&lt;/span&gt; kubevirt-hyperconverged

NAME READY STATUS RESTARTS AGE
pod/bridge-marker-n7dgz 1/1 Running 0 3m14s
pod/bridge-marker-qld66 1/1 Running 0 3m14s
pod/bridge-marker-w9rjv 1/1 Running 0 3m14s
pod/cdi-apiserver-5b79dbb75c-2wjgl 1/1 Running 0 2m41s
pod/cdi-deployment-68787cb469-l5cpl 1/1 Running 0 2m37s
pod/cdi-operator-b5fc58cd7-b9f2s 1/1 Running 0 3m24s
pod/cdi-uploadproxy-56d645d8d5-trt54 1/1 Running 0 2m46s
pod/cluster-network-addons-operator-6f6765958d-rn2fn 2/2 Running 0 3m35s
pod/hostpath-provisioner-operator-7bb46fd5b7-cg2hb 1/1 Running 0 3m24s
pod/hyperconverged-cluster-operator-7df6b469b4-942tx 1/1 Running 0 3m35s
pod/hyperconverged-cluster-webhook-55cfd6cb69-46r8v 1/1 Running 0 3m35s
pod/kube-cni-linux-bridge-plugin-62jrn 1/1 Running 0 3m14s
pod/kube-cni-linux-bridge-plugin-hqpsg 1/1 Running 0 3m14s
pod/kube-cni-linux-bridge-plugin-nzkpp 1/1 Running 0 3m14s
pod/kubemacpool-cert-manager-6dfd4bb587-b29pl 1/1 Running 0 3m14s
pod/kubemacpool-mac-controller-manager-646dd6f884-gxttc 2/2 Running 0 3m14s
pod/multus-ggncz 1/1 Running 0 3m15s
pod/multus-m2xjm 1/1 Running 0 3m15s
pod/multus-zjm8n 1/1 Running 0 3m15s
pod/tekton-tasks-operator-7d98668446-prr4b 1/1 Running 0 3m35s
pod/virt-api-94f856488-k7fcb 1/1 Running 0 2m20s
pod/virt-controller-7bd9bdbccc-rn762 1/1 Running 0 107s
pod/virt-exportproxy-5c8d7c5668-mwfnz 1/1 Running 0 107s
pod/virt-handler-mzjv5 1/1 Running 0 107s
pod/virt-operator-6797b9bb9c-5jqj8 1/1 Running 0 3m24s
pod/virt-operator-6797b9bb9c-rrm4n 1/1 Running 0 3m24s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
service/cdi-api ClusterIP 10.0.59.53 &amp;lt;none&amp;gt; 443/TCP 2m41s
service/cdi-prometheus-metrics ClusterIP 10.0.220.94 &amp;lt;none&amp;gt; 8080/TCP 2m37s
service/cdi-uploadproxy ClusterIP 10.0.37.12 &amp;lt;none&amp;gt; 443/TCP 2m46s
service/hostpath-provisioner-operator-service ClusterIP 10.0.118.100 &amp;lt;none&amp;gt; 9443/TCP 3m35s
service/hyperconverged-cluster-webhook-service ClusterIP 10.0.122.49 &amp;lt;none&amp;gt; 4343/TCP 3m35s
service/kubemacpool-service ClusterIP 10.0.96.143 &amp;lt;none&amp;gt; 443/TCP 3m14s
service/kubevirt-operator-webhook ClusterIP 10.0.145.178 &amp;lt;none&amp;gt; 443/TCP 2m22s
service/kubevirt-prometheus-metrics ClusterIP None &amp;lt;none&amp;gt; 443/TCP 2m22s
service/virt-api ClusterIP 10.0.230.208 &amp;lt;none&amp;gt; 443/TCP 2m22s
service/virt-exportproxy ClusterIP 10.0.218.176 &amp;lt;none&amp;gt; 443/TCP 2m22s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
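&lt;p&gt;Since the whole setup hinges on the Esv5 nodes supporting nested virtualization, a quick sanity check is possible. This is a sketch I add for illustration (the node name is just an example from the listing above):&lt;/p&gt;

```shell
# Spot-check that a user-pool node really exposes /dev/kvm, which KubeVirt
# needs for hardware-accelerated VMs (the host filesystem is mounted under
# /host in a node debug pod).
kubectl debug node/aks-userpool-16963728-vmss000000 -it \
  --image=busybox -- ls -l /host/dev/kvm

# virt-handler also labels schedulable nodes, which is easier to read at a glance.
kubectl get nodes -L kubevirt.io/schedulable
```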



&lt;p&gt;KubeVirt Manager is set up using the YAML manifests provided in its GitHub repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/kubevirt-manager/kubevirt-manager
Cloning into &lt;span class="s1"&gt;'kubevirt-manager'&lt;/span&gt;...
remote: Enumerating objects: 4464, &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
remote: Counting objects: 100% &lt;span class="o"&gt;(&lt;/span&gt;1287/1287&lt;span class="o"&gt;)&lt;/span&gt;, &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
remote: Compressing objects: 100% &lt;span class="o"&gt;(&lt;/span&gt;446/446&lt;span class="o"&gt;)&lt;/span&gt;, &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
remote: Total 4464 &lt;span class="o"&gt;(&lt;/span&gt;delta 838&lt;span class="o"&gt;)&lt;/span&gt;, reused 1191 &lt;span class="o"&gt;(&lt;/span&gt;delta 815&lt;span class="o"&gt;)&lt;/span&gt;, pack-reused 3177
Receiving objects: 100% &lt;span class="o"&gt;(&lt;/span&gt;4464/4464&lt;span class="o"&gt;)&lt;/span&gt;, 67.53 MiB | 3.41 MiB/s, &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
Resolving deltas: 100% &lt;span class="o"&gt;(&lt;/span&gt;1461/1461&lt;span class="o"&gt;)&lt;/span&gt;, &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;

~/kubevirt-manager&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; kubernetes/ns.yaml
namespace/kubevirt-manager created

~/kubevirt-manager&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;kubernetes/ns.yaml 

apiVersion: v1
kind: Namespace
metadata:
  name: kubevirt-manager
  labels:
    app: kubevirt-manager
    kubevirt-manager.io/version: 1.2.0

~/kubevirt-manager&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; kubernetes/rbac.yaml

serviceaccount/kubevirt-manager created
clusterrole.rbac.authorization.k8s.io/kubevirt-manager created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-manager created

~/kubevirt-manager&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; kubernetes/deployment.yaml
deployment.apps/kubevirt-manager created

~/kubevirt-manager&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; kubernetes/pc.yaml
priorityclass.scheduling.k8s.io/vm-standard created
priorityclass.scheduling.k8s.io/vm-preemptible created

~/kubevirt-manager&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; kubernetes/service.yaml
service/kubevirt-manager created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;KubeVirt Manager is now deployed; I edit the exposing Service to change its type from ClusterIP to LoadBalancer and reach it over HTTP:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/services-networking/"&gt;Services, Load Balancing, and Networking&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
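&lt;p&gt;The change that &lt;code&gt;kubectl edit&lt;/code&gt; makes interactively below can also be sketched declaratively. This is a hypothetical reconstruction of the edited Service, not the upstream file: the field names follow the standard Kubernetes Service API, and the selector and target port are assumptions based on the deployment above:&lt;/p&gt;

```yaml
# Sketch of the edited Service: only spec.type changes,
# from ClusterIP to LoadBalancer, so AKS provisions a public IP.
apiVersion: v1
kind: Service
metadata:
  name: kubevirt-manager
  namespace: kubevirt-manager
spec:
  type: LoadBalancer   # previously ClusterIP
  selector:
    app: kubevirt-manager   # assumed selector
  ports:
    - port: 8080
      targetPort: 8080
```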

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po,svc &lt;span class="nt"&gt;-n&lt;/span&gt; kubevirt-manager
NAME READY STATUS RESTARTS AGE
pod/kubevirt-manager-5858499887-wfrgn 1/1 Running 0 95s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
service/kubevirt-manager ClusterIP 10.0.173.82 &amp;lt;none&amp;gt; 8080/TCP 63s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl edit &lt;span class="nt"&gt;-n&lt;/span&gt; kubevirt-manager service/kubevirt-manager
service/kubevirt-manager edited

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po,svc &lt;span class="nt"&gt;-n&lt;/span&gt; kubevirt-manager
NAME READY STATUS RESTARTS AGE
pod/kubevirt-manager-5858499887-wfrgn 1/1 Running 0 4m56s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
service/kubevirt-manager LoadBalancer 10.0.173.82 20.74.99.190 8080:30143/TCP 4m24s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fF9RkvKr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ArvtP2hJjeECi_uXc1VLMYg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fF9RkvKr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ArvtP2hJjeECi_uXc1VLMYg.png" alt="" width="880" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then access it via the public IP address provisioned by AKS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6PuT9R1U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AXRl9eCZyS-rxOlWypQtaig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6PuT9R1U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AXRl9eCZyS-rxOlWypQtaig.png" alt="" width="880" height="620"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hs36eVJn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A9n6cTGqnMpBkpeFGWSAXTw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hs36eVJn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A9n6cTGqnMpBkpeFGWSAXTw.png" alt="" width="880" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LNUUOtHU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AKVDlUPqN_kYfRGl0f5cg6Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LNUUOtHU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AKVDlUPqN_kYfRGl0f5cg6Q.png" alt="" width="880" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Krew is installed locally for kubectl, along with the corresponding plugin for KubeVirt:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://krew.sigs.k8s.io/docs/user-guide/quickstart/"&gt;Quickstart&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl krew update
Updated the &lt;span class="nb"&gt;local &lt;/span&gt;copy of plugin index.

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl krew search virt
NAME DESCRIPTION INSTALLED
pv-migrate Migrate data across persistent volumes no
pvmigrate Migrates PVs between StorageClasses no
view-cert View certificate information stored &lt;span class="k"&gt;in &lt;/span&gt;secrets no
view-secret Decode Kubernetes secrets no
view-serviceaccount-kubeconfig Show a kubeconfig setting to access the apiserv... no
virt Control KubeVirt virtual machines using virtctl &lt;span class="nb"&gt;yes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes it possible to launch a small Cirros virtual machine directly:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/cirros-dev/cirros"&gt;GitHub - cirros-dev/cirros&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
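&lt;p&gt;For reference, the &lt;code&gt;vm.yaml&lt;/code&gt; manifest applied below boils down to a minimal VirtualMachine definition along these lines. It is reconstructed here from the &lt;code&gt;kubectl describe&lt;/code&gt; output that follows, so treat it as a sketch rather than the exact upstream file:&lt;/p&gt;

```yaml
# Minimal KubeVirt VirtualMachine, reconstructed from the describe output.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false            # created stopped; started with `kubectl virt start`
  template:
    metadata:
      labels:
        kubevirt.io/domain: testvm
        kubevirt.io/size: small
    spec:
      domain:
        devices:
          disks:
            - name: rootfs
              disk:
                bus: virtio
            - name: cloudinit
              disk:
                bus: virtio
          interfaces:
            - name: default
              macAddress: "02:9d:e4:00:00:00"
              masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootfs
          containerDisk:
            image: kubevirt/cirros-registry-disk-demo
        - name: cloudinit
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=
```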

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubevirt/demo/master/manifests/vm.yaml
virtualmachine.kubevirt.io/testvm created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe vm testvm
Name: testvm
Namespace: default
Labels: &amp;lt;none&amp;gt;
Annotations: kubevirt.io/latest-observed-api-version: v1
              kubevirt.io/storage-observed-api-version: v1alpha3
API Version: kubevirt.io/v1
Kind: VirtualMachine
Metadata:
  Creation Timestamp: 2023-03-09T13:23:32Z
  Finalizers:
    kubevirt.io/virtualMachineControllerFinalize
  Generation: 1
  Managed Fields:
    API Version: kubevirt.io/v1alpha3
    Fields Type: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:kubevirt.io/latest-observed-api-version:
          f:kubevirt.io/storage-observed-api-version:
        f:finalizers:
          .:
          v:&lt;span class="s2"&gt;"kubevirt.io/virtualMachineControllerFinalize"&lt;/span&gt;:
    Manager: Go-http-client
    Operation: Update
    Time: 2023-03-09T13:23:32Z
    API Version: kubevirt.io/v1alpha3
    Fields Type: FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
        f:printableStatus:
        f:volumeSnapshotStatuses:
    Manager: Go-http-client
    Operation: Update
    Subresource: status
    Time: 2023-03-09T13:23:32Z
    API Version: kubevirt.io/v1alpha3
    Fields Type: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:running:
        f:template:
          .:
          f:metadata:
            .:
            f:labels:
              .:
              f:kubevirt.io/domain:
              f:kubevirt.io/size:
          f:spec:
            .:
            f:domain:
              .:
              f:devices:
                .:
                f:disks:
                f:interfaces:
              f:resources:
                .:
                f:requests:
                  .:
                  f:memory:
            f:networks:
            f:volumes:
    Manager: kubectl-client-side-apply
    Operation: Update
    Time: 2023-03-09T13:23:32Z
  Resource Version: 39937
  UID: 01de5614-922a-4ae8-9bec-40eeb9a7c500
Spec:
  Running: &lt;span class="nb"&gt;false
  &lt;/span&gt;Template:
    Metadata:
      Creation Timestamp: &amp;lt;nil&amp;gt;
      Labels:
        kubevirt.io/domain: testvm
        kubevirt.io/size: small
    Spec:
      Domain:
        Devices:
          Disks:
            Disk:
              Bus: virtio
            Name: rootfs
            Disk:
              Bus: virtio
            Name: cloudinit
          Interfaces:
            Mac Address: 02:9d:e4:00:00:00
            Masquerade:
            Name: default
        Machine:
          Type: q35
        Resources:
          Requests:
            Memory: 64M
      Networks:
        Name: default
        Pod:
      Volumes:
        Container Disk:
          Image: kubevirt/cirros-registry-disk-demo
        Name: rootfs
        Cloud Init No Cloud:
          userDataBase64: &lt;span class="nv"&gt;SGkuXG4&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
        Name: cloudinit
Status:
  Conditions:
    Last Probe Time: 2023-03-09T13:23:32Z
    Last Transition Time: 2023-03-09T13:23:32Z
    Message: VMI does not exist
    Reason: VMINotExists
    Status: False
    Type: Ready
  Printable Status: Stopped
  Volume Snapshot Statuses:
    Enabled: &lt;span class="nb"&gt;false
    &lt;/span&gt;Name: rootfs
    Reason: Snapshot is not supported &lt;span class="k"&gt;for &lt;/span&gt;this volumeSource &lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;rootfs]
    Enabled: &lt;span class="nb"&gt;false
    &lt;/span&gt;Name: cloudinit
    Reason: Snapshot is not supported &lt;span class="k"&gt;for &lt;/span&gt;this volumeSource &lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;cloudinit]
Events: &amp;lt;none&amp;gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl virt start testvm
VM testvm was scheduled to start

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe vmi testvm
Name: testvm
Namespace: default
Labels: kubevirt.io/domain&lt;span class="o"&gt;=&lt;/span&gt;testvm
              kubevirt.io/size&lt;span class="o"&gt;=&lt;/span&gt;small
Annotations: kubevirt.io/latest-observed-api-version: v1
              kubevirt.io/storage-observed-api-version: v1alpha3
API Version: kubevirt.io/v1
Kind: VirtualMachineInstance
Metadata:
  Creation Timestamp: 2023-03-09T13:26:14Z
  Finalizers:
    kubevirt.io/virtualMachineControllerFinalize
    foregroundDeleteVirtualMachine
  Generation: 4
  Managed Fields:
    API Version: kubevirt.io/v1alpha3
    Fields Type: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubevirt.io/latest-observed-api-version:
          f:kubevirt.io/storage-observed-api-version:
        f:finalizers:
          .:
          v:&lt;span class="s2"&gt;"kubevirt.io/virtualMachineControllerFinalize"&lt;/span&gt;:
        f:labels:
          .:
          f:kubevirt.io/domain:
          f:kubevirt.io/size:
        f:ownerReferences:
          .:
          k:&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"uid"&lt;/span&gt;:&lt;span class="s2"&gt;"01de5614-922a-4ae8-9bec-40eeb9a7c500"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;:
      f:spec:
        .:
        f:domain:
          .:
          f:devices:
            .:
            f:disks:
            f:interfaces:
          f:firmware:
            .:
            f:uuid:
          f:machine:
            .:
            f:type:
          f:resources:
            .:
            f:requests:
              .:
              f:memory:
        f:networks:
        f:volumes:
      f:status:
        .:
        f:activePods:
          .:
          f:f2afda42-6f8f-4bad-b35a-7591c3233cc1:
        f:conditions:
        f:guestOSInfo:
        f:phase:
        f:phaseTransitionTimestamps:
        f:qosClass:
        f:runtimeUser:
        f:virtualMachineRevisionName:
    Manager: Go-http-client
    Operation: Update
    Time: 2023-03-09T13:26:14Z
  Owner References:
    API Version: kubevirt.io/v1
    Block Owner Deletion: &lt;span class="nb"&gt;true
    &lt;/span&gt;Controller: &lt;span class="nb"&gt;true
    &lt;/span&gt;Kind: VirtualMachine
    Name: testvm
    UID: 01de5614-922a-4ae8-9bec-40eeb9a7c500
  Resource Version: 41728
  UID: 745caf89-6e75-47d1-b094-91e6119f3cc4
Spec:
  Domain:
    Cpu:
      Cores: 1
      Model: host-model
      Sockets: 1
      Threads: 1
    Devices:
      Disks:
        Disk:
          Bus: virtio
        Name: rootfs
        Disk:
          Bus: virtio
        Name: cloudinit
      Interfaces:
        Mac Address: 02:9d:e4:00:00:00
        Masquerade:
        Name: default
    Features:
      Acpi:
        Enabled: &lt;span class="nb"&gt;true
    &lt;/span&gt;Firmware:
      Uuid: 5a9fc181-957e-5c32-9e5a-2de5e9673531
    Machine:
      Type: q35
    Resources:
      Requests:
        Memory: 64M
  Networks:
    Name: default
    Pod:
  Volumes:
    Container Disk:
      Image: kubevirt/cirros-registry-disk-demo
      Image Pull Policy: Always
    Name: rootfs
    Cloud Init No Cloud:
      userDataBase64: &lt;span class="nv"&gt;SGkuXG4&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
    Name: cloudinit
Status:
  Active Pods:
    f2afda42-6f8f-4bad-b35a-7591c3233cc1: aks-agentpool-16963728-vmss000000
  Conditions:
    Last Probe Time: 2023-03-09T13:26:14Z
    Last Transition Time: 2023-03-09T13:26:14Z
    Message: Guest VM is not reported as running
    Reason: GuestNotRunning
    Status: False
    Type: Ready
  Guest OS Info:
  Phase: Scheduling
  Phase Transition Timestamps:
    Phase: Pending
    Phase Transition Timestamp: 2023-03-09T13:26:14Z
    Phase: Scheduling
    Phase Transition Timestamp: 2023-03-09T13:26:14Z
  Qos Class: Burstable
  Runtime User: 107
  Virtual Machine Revision Name: revision-start-vm-01de5614-922a-4ae8-9bec-40eeb9a7c500-2
Events:
  Type Reason Age From Message
  &lt;span class="nt"&gt;----&lt;/span&gt; &lt;span class="nt"&gt;------&lt;/span&gt; &lt;span class="nt"&gt;----&lt;/span&gt; &lt;span class="nt"&gt;----&lt;/span&gt; &lt;span class="nt"&gt;-------&lt;/span&gt;
  Normal SuccessfulCreate 8s virtualmachine-controller Created virtual machine pod virt-launcher-testvm-b77hf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
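&lt;p&gt;A side note on the &lt;code&gt;Cloud Init No Cloud&lt;/code&gt; volume shown above: its user data is passed base64-encoded, and can be checked locally before applying the manifest:&lt;/p&gt;

```shell
# Decode the userDataBase64 value from the testvm manifest.
echo 'SGkuXG4=' | base64 -d
```

&lt;p&gt;which prints the literal string &lt;code&gt;Hi.\n&lt;/code&gt;, a minimal, essentially empty cloud-init payload.&lt;/p&gt;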



&lt;p&gt;The VM is now running in KubeVirt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get vmis
NAME AGE PHASE IP NODENAME READY
testvm 61s Running 10.244.2.38 aks-agentpool-16963728-vmss000000 True

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get vms
NAME AGE STATUS READY
testvm 3m47s Running True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hfoT-Hib--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ap32nJgYeypDrJwfcc3budQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hfoT-Hib--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ap32nJgYeypDrJwfcc3budQ.png" alt="" width="880" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;with its VNC console provided by KubeVirt Manager:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nxRp5lNP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AiHlp8om-_htJo09lWxkajw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nxRp5lNP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AiHlp8om-_htJo09lWxkajw.png" alt="" width="880" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tkpouq9R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AgzKOMtiGOvfZNzhS-Th2Bg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tkpouq9R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AgzKOMtiGOvfZNzhS-Th2Bg.png" alt="" width="880" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--saynoOod--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ANqQOylWFSEQLdjzQ7t1YwA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--saynoOod--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ANqQOylWFSEQLdjzQ7t1YwA.png" alt="" width="880" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kriGlvt4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AZ3xImKAQlZvXRZ9HZubnMQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kriGlvt4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AZ3xImKAQlZvXRZ9HZubnMQ.png" alt="" width="880" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Of course, I could also access the virtual machine’s console locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl virt console testvm

testvm login: cirros
Password: gocubsgo

&lt;span class="nv"&gt;$ &lt;/span&gt;ps aux
PID USER COMMAND
    1 root init
    2 root &lt;span class="o"&gt;[&lt;/span&gt;kthreadd]
    3 root &lt;span class="o"&gt;[&lt;/span&gt;ksoftirqd/0]
    4 root &lt;span class="o"&gt;[&lt;/span&gt;kworker/0:0]
    5 root &lt;span class="o"&gt;[&lt;/span&gt;kworker/0:0H]
    7 root &lt;span class="o"&gt;[&lt;/span&gt;rcu_sched]
    8 root &lt;span class="o"&gt;[&lt;/span&gt;rcu_bh]
    9 root &lt;span class="o"&gt;[&lt;/span&gt;migration/0]
   10 root &lt;span class="o"&gt;[&lt;/span&gt;watchdog/0]
   11 root &lt;span class="o"&gt;[&lt;/span&gt;kdevtmpfs]
   12 root &lt;span class="o"&gt;[&lt;/span&gt;netns]
   13 root &lt;span class="o"&gt;[&lt;/span&gt;perf]
   14 root &lt;span class="o"&gt;[&lt;/span&gt;khungtaskd]
   15 root &lt;span class="o"&gt;[&lt;/span&gt;writeback]
   16 root &lt;span class="o"&gt;[&lt;/span&gt;ksmd]
   17 root &lt;span class="o"&gt;[&lt;/span&gt;crypto]
   18 root &lt;span class="o"&gt;[&lt;/span&gt;kintegrityd]
   19 root &lt;span class="o"&gt;[&lt;/span&gt;bioset]
   20 root &lt;span class="o"&gt;[&lt;/span&gt;kblockd]
   21 root &lt;span class="o"&gt;[&lt;/span&gt;ata_sff]
   22 root &lt;span class="o"&gt;[&lt;/span&gt;md]
   23 root &lt;span class="o"&gt;[&lt;/span&gt;devfreq_wq]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Stopping the VM can be done graphically or with the usual command line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl virt stop testvm
VM testvm was scheduled to stop

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete vm testvm
virtualmachine.kubevirt.io &lt;span class="s2"&gt;"testvm"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FwbWiqR9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A_rBQ71ciolH9dOHlrAozTQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FwbWiqR9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A_rBQ71ciolH9dOHlrAozTQ.png" alt="" width="880" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cLep2StZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ASGMO6L4ytBLg8qdzEiicPA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cLep2StZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ASGMO6L4ytBLg8qdzEiicPA.png" alt="" width="880" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the graphical options, it would be possible to reproduce the virtual machine creation workflow via the “Containerized Data Importer (CDI)” …&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6_nQ80T---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A5D6mb2HPMDu3jVC0" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6_nQ80T---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A5D6mb2HPMDu3jVC0" alt="" width="880" height="648"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubevirt.io/labs/kubernetes/lab2.html"&gt;Experiment with CDI | KubeVirt.io&lt;/a&gt;&lt;/p&gt;
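&lt;p&gt;A CDI import is typically declared through a DataVolume resource. A minimal sketch, under the &lt;code&gt;cdi.kubevirt.io&lt;/code&gt; API group covered in the lab above; the image URL, name and storage size here are placeholder assumptions:&lt;/p&gt;

```yaml
# Hypothetical DataVolume: CDI downloads the image over HTTP
# and writes it into a freshly provisioned PVC.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-dv
spec:
  source:
    http:
      url: "https://example.com/images/cirros.img"   # placeholder URL
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
```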

&lt;p&gt;KubeVirt Manager also offers integration with Prometheus (and thus, potentially, with Grafana).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kubevirt-manager/kubevirt-manager#prometheus-integration"&gt;GitHub - kubevirt-manager/kubevirt-manager: Kubevirt Web UI / Web Interface Manager&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This still-young project is under active development, and more advanced virtual machine control features are to be expected …&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jcG_uVLk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1000/0%2AYmVyW6IaiUM5JIJ_.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jcG_uVLk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1000/0%2AYmVyW6IaiUM5JIJ_.jpg" alt="" width="880" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To be continued!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>azure</category>
      <category>virtualisation</category>
    </item>
    <item>
      <title>Packaging JupyterLite for deployment on Fly.io’s global application platform …</title>
      <dc:creator>Karim</dc:creator>
      <pubDate>Wed, 01 Mar 2023 23:37:04 +0000</pubDate>
      <link>https://forem.com/deep75/empaqueter-jupyterlite-pour-etre-deploye-sur-la-plateforme-globale-dapplications-flyio--2gdh</link>
      <guid>https://forem.com/deep75/empaqueter-jupyterlite-pour-etre-deploye-sur-la-plateforme-globale-dapplications-flyio--2gdh</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c7LJYDyH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A-aryh-0JkriAcYZr" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c7LJYDyH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2A-aryh-0JkriAcYZr" alt="" width="880" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fly.io (which some present as the main alternative to Heroku) is, like many infrastructure platforms, a complex system built on a number of open source projects, notably HashiCorp’s Nomad, Consul and Vault.&lt;/p&gt;

&lt;p&gt;At the heart of application execution sits a variant of Amazon’s Firecracker, with monitoring dashboards built on Prometheus and Grafana.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/deep75/load-balancing-avec-haproxy-nomad-et-consul--1ga4"&gt;Load Balancing avec HAProxy, Nomad et Consul …&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yU98SxKx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/612/0%2AF7VA1n00dyBDwGH4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yU98SxKx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/612/0%2AF7VA1n00dyBDwGH4.jpg" alt="" width="612" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://fly.io/docs/about/open-source/"&gt;Fly and Open Source&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fly.io uses Firecracker microVMs: lightweight, secure virtual machines based on strong hardware virtualization.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://deep75.medium.com/des-clusters-kubernetes-imbriqu%C3%A9s-avec-ignite-firecracker-containerd-kind-et-rancher-34e5e92c257b"&gt;Des clusters Kubernetes imbriqués avec Ignite, Firecracker, Containerd, Kind et Rancher …&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/deep75/kubefire-creer-et-gerer-des-clusters-kubernetes-en-utilisant-des-microvms-avec-firecracker--2jge"&gt;KubeFire : Créer et gèrer des clusters Kubernetes en utilisant des microVMs avec Firecracker …&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://deep75.medium.com/serverless-un-rapide-aper%C3%A7u-de-firecracker-dans-azure-via-de-la-virtualisation-imbriqu%C3%A9e-ef62a36147b3"&gt;Serverless : un rapide aperçu de Firecracker dans Azure via de la virtualisation imbriquée …&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fly Machines are Firecracker VMs with a fast REST API that can boot instances in roughly 300 ms, in every region supported by Fly.io.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://fly.io/docs/machines/guides-examples/functions-with-machines/"&gt;Run User Code on Fly Machines&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One use of Machines is to run user code to extend your service, or as a service in its own right. This is sometimes called “Functions-as-a-Service” (FaaS). The general idea is as follows:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;package the user’s code&lt;/p&gt;

&lt;p&gt;define an execution environment&lt;/p&gt;

&lt;p&gt;launch the code in a VM&lt;/p&gt;

&lt;p&gt;then shut the VM down when it is idle, to save on compute bills.&lt;/p&gt;

&lt;p&gt;All while billing users on demand! …&lt;/p&gt;
&lt;/blockquote&gt;
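&lt;p&gt;These steps map onto the Fly Machines REST API: creating a machine is a single POST of a JSON body to &lt;code&gt;/v1/apps/{app}/machines&lt;/code&gt;. A minimal sketch, in which the machine name, image and sizing are placeholder assumptions:&lt;/p&gt;

```json
{
  "name": "faas-worker-1",
  "config": {
    "image": "registry.fly.io/my-faas-app:latest",
    "guest": { "cpus": 1, "cpu_kind": "shared", "memory_mb": 256 },
    "auto_destroy": true
  }
}
```

&lt;p&gt;With &lt;code&gt;auto_destroy&lt;/code&gt; set, the VM is removed once it exits, which is what makes the pay-per-use model workable.&lt;/p&gt;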

&lt;p&gt;Here I will focus on the JupyterLite project. JupyterLite is a JupyterLab distribution that runs entirely in the browser, built from the ground up out of JupyterLab components and extensions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://jupyterlite.readthedocs.io/en/latest/"&gt;JupyterLite - JupyterLite 0.1.0-beta.18 documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although JupyterLite is currently developed by the Jupyter core developers, the project is still unofficial.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://jupyterlite.readthedocs.io/en/latest/howto/index.html"&gt;How-To Guides - JupyterLite 0.1.0-beta.18 documentation&lt;/a&gt;&lt;/p&gt;
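&lt;p&gt;Since JupyterLite builds down to a purely static site, packaging it for a platform like Fly.io can be sketched as a two-stage Dockerfile. The &lt;code&gt;jupyterlite&lt;/code&gt; package name matches the 0.1.0-beta releases documented above; the base images and versions are assumptions:&lt;/p&gt;

```dockerfile
# Stage 1: build the static JupyterLite site.
FROM python:3.10-slim AS build
RUN pip install --no-cache-dir jupyterlite
RUN jupyter lite build --output-dir /dist

# Stage 2: serve the static output with a plain web server.
FROM nginx:alpine
COPY --from=build /dist /usr/share/nginx/html
```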

&lt;p&gt;To do so, the first step is to fetch Mambaforge:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/conda-forge/miniforge#mambaforge"&gt;GitHub - conda-forge/miniforge: A conda-forge distribution.&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@julia50b1000001:~&lt;span class="nv"&gt;$ &lt;/span&gt;wget &lt;span class="nt"&gt;-c&lt;/span&gt; https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-Linux-x86_64.sh
ubuntu@julia50b1000001:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x Mambaforge-Linux-x86_64.sh 
ubuntu@julia50b1000001:~&lt;span class="nv"&gt;$ &lt;/span&gt;./Mambaforge-Linux-x86_64.sh 

Welcome to Mambaforge 22.11.1-4

In order to &lt;span class="k"&gt;continue &lt;/span&gt;the installation process, please review the license
agreement.
Please, press ENTER to &lt;span class="k"&gt;continue&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; 
Miniforge installer code uses BSD-3-Clause license as stated below.
Installing base environment...
Transaction finished
installation finished.
Do you wish the installer to initialize Mambaforge
by running conda init? &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;|no]
&lt;span class="o"&gt;[&lt;/span&gt;no] &lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;yes
&lt;/span&gt;no change /home/ubuntu/mambaforge/condabin/conda
no change /home/ubuntu/mambaforge/bin/conda
no change /home/ubuntu/mambaforge/bin/conda-env
no change /home/ubuntu/mambaforge/bin/activate
no change /home/ubuntu/mambaforge/bin/deactivate
no change /home/ubuntu/mambaforge/etc/profile.d/conda.sh
no change /home/ubuntu/mambaforge/etc/fish/conf.d/conda.fish
no change /home/ubuntu/mambaforge/shell/condabin/Conda.psm1
no change /home/ubuntu/mambaforge/shell/condabin/conda-hook.ps1
no change /home/ubuntu/mambaforge/lib/python3.10/site-packages/xontrib/conda.xsh
no change /home/ubuntu/mambaforge/etc/profile.d/conda.csh
modified /home/ubuntu/.bashrc

&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; For changes to take effect, close and re-open your current shell. &amp;lt;&lt;span class="o"&gt;==&lt;/span&gt;

                  ____  ____
                 / &lt;span class="se"&gt;\ &lt;/span&gt;/ &lt;span class="se"&gt;\ &lt;/span&gt;/ &lt;span class="se"&gt;\ &lt;/span&gt;/ &lt;span class="se"&gt;\&lt;/span&gt;
                / &lt;span class="se"&gt;\/&lt;/span&gt; &lt;span class="se"&gt;\/&lt;/span&gt; &lt;span class="se"&gt;\/&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
███████████████/ /██/ /██/ /██/ /████████████████████████
              / / &lt;span class="se"&gt;\ &lt;/span&gt;/ &lt;span class="se"&gt;\ &lt;/span&gt;/ &lt;span class="se"&gt;\ &lt;/span&gt;/ &lt;span class="se"&gt;\ \ &lt;/span&gt;____
             / / &lt;span class="se"&gt;\_&lt;/span&gt;/ &lt;span class="se"&gt;\_&lt;/span&gt;/ &lt;span class="se"&gt;\_&lt;/span&gt;/ &lt;span class="se"&gt;\ &lt;/span&gt;o &lt;span class="se"&gt;\_&lt;/span&gt;_,
            / _/ &lt;span class="se"&gt;\ &lt;/span&gt;_____ / &lt;span class="sb"&gt;`&lt;/span&gt;
            |/
        ███╗ ███╗ █████╗ ███╗ ███╗██████╗ █████╗
        ████╗ ████║██╔══██╗████╗ ████║██╔══██╗██╔══██╗
        ██╔████╔██║███████║██╔████╔██║██████╔╝███████║
        ██║╚██╔╝██║██╔══██║██║╚██╔╝██║██╔══██╗██╔══██║
        ██║ ╚═╝ ██║██║ ██║██║ ╚═╝ ██║██████╔╝██║ ██║
        ╚═╝ ╚═╝╚═╝ ╚═╝╚═╝ ╚═╝╚═════╝ ╚═╝ ╚═╝

        mamba &lt;span class="o"&gt;(&lt;/span&gt;1.1.0&lt;span class="o"&gt;)&lt;/span&gt; supported by @QuantStack

        GitHub: https://github.com/mamba-org/mamba
        Twitter: https://twitter.com/QuantStack

█████████████████████████████████████████████████████████████

no change /home/ubuntu/mambaforge/condabin/conda
no change /home/ubuntu/mambaforge/bin/conda
no change /home/ubuntu/mambaforge/bin/conda-env
no change /home/ubuntu/mambaforge/bin/activate
no change /home/ubuntu/mambaforge/bin/deactivate
no change /home/ubuntu/mambaforge/etc/profile.d/conda.sh
no change /home/ubuntu/mambaforge/etc/fish/conf.d/conda.fish
no change /home/ubuntu/mambaforge/shell/condabin/Conda.psm1
no change /home/ubuntu/mambaforge/shell/condabin/conda-hook.ps1
no change /home/ubuntu/mambaforge/lib/python3.10/site-packages/xontrib/conda.xsh
no change /home/ubuntu/mambaforge/etc/profile.d/conda.csh
no change /home/ubuntu/.bashrc
No action taken.
Added mamba to /home/ubuntu/.bashrc

&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; For changes to take effect, close and re-open your current shell. &amp;lt;&lt;span class="o"&gt;==&lt;/span&gt;

If you&lt;span class="s1"&gt;'d prefer that conda'&lt;/span&gt;s base environment not be activated on startup, 
   &lt;span class="nb"&gt;set &lt;/span&gt;the auto_activate_base parameter to &lt;span class="nb"&gt;false&lt;/span&gt;: 

conda config &lt;span class="nt"&gt;--set&lt;/span&gt; auto_activate_base &lt;span class="nb"&gt;false

&lt;/span&gt;Thank you &lt;span class="k"&gt;for &lt;/span&gt;installing Mambaforge!
ubuntu@julia50b1000001:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;source&lt;/span&gt; .bashrc
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@julia50b1000001:~&lt;span class="nv"&gt;$ &lt;/span&gt;mamba
usage: mamba &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;-h&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;-V&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="nb"&gt;command&lt;/span&gt; ...

conda is a tool &lt;span class="k"&gt;for &lt;/span&gt;managing and deploying applications, environments and packages.

Options:

positional arguments:
  &lt;span class="nb"&gt;command
    &lt;/span&gt;clean Remove unused packages and caches.
    compare Compare packages between conda environments.
    config Modify configuration values &lt;span class="k"&gt;in&lt;/span&gt; .condarc. This is modeled after the git config command. Writes to the user .condarc file &lt;span class="o"&gt;(&lt;/span&gt;/home/ubuntu/.condarc&lt;span class="o"&gt;)&lt;/span&gt; by default.
                      Use the &lt;span class="nt"&gt;--show-sources&lt;/span&gt; flag to display all identified configuration locations on your computer.
    create Create a new conda environment from a list of specified packages.
    info Display information about current conda install.
    init Initialize conda &lt;span class="k"&gt;for &lt;/span&gt;shell interaction.
    &lt;span class="nb"&gt;install &lt;/span&gt;Installs a list of packages into a specified conda environment.
    list List installed packages &lt;span class="k"&gt;in &lt;/span&gt;a conda environment.
    package Low-level conda package utility. &lt;span class="o"&gt;(&lt;/span&gt;EXPERIMENTAL&lt;span class="o"&gt;)&lt;/span&gt;
    remove &lt;span class="o"&gt;(&lt;/span&gt;uninstall&lt;span class="o"&gt;)&lt;/span&gt;
                      Remove a list of packages from a specified conda environment.
    rename Renames an existing environment.
    run Run an executable &lt;span class="k"&gt;in &lt;/span&gt;a conda environment.
    search Search &lt;span class="k"&gt;for &lt;/span&gt;packages and display associated information.The input is a MatchSpec, a query language &lt;span class="k"&gt;for &lt;/span&gt;conda packages. See examples below.
    update &lt;span class="o"&gt;(&lt;/span&gt;upgrade&lt;span class="o"&gt;)&lt;/span&gt; Updates conda packages to the latest compatible version.
    notices Retrieves latest channel notifications.
    repoquery Query repositories using mamba.

options:
  &lt;span class="nt"&gt;-h&lt;/span&gt;, &lt;span class="nt"&gt;--help&lt;/span&gt; Show this &lt;span class="nb"&gt;help &lt;/span&gt;message and exit.
  &lt;span class="nt"&gt;-V&lt;/span&gt;, &lt;span class="nt"&gt;--version&lt;/span&gt; Show the conda version number and exit.

conda commands available from other packages &lt;span class="o"&gt;(&lt;/span&gt;legacy&lt;span class="o"&gt;)&lt;/span&gt;:
  &lt;span class="nb"&gt;env

                                           &lt;/span&gt;__
          ________  _______  ________ / /_ ____ _
         / / / / __&lt;span class="sb"&gt;`&lt;/span&gt;__ &lt;span class="se"&gt;\/&lt;/span&gt; __&lt;span class="sb"&gt;`&lt;/span&gt;/__ &lt;span class="sb"&gt;`&lt;/span&gt; __&lt;span class="se"&gt;\/&lt;/span&gt;__ &lt;span class="se"&gt;\/&lt;/span&gt; __ &lt;span class="sb"&gt;`&lt;/span&gt;/
        / /_/ / / / / / / /_/ / / / / / / /_/ / /_/ /
       / &lt;span class="nb"&gt;.&lt;/span&gt; ___/_/ /_/ /_/&lt;span class="se"&gt;\_&lt;/span&gt;_ ,_/_/ /_/ /_/_. ___/&lt;span class="se"&gt;\_&lt;/span&gt;_ ,_/
      /_/

Transaction

  Prefix: /home/ubuntu/mambaforge

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, install JupyterLite with pip (it will soon be available on conda-forge):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/jupyterlite/jupyterlite"&gt;GitHub - jupyterlite/jupyterlite: Wasm powered Jupyter running in the browser 💡&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@julia50b1000001:~&lt;span class="nv"&gt;$ &lt;/span&gt;python &lt;span class="nt"&gt;-m&lt;/span&gt; pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--pre&lt;/span&gt; jupyterlite

Collecting jupyterlite
  Downloading jupyterlite-0.1.0b18-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;7.6 MB&lt;span class="o"&gt;)&lt;/span&gt;
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.6/7.6 MB 71.3 MB/s eta 0:00:00
Collecting doit&amp;lt;1,&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;0.34
  Downloading doit-0.36.0-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;85 kB&lt;span class="o"&gt;)&lt;/span&gt;
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 85.9/85.9 kB 17.4 MB/s eta 0:00:00
Collecting jupyter_core&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;4.7
  Downloading jupyter_core-5.2.0-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;94 kB&lt;span class="o"&gt;)&lt;/span&gt;
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 94.3/94.3 kB 19.5 MB/s eta 0:00:00
Collecting cloudpickle
  Downloading cloudpickle-2.2.1-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;25 kB&lt;span class="o"&gt;)&lt;/span&gt;
Collecting importlib-metadata&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;4.4
  Downloading importlib_metadata-6.0.0-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;21 kB&lt;span class="o"&gt;)&lt;/span&gt;
Collecting platformdirs&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;2.5
  Downloading platformdirs-3.0.0-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;14 kB&lt;span class="o"&gt;)&lt;/span&gt;
Collecting traitlets&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;5.3
  Downloading traitlets-5.9.0-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;117 kB&lt;span class="o"&gt;)&lt;/span&gt;
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 117.4/117.4 kB 21.0 MB/s eta 0:00:00
Collecting zipp&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;0.5
  Downloading zipp-3.15.0-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;6.8 kB&lt;span class="o"&gt;)&lt;/span&gt;
Installing collected packages: zipp, traitlets, platformdirs, cloudpickle, jupyter_core, importlib-metadata, doit, jupyterlite
Successfully installed cloudpickle-2.2.1 doit-0.36.0 importlib-metadata-6.0.0 jupyter_core-5.2.0 jupyterlite-0.1.0b18 platformdirs-3.0.0 traitlets-5.9.0 zipp-3.15.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
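&lt;p&gt;A quick sanity check that the CLI landed on the PATH (a minimal sketch; &lt;strong&gt;&lt;em&gt;--version&lt;/em&gt;&lt;/strong&gt; is the standard flag for traitlets-based Jupyter applications, so it should apply here too):&lt;/p&gt;

```shell
# Verify the jupyterlite CLI is installed and print its version
jupyter lite --version
```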



&lt;p&gt;Install a few kernels to go along with it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(base) ubuntu@julia50b1000001:~$ pip install jupyterlite-xeus-python
Collecting jupyterlite-xeus-python
  Downloading jupyterlite_xeus_python-0.6.3-py3-none-any.whl (18.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.3/18.3 MB 4.0 MB/s eta 0:00:00
Requirement already satisfied: jupyterlite in ./mambaforge/lib/python3.10/site-packages (from jupyterlite-xeus-python) (0.1.0b18)
Collecting typer
  Downloading typer-0.7.0-py3-none-any.whl (38 kB)
Collecting empack&amp;lt;3,&amp;gt;=2.0.9
  Downloading empack-2.0.9-py3-none-any.whl (15 kB)
Requirement already satisfied: requests in ./mambaforge/lib/python3.10/site-packages (from jupyterlite-xeus-python) (2.28.2)
Requirement already satisfied: traitlets in ./mambaforge/lib/python3.10/site-packages (from jupyterlite-xeus-python) (5.9.0)
Collecting pyyaml
  Downloading PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (682 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 682.2/682.2 kB 67.7 MB/s eta 0:00:00
Collecting networkx
  Downloading networkx-3.0-py3-none-any.whl (2.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 73.2 MB/s eta 0:00:00
Collecting appdirs
  Downloading appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB)
Collecting pydantic
  Downloading pydantic-1.10.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.1/3.1 MB 116.2 MB/s eta 0:00:00
Requirement already satisfied: jupyter_core&amp;gt;=4.7 in ./mambaforge/lib/python3.10/site-packages (from jupyterlite-&amp;gt;jupyterlite-xeus-python) (5.2.0)
Requirement already satisfied: doit&amp;lt;1,&amp;gt;=0.34 in ./mambaforge/lib/python3.10/site-packages (from jupyterlite-&amp;gt;jupyterlite-xeus-python) (0.36.0)
Requirement already satisfied: urllib3&amp;lt;1.27,&amp;gt;=1.21.1 in ./mambaforge/lib/python3.10/site-packages (from requests-&amp;gt;jupyterlite-xeus-python) (1.26.14)
Requirement already satisfied: idna&amp;lt;4,&amp;gt;=2.5 in ./mambaforge/lib/python3.10/site-packages (from requests-&amp;gt;jupyterlite-xeus-python) (3.4)
Requirement already satisfied: charset-normalizer&amp;lt;4,&amp;gt;=2 in ./mambaforge/lib/python3.10/site-packages (from requests-&amp;gt;jupyterlite-xeus-python) (2.1.1)
Requirement already satisfied: certifi&amp;gt;=2017.4.17 in ./mambaforge/lib/python3.10/site-packages (from requests-&amp;gt;jupyterlite-xeus-python) (2022.12.7)
Collecting click&amp;lt;9.0.0,&amp;gt;=7.1.1
  Downloading click-8.1.3-py3-none-any.whl (96 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 96.6/96.6 kB 19.1 MB/s eta 0:00:00
Requirement already satisfied: cloudpickle in ./mambaforge/lib/python3.10/site-packages (from doit&amp;lt;1,&amp;gt;=0.34-&amp;gt;jupyterlite-&amp;gt;jupyterlite-xeus-python) (2.2.1)
Requirement already satisfied: importlib-metadata&amp;gt;=4.4 in ./mambaforge/lib/python3.10/site-packages (from doit&amp;lt;1,&amp;gt;=0.34-&amp;gt;jupyterlite-&amp;gt;jupyterlite-xeus-python) (6.0.0)
Requirement already satisfied: platformdirs&amp;gt;=2.5 in ./mambaforge/lib/python3.10/site-packages (from jupyter_core&amp;gt;=4.7-&amp;gt;jupyterlite-&amp;gt;jupyterlite-xeus-python) (3.0.0)
Collecting typing-extensions&amp;gt;=4.2.0
  Downloading typing_extensions-4.5.0-py3-none-any.whl (27 kB)
Requirement already satisfied: zipp&amp;gt;=0.5 in ./mambaforge/lib/python3.10/site-packages (from importlib-metadata&amp;gt;=4.4-&amp;gt;doit&amp;lt;1,&amp;gt;=0.34-&amp;gt;jupyterlite-&amp;gt;jupyterlite-xeus-python) (3.15.0)
Installing collected packages: appdirs, typing-extensions, pyyaml, networkx, click, typer, pydantic, empack, jupyterlite-xeus-python
Successfully installed appdirs-1.4.4 click-8.1.3 empack-2.0.9 jupyterlite-xeus-python-0.6.3 networkx-3.0 pydantic-1.10.5 pyyaml-6.0 typer-0.7.0 typing-extensions-4.5.0
(base) ubuntu@julia50b1000001:~$ pip install jupyterlite-p5-kernel
Collecting jupyterlite-p5-kernel
  Downloading jupyterlite_p5_kernel-0.1.1-py3-none-any.whl (21 kB)
Installing collected packages: jupyterlite-p5-kernel
Successfully installed jupyterlite-p5-kernel-0.1.1
(base) ubuntu@julia50b1000001:~$ pip install jupyterlite_xeus_lua
Collecting jupyterlite_xeus_lua
  Downloading jupyterlite_xeus_lua-0.3.3-py3-none-any.whl (2.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.2/2.2 MB 2.6 MB/s eta 0:00:00
Installing collected packages: jupyterlite_xeus_lua
Successfully installed jupyterlite_xeus_lua-0.3.3
(base) ubuntu@julia50b1000001:~$ pip install jupyterlite_xeus_wren
Collecting jupyterlite_xeus_wren
  Downloading jupyterlite_xeus_wren-0.2.1-py3-none-any.whl (4.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.2/4.2 MB 3.9 MB/s eta 0:00:00
Collecting jupyter-wren-syntax
  Downloading jupyter_wren_syntax-0.1.1-py3-none-any.whl (68 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 68.6/68.6 kB 8.8 MB/s eta 0:00:00
Installing collected packages: jupyter-wren-syntax, jupyterlite_xeus_wren
Successfully installed jupyter-wren-syntax-0.1.1 jupyterlite_xeus_wren-0.2.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then generate the static content, a directory containing JupyterLite and its extensions, with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@julia50b1000001:~&lt;span class="nv"&gt;$ &lt;/span&gt;jupyter lite build &lt;span class="nt"&gt;--output-dir&lt;/span&gt; dist

&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@julia50b1000001:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls &lt;/span&gt;dist/
api config-utils.js icon-120x120.png jupyter-lite.ipynb kernelspecs package.json retro
bootstrap.js doc icon-512x512.png jupyter-lite.json lab piplite.schema.v0.json service-worker-b2fb40a.js
build extensions index.html jupyterlite.schema.v0.json manifest.webmanifest repl tree
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Now that the static assets have been built, you can serve them with a simple HTTP server and access JupyterLite from a web browser.&lt;/p&gt;
&lt;/blockquote&gt;
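&lt;p&gt;Any static file server will do; for instance, a minimal sketch using nothing but Python’s built-in &lt;strong&gt;&lt;em&gt;http.server&lt;/em&gt;&lt;/strong&gt; module (assuming the build output lives in &lt;strong&gt;&lt;em&gt;dist&lt;/em&gt;&lt;/strong&gt;, as generated above):&lt;/p&gt;

```shell
# Serve the JupyterLite build output over HTTP on port 8000,
# using only the Python standard library
python -m http.server 8000 --directory dist
# then open http://127.0.0.1:8000/index.html in a browser
```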

&lt;p&gt;The &lt;strong&gt;&lt;em&gt;jupyter lite serve&lt;/em&gt;&lt;/strong&gt; command offers a web server powered either by Python’s built-in &lt;strong&gt;&lt;em&gt;http.server&lt;/em&gt;&lt;/strong&gt; module or by &lt;strong&gt;&lt;em&gt;Tornado&lt;/em&gt;&lt;/strong&gt;, which will likely be available if other Jupyter tools are installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@julia50b1000001:~&lt;span class="nv"&gt;$ &lt;/span&gt;jupyter lite serve 

Serving JupyterLite Debug Server from:
            /home/ubuntu/dist/_output
        on:
            http://127.0.0.1:8000/index.html

        &lt;span class="k"&gt;***&lt;/span&gt;Exit by:&lt;span class="k"&gt;***&lt;/span&gt;
            - Pressing Ctrl+C
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OSyBbEgA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AmjUJneYlznfGVFJy16pnzw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OSyBbEgA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AmjUJneYlznfGVFJy16pnzw.png" alt="" width="880" height="623"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All that remains is to package it for deployment on the fly.io platform. For that, I follow the documentation page dedicated to deploying static content:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://fly.io/docs/languages-and-frameworks/static/"&gt;Run a Static Website&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Install the flyctl client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@julia50b1000001:~&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-L&lt;/span&gt; https://fly.io/install.sh | sh

  % Total % Received % Xferd Average Speed Time Time Time Current
                                 Dload Upload Total Spent Left Speed
100 1475 0 1475 0 0 6089 0 &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- &lt;span class="nt"&gt;--&lt;/span&gt;:--:-- 6069
&lt;span class="c"&gt;######################################################################## 100.0%&lt;/span&gt;
&lt;span class="nb"&gt;set &lt;/span&gt;channel to shell
flyctl was installed successfully to /home/ubuntu/.fly/bin/flyctl
Manually add the directory to your &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.bash_profile &lt;span class="o"&gt;(&lt;/span&gt;or similar&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;FLYCTL_INSTALL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/home/ubuntu/.fly"&lt;/span&gt;
  &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$FLYCTL_INSTALL&lt;/span&gt;&lt;span class="s2"&gt;/bin:&lt;/span&gt;&lt;span class="nv"&gt;$PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
Run &lt;span class="s1"&gt;'/home/ubuntu/.fly/bin/flyctl --help'&lt;/span&gt; to get started

&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@julia50b1000001:~&lt;span class="nv"&gt;$ &lt;/span&gt;flyctl auth login

&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@julia50b1000001:~&lt;span class="nv"&gt;$ &lt;/span&gt;flyctl
This is flyctl, the Fly.io &lt;span class="nb"&gt;command &lt;/span&gt;line interface.

It doesn&lt;span class="s1"&gt;'t look like you'&lt;/span&gt;re logged &lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; Try &lt;span class="s2"&gt;"flyctl auth signup"&lt;/span&gt; to create an account,
or &lt;span class="s2"&gt;"flyctl auth login"&lt;/span&gt; to log &lt;span class="k"&gt;in &lt;/span&gt;to an existing account.

flyctl does a lot of stuff! Don&lt;span class="s1"&gt;'t panic, it'&lt;/span&gt;s easy to get started:

  &lt;span class="k"&gt;*&lt;/span&gt; fly launch: launch a new application &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"fly help launch"&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;more&lt;span class="o"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;*&lt;/span&gt; fly apps: create and manage apps &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"fly help apps"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;*&lt;/span&gt; fly machines: create and manage individual Fly.io machines &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"fly help machines"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;*&lt;/span&gt; fly postgres: create and manage Postgres databases &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"fly help postgres"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;*&lt;/span&gt; fly redis: create and manage Redis databases &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"fly help redis"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;*&lt;/span&gt; fly &lt;span class="nb"&gt;help&lt;/span&gt;: &lt;span class="k"&gt;for &lt;/span&gt;more &lt;span class="nb"&gt;help&lt;/span&gt;, and a &lt;span class="nb"&gt;complete &lt;/span&gt;list of commands.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I add a Dockerfile with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@julia50b1000001:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls
&lt;/span&gt;Dockerfile dist mambaforge

&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@julia50b1000001:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;Dockerfile 

FROM pierrezemb/gostatic
COPY ./dist/ /srv/http/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
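&lt;p&gt;Before pushing anything to Fly.io, the image can be tried locally (a sketch, assuming Docker is available on the machine; &lt;strong&gt;&lt;em&gt;pierrezemb/gostatic&lt;/em&gt;&lt;/strong&gt; listens on port 8043 by default):&lt;/p&gt;

```shell
# Build the image from the Dockerfile above and run it locally,
# mapping gostatic's default port 8043 to the host
docker build -t jupyterlite-static .
docker run --rm -p 8043:8043 jupyterlite-static
# the site is then reachable at http://localhost:8043/index.html
```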



&lt;p&gt;Generate the fly.toml file that will drive the deployment to the platform…&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@julia50b1000001:~&lt;span class="nv"&gt;$ &lt;/span&gt;flyctl launch

Creating app &lt;span class="k"&gt;in&lt;/span&gt; /home/ubuntu
Scanning &lt;span class="nb"&gt;source &lt;/span&gt;code
Detected a Dockerfile app
? Choose an app name &lt;span class="o"&gt;(&lt;/span&gt;leave blank to generate one&lt;span class="o"&gt;)&lt;/span&gt;: 
automatically selected personal organization: Karim
Some regions require a paid plan &lt;span class="o"&gt;(&lt;/span&gt;fra, maa&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
See https://fly.io/plans to &lt;span class="nb"&gt;set &lt;/span&gt;up a plan.

? Choose a region &lt;span class="k"&gt;for &lt;/span&gt;deployment: Paris, France &lt;span class="o"&gt;(&lt;/span&gt;cdg&lt;span class="o"&gt;)&lt;/span&gt;
Created app old-sunset-3102 &lt;span class="k"&gt;in &lt;/span&gt;organization personal
Admin URL: https://fly.io/apps/old-sunset-3102
Hostname: old-sunset-3102.fly.dev
? Would you like to &lt;span class="nb"&gt;set &lt;/span&gt;up a Postgresql database now? No
? Would you like to &lt;span class="nb"&gt;set &lt;/span&gt;up an Upstash Redis database now? No
? Create .dockerignore from 4 .gitignore files? No
Wrote config file fly.toml
? Would you like to deploy now? No
Your app is ready! Deploy with &lt;span class="sb"&gt;`&lt;/span&gt;flyctl deploy&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is then modified to account for the fact that the container will listen on port 8043 by default:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="err"&gt;(base)&lt;/span&gt; &lt;span class="err"&gt;ubuntu@julia&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="err"&gt;b&lt;/span&gt;&lt;span class="mi"&gt;1000001&lt;/span&gt;&lt;span class="err"&gt;:~$&lt;/span&gt; &lt;span class="err"&gt;cat&lt;/span&gt; &lt;span class="err"&gt;fly.toml&lt;/span&gt; 

&lt;span class="c"&gt;# fly.toml file generated for old-sunset-3102 on 2023-03-01T22:42:29Z&lt;/span&gt;

&lt;span class="py"&gt;app&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"old-sunset-3102"&lt;/span&gt;
&lt;span class="py"&gt;kill_signal&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"SIGINT"&lt;/span&gt;
&lt;span class="py"&gt;kill_timeout&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;
&lt;span class="py"&gt;processes&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

&lt;span class="nn"&gt;[env]&lt;/span&gt;

&lt;span class="nn"&gt;[experimental]&lt;/span&gt;
  &lt;span class="py"&gt;auto_rollback&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;span class="nn"&gt;[[services]]&lt;/span&gt;
  &lt;span class="py"&gt;http_checks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
  &lt;span class="py"&gt;internal_port&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8043&lt;/span&gt;
  &lt;span class="py"&gt;processes&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;["app"]&lt;/span&gt;
  &lt;span class="py"&gt;protocol&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"tcp"&lt;/span&gt;
  &lt;span class="py"&gt;script_checks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
  &lt;span class="nn"&gt;[services.concurrency]&lt;/span&gt;
    &lt;span class="py"&gt;hard_limit&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;25&lt;/span&gt;
    &lt;span class="py"&gt;soft_limit&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;
    &lt;span class="py"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"connections"&lt;/span&gt;

  &lt;span class="nn"&gt;[[services.ports]]&lt;/span&gt;
    &lt;span class="py"&gt;force_https&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="py"&gt;handlers&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;["http"]&lt;/span&gt;
    &lt;span class="py"&gt;port&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;

  &lt;span class="nn"&gt;[[services.ports]]&lt;/span&gt;
    &lt;span class="py"&gt;handlers&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"tls"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"http"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="py"&gt;port&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;443&lt;/span&gt;

  &lt;span class="nn"&gt;[[services.tcp_checks]]&lt;/span&gt;
    &lt;span class="py"&gt;grace_period&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"1s"&lt;/span&gt;
    &lt;span class="py"&gt;interval&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"15s"&lt;/span&gt;
    &lt;span class="py"&gt;restart_limit&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="py"&gt;timeout&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"2s"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Launch the deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;(&lt;/span&gt;base&lt;span class="o"&gt;)&lt;/span&gt; ubuntu@julia50b1000001:~&lt;span class="nv"&gt;$ &lt;/span&gt;flyctl deploy
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Verifying app config
&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Verified app config
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Building image
Remote builder fly-builder-small-water-3057 ready
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Creating build context
&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Creating build context &lt;span class="k"&gt;done&lt;/span&gt;
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Building image with Docker
&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; docker host: 20.10.12 linux x86_64
&lt;span class="o"&gt;[&lt;/span&gt;+] Building 54.5s &lt;span class="o"&gt;(&lt;/span&gt;0/1&lt;span class="o"&gt;)&lt;/span&gt;                                                                                                                                                           
&lt;span class="o"&gt;[&lt;/span&gt;+] Building 84.2s &lt;span class="o"&gt;(&lt;/span&gt;0/1&lt;span class="o"&gt;)&lt;/span&gt;                                                                                                                                                           
&lt;span class="o"&gt;[&lt;/span&gt;+] Building 16.3s &lt;span class="o"&gt;(&lt;/span&gt;6/6&lt;span class="o"&gt;)&lt;/span&gt; FINISHED                                                                                                                                                  
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;internal] load remote build context 0.0s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; copy /context / 11.6s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;internal] load metadata &lt;span class="k"&gt;for &lt;/span&gt;docker.io/pierrezemb/gostatic:latest 0.7s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;1/2] FROM docker.io/pierrezemb/gostatic@sha256:7e5718f98f2172f7c8dffd152ef0b203873ba889c8d838b2e730484fc71f6acd 0.1s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; resolve docker.io/pierrezemb/gostatic@sha256:7e5718f98f2172f7c8dffd152ef0b203873ba889c8d838b2e730484fc71f6acd 0.0s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; extracting sha256:0cf3c901807f7df57d792cd4a926ac2eb4078eb337750316dbde44bc7e7acd83 0.1s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; sha256:0cf3c901807f7df57d792cd4a926ac2eb4078eb337750316dbde44bc7e7acd83 1.88MB / 1.88MB 0.0s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; sha256:7e5718f98f2172f7c8dffd152ef0b203873ba889c8d838b2e730484fc71f6acd 2.67kB / 2.67kB 0.0s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; sha256:f846dcfe68518bd5a624acb44abee440deedfca894e641b7947ba494f6e0f18a 527B / 527B 0.0s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; sha256:37dd3994986381311fdfc59ea190c8e60c6c8c5a38f3cdec3419ee8b333c9fa9 915B / 915B 0.0s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;2/2] COPY ./dist/ /srv/http/ 0.3s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; exporting to image 0.5s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; exporting layers 0.5s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; writing image sha256:29bf836545e5b679b481077dc137a880d90baa8ca49839b603e5c4a2966133ea 0.0s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; naming to registry.fly.io/old-sunset-3102:deployment-01GTFNNV4ZS5S210YX9MPXXCNW 0.0s
&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Building image &lt;span class="k"&gt;done&lt;/span&gt;
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Pushing image to fly
The push refers to repository &lt;span class="o"&gt;[&lt;/span&gt;registry.fly.io/old-sunset-3102]
688585856d25: Pushed 
f347b3d1982a: Pushed 
deployment-01GTFNNV4ZS5S210YX9MPXXCNW: digest: sha256:0c085f048ab642afe7481d6b85c4444b6ac5d3f553717a9527fa53170de9853f size: 740
&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Pushing image &lt;span class="k"&gt;done
&lt;/span&gt;image: registry.fly.io/old-sunset-3102:deployment-01GTFNNV4ZS5S210YX9MPXXCNW
image size: 91 MB
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Creating release
&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; release v2 created

&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; You can detach the terminal anytime without stopping the deployment
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Monitoring deployment
Logs: https://fly.io/apps/old-sunset-3102/monitoring

 1 desired, 1 placed, 1 healthy, 0 unhealthy &lt;span class="o"&gt;[&lt;/span&gt;health checks: 1 total, 1 passing]
&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; v0 deployed successfully
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This v0 release is up and running:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---URE9-m9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ah7XhfMgrqZXJZiM0dhSsvQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---URE9-m9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ah7XhfMgrqZXJZiM0dhSsvQ.png" alt="" width="880" height="626"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PU5-MfK9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AuHvyG23r_ylMybwZVf1gXg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PU5-MfK9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AuHvyG23r_ylMybwZVf1gXg.png" alt="" width="880" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C3EcKZKu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AM7l3KESZc0TDpNKXtfWDsw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C3EcKZKu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AM7l3KESZc0TDpNKXtfWDsw.png" alt="" width="880" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Along with its monitoring:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v0BSHesR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AwypC3_KRegnLpnA8eKpX2w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v0BSHesR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AwypC3_KRegnLpnA8eKpX2w.png" alt="" width="880" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qQJ_h5Zw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AHdvesPhUvvDpQOcYOy5N_A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qQJ_h5Zw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AHdvesPhUvvDpQOcYOy5N_A.png" alt="" width="880" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LJ8JbQVF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A60M6Pu-boRuTTO0K09L5vQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LJ8JbQVF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A60M6Pu-boRuTTO0K09L5vQ.png" alt="" width="880" height="1463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the kernels are fully functional:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SiqAmeRQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AL-p-kve1f4ZfbFJv4P-cCg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SiqAmeRQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AL-p-kve1f4ZfbFJv4P-cCg.png" alt="" width="880" height="607"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CKmz3qC0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AfQIrN2gEVzCBVRIGE9WQ_w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CKmz3qC0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AfQIrN2gEVzCBVRIGE9WQ_w.png" alt="" width="880" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c9p4X7yd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ar06NivxvvizBZRy_mNMBfw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c9p4X7yd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ar06NivxvvizBZRy_mNMBfw.png" alt="" width="880" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--40SzACJ5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Al1NncoF2uNePSEB5PkMZPg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--40SzACJ5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Al1NncoF2uNePSEB5PkMZPg.png" alt="" width="880" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0lzgKkQm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AMGJBUiVJDXUy-prvYfw42A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0lzgKkQm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AMGJBUiVJDXUy-prvYfw42A.png" alt="" width="880" height="607"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, both the volume and its associated container are deleted at the end of this experiment …&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lybk58kv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Akel6itt-T5Cck9uyi5YnNw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lybk58kv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Akel6itt-T5Cck9uyi5YnNw.png" alt="" width="880" height="586"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tzI3gIdB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AoIax6aUsEiMyEbmAXNRH_w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tzI3gIdB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AoIax6aUsEiMyEbmAXNRH_w.png" alt="" width="880" height="586"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To go further with JupyterLite, see these supported languages and frameworks:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://fly.io/docs/languages-and-frameworks/"&gt;Language &amp;amp; Framework Guides&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that at Fly.io, orchestrators tie together fleets of worker servers and expose an API for running jobs on those fleets with HashiCorp Nomad, mechanisms detailed in this article:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://fly.io/blog/carving-the-scheduler-out-of-our-orchestrator/"&gt;Carving The Scheduler Out Of Our Orchestrator&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jOzGfFXZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AHw7dO99gdVDO_vJa" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jOzGfFXZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2AHw7dO99gdVDO_vJa" alt="" width="880" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Up6g6P77--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/685/0%2AFAptf0oPGfd4W-rS.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Up6g6P77--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/685/0%2AFAptf0oPGfd4W-rS.jpg" alt="" width="685" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To be continued!&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>docker</category>
      <category>jupyter</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Remplacer MetalLB par Cilium BGP pour la mise en œuvre d’un équilibreur de charge dans Kubernetes …</title>
      <dc:creator>Karim</dc:creator>
      <pubDate>Wed, 01 Mar 2023 21:32:45 +0000</pubDate>
      <link>https://forem.com/deep75/remplacer-metallb-par-cilium-bgp-pour-la-mise-en-oeuvre-dun-equilibreur-de-charge-dans-kubernetes--34li</link>
      <guid>https://forem.com/deep75/remplacer-metallb-par-cilium-bgp-pour-la-mise-en-oeuvre-dun-equilibreur-de-charge-dans-kubernetes--34li</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ru634Yau--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Az6LOY0mb7v8vmlYxfVGWdA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ru634Yau--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Az6LOY0mb7v8vmlYxfVGWdA.png" alt="" width="880" height="560"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;a href="https://cilium.io/"&gt;https://cilium.io/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Cilium is open source software for transparently securing the network connectivity between application services deployed with Linux container management platforms such as Docker and Kubernetes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;At the foundation of Cilium is a new Linux kernel technology called eBPF, which enables powerful security visibility and control logic to be inserted dynamically within Linux itself. Because eBPF runs inside the Linux kernel, Cilium security policies can be applied and updated without any change to the application code or container configuration.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://docs.cilium.io/en/stable/overview/intro/"&gt;Introduction to Cilium &amp;amp; Hubble - Cilium 1.13.0 documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FG2el9uN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/770/0%2AKsmSIeIsE--n3pUT.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FG2el9uN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/770/0%2AKsmSIeIsE--n3pUT.png" alt="" width="770" height="890"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://isovalent.com/blog/post/how-the-hive-came-to-bee-a-story-of-ebpf-and-cilium-so-far/"&gt;"How the Hive Came To Bee" - a story of eBPF and Cilium so far - Isovalent&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;BGP provides a way to advertise routes using traditional networking protocols so that services managed by Cilium are reachable from outside the cluster. Here we will see how to configure Cilium’s native support for announcing load-balancer service IPs and a Kubernetes node’s Pod CIDR range over BGP.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It builds on MetalLB’s simple and efficient IP allocation implementation and its minimal BGP protocol support to do so. The configuration for Cilium is the same as MetalLB’s. This feature is still in beta:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.cilium.io/en/stable/network/bgp/"&gt;BGP (beta) - Cilium 1.13.0 documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://deep75.medium.com/d%C3%A9ploiement-dun-cluster-kubernetes-sur-du-baremetal-dans-scaleway-avec-kontena-pharos-metallb-et-7ce6a5144d16"&gt;Déploiement d’un cluster Kubernetes sur du BareMetal dans Scaleway avec Kontena Pharos, MetalLB et…&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://metallb.universe.tf/"&gt;MetalLB&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More precisely, when a service of type LoadBalancer is created, Cilium allocates an IP for it from a specified pool. Once the IP is allocated, the agents announce it over BGP according to the service’s ExternalTrafficPolicy. Let’s put this into practice by launching an instance on Linode (now Akamai) with Ubuntu 22.04 LTS:&lt;/p&gt;
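
&lt;p&gt;As a sketch of what this looks like in practice, Cilium’s beta BGP support reads a MetalLB-style configuration from a ConfigMap named bgp-config in kube-system; the peer address, ASNs and address pool below are placeholder assumptions, not values from this lab:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Sketch only: peer address, ASNs and the address pool are placeholders.
cat &amp;gt; bgp-config.yaml &amp;lt;&amp;lt;'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: bgp-config
  namespace: kube-system
data:
  config.yaml: |
    peers:
      - peer-address: 10.0.0.1
        peer-asn: 64512
        my-asn: 64512
    address-pools:
      - name: default
        protocol: bgp
        addresses:
          - 172.16.10.0/24
EOF
# kubectl apply -f bgp-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this ConfigMap in place, the announcements themselves are switched on through the Helm values bgp.enabled and bgp.announce.loadbalancerIP, as described in the BGP (beta) documentation linked above.&lt;/p&gt;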

&lt;p&gt;&lt;a href="https://www.lemondeinformatique.fr/actualites/lire-akamai-prend-le-virage-du-cloud-computing-89536.html"&gt;Akamai prend le virage du cloud computing - Le Monde Informatique&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;linode-cli linodes create &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--image&lt;/span&gt; &lt;span class="s1"&gt;'linode/ubuntu22.04'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; eu-central &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--type&lt;/span&gt; g7-highmem-1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--label&lt;/span&gt; lxdserver&lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--authorized_users&lt;/span&gt; &lt;span class="nv"&gt;$USER&lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--booted&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--backups_enabled&lt;/span&gt; &lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--private_ip&lt;/span&gt; &lt;span class="nb"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://www.linode.com/docs/products/tools/cli/get-started/"&gt;Get Started&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gb46goRp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AcCsKgc0t-GgXFVtBvWnyxA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gb46goRp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AcCsKgc0t-GgXFVtBvWnyxA.png" alt="" width="880" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Installing the LXD hypervisor on this Ubuntu 22.04 LTS instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@localhost:~# snap &lt;span class="nb"&gt;install &lt;/span&gt;lxd

2023-03-01T19:27:40Z INFO Waiting &lt;span class="k"&gt;for &lt;/span&gt;automatic snapd restart...
lxd 5.11-ad0b61e from Canonical✓ installed

root@localhost:~# lxd init

Would you like to use LXD clustering? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;no]: 
Do you want to configure a new storage pool? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;: 
Name of the new storage pool &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;default]: 
Name of the storage backend to use &lt;span class="o"&gt;(&lt;/span&gt;lvm, zfs, btrfs, ceph, &lt;span class="nb"&gt;dir&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;zfs]: &lt;span class="nb"&gt;dir
&lt;/span&gt;Would you like to connect to a MAAS server? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;no]: 
Would you like to create a new &lt;span class="nb"&gt;local &lt;/span&gt;network bridge? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;: 
What should the new bridge be called? &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;lxdbr0]: 
What IPv4 address should be used? &lt;span class="o"&gt;(&lt;/span&gt;CIDR subnet notation, “auto” or “none”&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;auto]: 
What IPv6 address should be used? &lt;span class="o"&gt;(&lt;/span&gt;CIDR subnet notation, “auto” or “none”&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;auto]: none
Would you like the LXD server to be available over the network? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;no]: 
Would you like stale cached images to be updated automatically? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;: 
Would you like a YAML &lt;span class="s2"&gt;"lxd init"&lt;/span&gt; preseed to be printed? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt;/no&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;no]: 
root@localhost:~# lxc &lt;span class="nb"&gt;ls
&lt;/span&gt;To start your first container, try: lxc launch ubuntu:22.04
Or &lt;span class="k"&gt;for &lt;/span&gt;a virtual machine: lxc launch ubuntu:22.04 &lt;span class="nt"&gt;--vm&lt;/span&gt;

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
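
&lt;p&gt;For repeatable setups, the same answers can be fed to LXD non-interactively through a preseed file; this sketch mirrors the choices made above (dir storage backend, lxdbr0 bridge with IPv6 disabled):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Preseed mirroring the interactive answers above (a sketch)
cat &amp;gt; lxd-preseed.yaml &amp;lt;&amp;lt;'EOF'
storage_pools:
- name: default
  driver: dir
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: none
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
EOF
# lxd init --preseed &amp;lt; lxd-preseed.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;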



&lt;p&gt;Configuring the default profile in LXD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@localhost:~# ssh-keygen

Generating public/private rsa key pair.

Enter file &lt;span class="k"&gt;in &lt;/span&gt;which to save the key &lt;span class="o"&gt;(&lt;/span&gt;/root/.ssh/id_rsa&lt;span class="o"&gt;)&lt;/span&gt;: Enter passphrase &lt;span class="o"&gt;(&lt;/span&gt;empty &lt;span class="k"&gt;for &lt;/span&gt;no passphrase&lt;span class="o"&gt;)&lt;/span&gt;: 
Enter same passphrase again: 
Your identification has been saved &lt;span class="k"&gt;in&lt;/span&gt; /root/.ssh/id_rsa
Your public key has been saved &lt;span class="k"&gt;in&lt;/span&gt; /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:Nk0YMZ5JM5i5EMOjZ/YTJ+GiWNTOhv/9jhLtVAz7f8k root@localhost
The key&lt;span class="s1"&gt;'s randomart image is:
+---[RSA 3072]----+
| oo. +B. |
| . =.=o.O |
| . = + o=+. |
| + O = ooo |
| o B o =So. |
|. . . +.o.. |
| . * . . . |
| o o. . E |
| ..oo . |
+----[SHA256]-----+
root@localhost:~# lxc profile show default &amp;gt; lxd-profile-default.yaml

root@localhost:~# cat lxd-profile-default.yaml 
config:
  user.user-data: |
    #cloud-config
    ssh_authorized_keys:
      - @@SSHPUB@@
  environment.http_proxy: ""
  user.network_mode: ""
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by: []

root@localhost:~# sed -ri "s'&lt;/span&gt;@@SSHPUB@@&lt;span class="s1"&gt;'$(cat ~/.ssh/id_rsa.pub)'&lt;/span&gt;&lt;span class="s2"&gt;" lxd-profile-default.yaml

root@localhost:~# lxc profile edit default &amp;lt; lxd-profile-default.yaml
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
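
&lt;p&gt;The substitution step can be sketched in a self-contained form (the file name and the placeholder key below are illustrative, not taken from this session):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Write a minimal profile template containing the @@SSHPUB@@ placeholder
printf 'ssh_authorized_keys:\n  - @@SSHPUB@@\n' &amp;gt; profile-template.yaml

# Placeholder key; in the real run this would come from: cat ~/.ssh/id_rsa.pub
pubkey='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDexample root@localhost'

# Using '|' as the sed delimiter avoids clashing with the '/' characters
# that appear inside base64-encoded key material
sed -i "s|@@SSHPUB@@|${pubkey}|" profile-template.yaml

# Feed the result back to LXD:
# lxc profile edit default &amp;lt; profile-template.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;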



&lt;p&gt;and a dedicated profile for installing Kubernetes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@localhost:~# lxc profile create k8s

Profile k8s created
root@localhost:~# wget https://raw.githubusercontent.com/ubuntu/microk8s/master/tests/lxc/microk8s.profile &lt;span class="nt"&gt;-O&lt;/span&gt; microk8s.profile
Resolving raw.githubusercontent.com &lt;span class="o"&gt;(&lt;/span&gt;raw.githubusercontent.com&lt;span class="o"&gt;)&lt;/span&gt;... 2606:50c0:8000::154, 2606:50c0:8001::154, 2606:50c0:8002::154, ...
Connecting to raw.githubusercontent.com &lt;span class="o"&gt;(&lt;/span&gt;raw.githubusercontent.com&lt;span class="o"&gt;)&lt;/span&gt;|2606:50c0:8000::154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 816 &lt;span class="o"&gt;[&lt;/span&gt;text/plain]
Saving to: ‘microk8s.profile’

microk8s.profile 100%[&lt;span class="o"&gt;=============================================================================================&amp;gt;]&lt;/span&gt; 816 &lt;span class="nt"&gt;--&lt;/span&gt;.-KB/s &lt;span class="k"&gt;in &lt;/span&gt;0s      

root@localhost:~# &lt;span class="nb"&gt;cat &lt;/span&gt;microk8s.profile | lxc profile edit k8s

root@localhost:~# &lt;span class="nb"&gt;rm &lt;/span&gt;microk8s.profile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Launching three LXC instances running Ubuntu 22.04 LTS with this k8s profile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@localhost:~# &lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;1..3&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;lxc launch &lt;span class="nt"&gt;-p&lt;/span&gt; default &lt;span class="nt"&gt;-p&lt;/span&gt; k8s ubuntu:22.04 k3s&lt;span class="nv"&gt;$i&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done

&lt;/span&gt;Creating k3s1
Starting k3s1                               
Creating k3s2
Starting k3s2                               
Creating k3s3
Starting k3s3    

root@localhost:~# lxc &lt;span class="nb"&gt;ls&lt;/span&gt;

+------+---------+---------------------+------+-----------+-----------+
| NAME |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+---------------------+------+-----------+-----------+
| k3s1 | RUNNING | 10.26.23.163 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt; |      | CONTAINER | 0         |
+------+---------+---------------------+------+-----------+-----------+
| k3s2 | RUNNING | 10.26.23.75 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt;  |      | CONTAINER | 0         |
+------+---------+---------------------+------+-----------+-----------+
| k3s3 | RUNNING | 10.26.23.87 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt;  |      | CONTAINER | 0         |
+------+---------+---------------------+------+-----------+-----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I install a k3s cluster following the recommendations in the documentation provided by Cilium.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The first step is indeed to install a K3s master node, taking care to disable the default CNI plugin and the built-in network policy enforcer:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://docs.cilium.io/en/v1.13/installation/k3s/"&gt;Installation Using K3s - Cilium 1.13.0 documentation&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@localhost:~# ssh ubuntu@10.26.23.163

Warning: Permanently added &lt;span class="s1"&gt;'10.26.23.163'&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;ED25519&lt;span class="o"&gt;)&lt;/span&gt; to the list of known hosts.

The programs included with the Ubuntu system are free software&lt;span class="p"&gt;;&lt;/span&gt;
the exact distribution terms &lt;span class="k"&gt;for &lt;/span&gt;each program are described &lt;span class="k"&gt;in &lt;/span&gt;the
individual files &lt;span class="k"&gt;in&lt;/span&gt; /usr/share/doc/&lt;span class="k"&gt;*&lt;/span&gt;/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a &lt;span class="nb"&gt;command &lt;/span&gt;as administrator &lt;span class="o"&gt;(&lt;/span&gt;user &lt;span class="s2"&gt;"root"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;, use &lt;span class="s2"&gt;"sudo &amp;lt;command&amp;gt;"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
See &lt;span class="s2"&gt;"man sudo_root"&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;details.

ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-sfL&lt;/span&gt; https://get.k3s.io | &lt;span class="nv"&gt;INSTALL_K3S_EXEC&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'--flannel-backend=none --disable-network-policy'&lt;/span&gt; sh -

&lt;span class="o"&gt;[&lt;/span&gt;INFO] Finding release &lt;span class="k"&gt;for &lt;/span&gt;channel stable
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Using v1.25.6+k3s1 as release
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Downloading &lt;span class="nb"&gt;hash &lt;/span&gt;https://github.com/k3s-io/k3s/releases/download/v1.25.6+k3s1/sha256sum-amd64.txt
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.25.6+k3s1/k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Verifying binary download
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Installing k3s to /usr/local/bin/k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Skipping installation of SELinux RPM
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating /usr/local/bin/kubectl symlink to k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating /usr/local/bin/crictl symlink to k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating /usr/local/bin/ctr symlink to k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating killall script /usr/local/bin/k3s-killall.sh
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
&lt;span class="o"&gt;[&lt;/span&gt;INFO] &lt;span class="nb"&gt;env&lt;/span&gt;: Creating environment file /etc/systemd/system/k3s.service.env
&lt;span class="o"&gt;[&lt;/span&gt;INFO] systemd: Creating service file /etc/systemd/system/k3s.service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
&lt;span class="o"&gt;[&lt;/span&gt;INFO] systemd: Starting k3s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After retrieving the &lt;strong&gt;&lt;em&gt;node-token&lt;/em&gt;&lt;/strong&gt;, installing the two workers into this k3s cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@localhost:~# ssh ubuntu@10.26.23.75
Warning: Permanently added &lt;span class="s1"&gt;'10.26.23.75'&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;ED25519&lt;span class="o"&gt;)&lt;/span&gt; to the list of known hosts.

The programs included with the Ubuntu system are free software&lt;span class="p"&gt;;&lt;/span&gt;
the exact distribution terms &lt;span class="k"&gt;for &lt;/span&gt;each program are described &lt;span class="k"&gt;in &lt;/span&gt;the
individual files &lt;span class="k"&gt;in&lt;/span&gt; /usr/share/doc/&lt;span class="k"&gt;*&lt;/span&gt;/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a &lt;span class="nb"&gt;command &lt;/span&gt;as administrator &lt;span class="o"&gt;(&lt;/span&gt;user &lt;span class="s2"&gt;"root"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;, use &lt;span class="s2"&gt;"sudo &amp;lt;command&amp;gt;"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
See &lt;span class="s2"&gt;"man sudo_root"&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;details.

ubuntu@k3s2:~&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-sfL&lt;/span&gt; https://get.k3s.io | &lt;span class="nv"&gt;K3S_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'https://10.26.23.163:6443'&lt;/span&gt; &lt;span class="nv"&gt;K3S_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;K100df339c075cfa287710648ed7ba381667496813d04b7603150856c8c673cc07b::server:2300bb645db019e47caaa971f0f8460b sh -

&lt;span class="o"&gt;[&lt;/span&gt;INFO] Finding release &lt;span class="k"&gt;for &lt;/span&gt;channel stable
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Using v1.25.6+k3s1 as release
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Downloading &lt;span class="nb"&gt;hash &lt;/span&gt;https://github.com/k3s-io/k3s/releases/download/v1.25.6+k3s1/sha256sum-amd64.txt
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.25.6+k3s1/k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Verifying binary download
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Installing k3s to /usr/local/bin/k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Skipping installation of SELinux RPM
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating /usr/local/bin/kubectl symlink to k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating /usr/local/bin/crictl symlink to k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating /usr/local/bin/ctr symlink to k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating killall script /usr/local/bin/k3s-killall.sh
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
&lt;span class="o"&gt;[&lt;/span&gt;INFO] &lt;span class="nb"&gt;env&lt;/span&gt;: Creating environment file /etc/systemd/system/k3s-agent.service.env
&lt;span class="o"&gt;[&lt;/span&gt;INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
&lt;span class="o"&gt;[&lt;/span&gt;INFO] systemd: Starting k3s-agent
ubuntu@k3s2:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;exit
logout
&lt;/span&gt;Connection to 10.26.23.75 closed.

root@localhost:~# ssh ubuntu@10.26.23.87
Warning: Permanently added &lt;span class="s1"&gt;'10.26.23.87'&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;ED25519&lt;span class="o"&gt;)&lt;/span&gt; to the list of known hosts.

The programs included with the Ubuntu system are free software&lt;span class="p"&gt;;&lt;/span&gt;
the exact distribution terms &lt;span class="k"&gt;for &lt;/span&gt;each program are described &lt;span class="k"&gt;in &lt;/span&gt;the
individual files &lt;span class="k"&gt;in&lt;/span&gt; /usr/share/doc/&lt;span class="k"&gt;*&lt;/span&gt;/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a &lt;span class="nb"&gt;command &lt;/span&gt;as administrator &lt;span class="o"&gt;(&lt;/span&gt;user &lt;span class="s2"&gt;"root"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;, use &lt;span class="s2"&gt;"sudo &amp;lt;command&amp;gt;"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
See &lt;span class="s2"&gt;"man sudo_root"&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;details.

ubuntu@k3s3:~&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-sfL&lt;/span&gt; https://get.k3s.io | &lt;span class="nv"&gt;K3S_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'https://10.26.23.163:6443'&lt;/span&gt; &lt;span class="nv"&gt;K3S_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;K100df339c075cfa287710648ed7ba381667496813d04b7603150856c8c673cc07b::server:2300bb645db019e47caaa971f0f8460b sh -

&lt;span class="o"&gt;[&lt;/span&gt;INFO] Finding release &lt;span class="k"&gt;for &lt;/span&gt;channel stable
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Using v1.25.6+k3s1 as release
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Downloading &lt;span class="nb"&gt;hash &lt;/span&gt;https://github.com/k3s-io/k3s/releases/download/v1.25.6+k3s1/sha256sum-amd64.txt
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.25.6+k3s1/k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Verifying binary download
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Installing k3s to /usr/local/bin/k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Skipping installation of SELinux RPM
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating /usr/local/bin/kubectl symlink to k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating /usr/local/bin/crictl symlink to k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating /usr/local/bin/ctr symlink to k3s
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating killall script /usr/local/bin/k3s-killall.sh
&lt;span class="o"&gt;[&lt;/span&gt;INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
&lt;span class="o"&gt;[&lt;/span&gt;INFO] &lt;span class="nb"&gt;env&lt;/span&gt;: Creating environment file /etc/systemd/system/k3s-agent.service.env
&lt;span class="o"&gt;[&lt;/span&gt;INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
&lt;span class="o"&gt;[&lt;/span&gt;INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
&lt;span class="o"&gt;[&lt;/span&gt;INFO] systemd: Starting k3s-agent

ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; .kube &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo cp&lt;/span&gt; /etc/rancher/k3s/k3s.yaml .kube/config &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; &lt;span class="nv"&gt;$USER&lt;/span&gt; .kube/&lt;span class="k"&gt;*&lt;/span&gt;

ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-LO&lt;/span&gt; https://storage.googleapis.com/kubernetes-release/release/v1.25.6/bin/linux/amd64/kubectl &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chmod&lt;/span&gt; +x kubectl &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo rm&lt;/span&gt; /usr/local/bin/kubectl &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo mv &lt;/span&gt;kubectl /usr/bin

ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl cluster-info

Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

To further debug and diagnose cluster problems, use &lt;span class="s1"&gt;'kubectl cluster-info dump'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;

ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3s2 NotReady &amp;lt;none&amp;gt; 3m31s v1.25.6+k3s1 10.26.23.75 &amp;lt;none&amp;gt; Ubuntu 22.04.1 LTS 5.15.0-60-generic containerd://1.6.15-k3s1
k3s3 NotReady &amp;lt;none&amp;gt; 3m8s v1.25.6+k3s1 10.26.23.87 &amp;lt;none&amp;gt; Ubuntu 22.04.1 LTS 5.15.0-60-generic containerd://1.6.15-k3s1
k3s1 NotReady control-plane,master 7m19s v1.25.6+k3s1 10.26.23.163 &amp;lt;none&amp;gt; Ubuntu 22.04.1 LTS 5.15.0-60-generic containerd://1.6.15-k3s1

ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get po,svc &lt;span class="nt"&gt;-A&lt;/span&gt;
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/metrics-server-5f9f776df5-jzq2f 0/1 Pending 0 7m13s
kube-system pod/local-path-provisioner-79f67d76f8-tg2dj 0/1 Pending 0 7m13s
kube-system pod/coredns-597584b69b-4bg2h 0/1 Pending 0 7m13s
kube-system pod/helm-install-traefik-k9m7j 0/1 Pending 0 7m14s
kube-system pod/helm-install-traefik-crd-4vh8j 0/1 Pending 0 7m14s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt; AGE
default service/kubernetes ClusterIP 10.43.0.1 &amp;lt;none&amp;gt; 443/TCP 7m28s
kube-system service/kube-dns ClusterIP 10.43.0.10 &amp;lt;none&amp;gt; 53/UDP,53/TCP,9153/TCP 7m25s
kube-system service/metrics-server ClusterIP 10.43.252.18 &amp;lt;none&amp;gt; 443/TCP 7m24s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
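All three nodes report NotReady above, which is expected at this point: no CNI is running yet (Cilium is installed further down). A small sketch, assuming the default kubectl column layout reduced here to NAME and STATUS, for extracting the not-yet-Ready node names from such a listing:

```shell
# Filter the nodes that are not Ready from a captured listing.
# The sample mirrors the output above, trimmed to NAME and STATUS.
listing='k3s2 NotReady
k3s3 NotReady
k3s1 NotReady'
not_ready=$(printf '%s\n' "$listing" | awk '$2 != "Ready" {print $1}')
printf '%s\n' "$not_ready"
```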



&lt;p&gt;This is followed by the installation of Helm 3:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://helm.sh/docs/intro/install/"&gt;Installing Helm&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Downloading https://get.helm.sh/helm-v3.11.1-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to &lt;span class="nb"&gt;install &lt;/span&gt;helm into /usr/local/bin
helm installed into /usr/local/bin/helm
ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;chmod &lt;/span&gt;400 .kube/config
ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;helm &lt;span class="nb"&gt;ls
&lt;/span&gt;NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;followed by Cilium BGP (beta)…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.cilium.io/en/stable/network/bgp/"&gt;BGP (beta) - Cilium 1.13.0 documentation&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;helm repo add cilium https://helm.cilium.io/
&lt;span class="s2"&gt;"cilium"&lt;/span&gt; has been added to your repositories

ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;helm repo update
Hang tight &lt;span class="k"&gt;while &lt;/span&gt;we grab the latest from your chart repositories...
...Successfully got an update from the &lt;span class="s2"&gt;"cilium"&lt;/span&gt; chart repository
Update Complete. ⎈Happy Helming!⎈
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;BGP support is enabled by providing the BGP configuration in a ConfigMap and setting a few values with Helm; otherwise, BGP is disabled by default. The configuration here relies on the gateway provided by LXD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@localhost:~# ifconfig lxdbr0
lxdbr0: &lt;span class="nv"&gt;flags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4163&amp;lt;UP,BROADCAST,RUNNING,MULTICAST&amp;gt; mtu 1500
        inet 10.26.23.1 netmask 255.255.255.0 broadcast 0.0.0.0
        ether 00:16:3e:a3:ba:e8 txqueuelen 1000 &lt;span class="o"&gt;(&lt;/span&gt;Ethernet&lt;span class="o"&gt;)&lt;/span&gt;
        RX packets 8742 bytes 563385 &lt;span class="o"&gt;(&lt;/span&gt;563.3 KB&lt;span class="o"&gt;)&lt;/span&gt;
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 18343 bytes 264191495 &lt;span class="o"&gt;(&lt;/span&gt;264.1 MB&lt;span class="o"&gt;)&lt;/span&gt;
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

root@localhost:~# lxc &lt;span class="nb"&gt;ls&lt;/span&gt;
+------+---------+---------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+---------+---------------------+------+-----------+-----------+
| k3s1 | RUNNING | 10.26.23.163 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt; | | CONTAINER | 0 |
+------+---------+---------------------+------+-----------+-----------+
| k3s2 | RUNNING | 10.26.23.75 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt; | | CONTAINER | 0 |
+------+---------+---------------------+------+-----------+-----------+
| k3s3 | RUNNING | 10.26.23.87 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt; | | CONTAINER | 0 |
+------+---------+---------------------+------+-----------+-----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and on this range of free IP addresses:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iZxvrmMS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ALMl4yRG9YuXWoVOqBUlG6Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iZxvrmMS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ALMl4yRG9YuXWoVOqBUlG6Q.png" alt="" width="880" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5Brpqw6K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ag8To-qU2sJxVpM2A3B2Fqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5Brpqw6K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ag8To-qU2sJxVpM2A3B2Fqw.png" alt="" width="880" height="614"&gt;&lt;/a&gt;&lt;/p&gt;
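The ConfigMap that follows hands the pool 10.26.23.32/27 to BGP. As a quick sanity check on that choice, a /27 holds 32 addresses, so the pool runs from 10.26.23.32 to 10.26.23.63 and stays clear of the node addresses seen earlier. A sketch of the arithmetic (bash, assuming a clean /27 boundary on the last octet):

```shell
# Size and bounds of the 10.26.23.32/27 address pool.
prefix=27
first_octet4=32                  # last octet of 10.26.23.32
size=$((2 ** (32 - prefix)))     # 2^(32-27) = 32 addresses
last="10.26.23.$((first_octet4 + size - 1))"
echo "$size addresses, up to $last"
```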

&lt;p&gt;hence this YAML file for the configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;ubuntu@k3s1:~$ cat config.yaml&lt;/span&gt; 

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bgp-config&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;config.yaml&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;peers:&lt;/span&gt;
      &lt;span class="s"&gt;- peer-address: 10.26.23.1&lt;/span&gt;
        &lt;span class="s"&gt;peer-asn: 64512&lt;/span&gt;
        &lt;span class="s"&gt;my-asn: 64512&lt;/span&gt;
    &lt;span class="s"&gt;address-pools:&lt;/span&gt;
      &lt;span class="s"&gt;- name: default&lt;/span&gt;
        &lt;span class="s"&gt;protocol: bgp&lt;/span&gt;
        &lt;span class="s"&gt;addresses:&lt;/span&gt;
          &lt;span class="s"&gt;- 10.26.23.32/27&lt;/span&gt;

&lt;span class="s"&gt;ubuntu@k3s1:~$ kubectl apply -f config.yaml&lt;/span&gt; 
&lt;span class="s"&gt;configmap/bgp-config created&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
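Once applied, it can be worth reading the ConfigMap back to confirm it sits under the name and namespace the Cilium chart will look for (both tied to this setup):

```shell
# Read back the BGP peering and pool definition just applied
kubectl -n kube-system get configmap bgp-config -o yaml
# Or just the embedded config (note the escaped dot in the key name)
kubectl -n kube-system get configmap bgp-config -o jsonpath='{.data.config\.yaml}'
```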



&lt;p&gt;Installing the Cilium pods with Helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;cilium cilium/cilium &lt;span class="nt"&gt;--version&lt;/span&gt; 1.13.0 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; kube-system &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; bgp.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; bgp.announce.loadbalancerIP&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; bgp.announce.podCIDR&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true
&lt;/span&gt;NAME: cilium
LAST DEPLOYED: Wed Mar 1 20:05:18 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble.

Your release version is 1.13.0.

For any further &lt;span class="nb"&gt;help&lt;/span&gt;, visit https://docs.cilium.io/en/v1.13/gettinghelp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This installation is then upgraded to add Cilium Hubble (with the Hubble UI, which gives access to the graphical service map):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.cilium.io/en/v1.13/gettingstarted/hubble/"&gt;Service Map &amp;amp; Hubble UI - Cilium 1.13.0 documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://isovalent.com/blog/post/tutorial-tips-and-tricks-to-install-cilium/"&gt;Tutorial: Tips and Tricks to install Cilium - Isovalent&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;helm upgrade cilium cilium/cilium &lt;span class="nt"&gt;--version&lt;/span&gt; 1.13.0 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--namespace&lt;/span&gt; kube-system &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; bgp.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; bgp.announce.loadbalancerIP&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; bgp.announce.podCIDR&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; sctp.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; hubble.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; hubble.metrics.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{dns,drop,tcp,flow,icmp,http}"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; hubble.relay.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; hubble.ui.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; hubble.ui.service.type&lt;span class="o"&gt;=&lt;/span&gt;LoadBalancer &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; hubble.relay.service.type&lt;span class="o"&gt;=&lt;/span&gt;LoadBalancer
Release &lt;span class="s2"&gt;"cilium"&lt;/span&gt; has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Wed Mar 1 20:09:00 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.

Your release version is 1.13.0.

For any further &lt;span class="nb"&gt;help&lt;/span&gt;, visit https://docs.cilium.io/en/v1.13/gettinghelp

root@localhost:~# lxc &lt;span class="nb"&gt;ls&lt;/span&gt;
+------+---------+--------------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+---------+--------------------------+------+-----------+-----------+
| k3s1 | RUNNING | 10.26.23.163 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt; | | CONTAINER | 0 |
| | | 10.0.2.99 &lt;span class="o"&gt;(&lt;/span&gt;cilium_host&lt;span class="o"&gt;)&lt;/span&gt; | | | |
+------+---------+--------------------------+------+-----------+-----------+
| k3s2 | RUNNING | 10.26.23.75 &lt;span class="o"&gt;(&lt;/span&gt;eth0&lt;span class="o"&gt;)&lt;/span&gt; | | CONTAINER | 0 |
| | | 10.0.0.117 &lt;span class="o"&gt;(&lt;/span&gt;cilium_host&lt;span class="o"&gt;)&lt;/span&gt; | | | |
+------+---------+--------------------------+------+-----------+-----------+
| k3s3 | RUNNING | 10.0.1.16 &lt;span class="o"&gt;(&lt;/span&gt;cilium_host&lt;span class="o"&gt;)&lt;/span&gt; | | CONTAINER | 0 |
+------+---------+--------------------------+------+-----------+-----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
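Since hubble.ui.service.type and hubble.relay.service.type were both set to LoadBalancer above, each should receive an external address from the BGP pool. One way to read those back, assuming the chart's default service names hubble-ui and hubble-relay:

```shell
# List the Hubble services and read back the UI's external address
kubectl -n kube-system get svc hubble-ui hubble-relay
kubectl -n kube-system get svc hubble-ui -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```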



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TFwsZkNg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/373/0%2AIwQqHNenQKBU5cqB.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TFwsZkNg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/373/0%2AIwQqHNenQKBU5cqB.jpg" alt="" width="373" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From there, a test by deploying, once again, the perennial FranceConnect (FC) demo application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;ubuntu@k3s1:~$ cat &amp;lt;&amp;lt;EOF | kubectl apply -f -&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fcdemo3&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fcdemo3&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fcdemo3&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fcdemo3&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fcdemo3&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mcas/franceconnect-demo2:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fcdemo-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fcdemo3&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
&lt;span class="s"&gt;EOF&lt;/span&gt;
&lt;span class="s"&gt;deployment.apps/fcdemo3 created&lt;/span&gt;
&lt;span class="s"&gt;service/fcdemo-service created&lt;/span&gt;

&lt;span class="s"&gt;ubuntu@k3s1:~$ kubectl get po,svc&lt;/span&gt;
&lt;span class="s"&gt;NAME READY STATUS RESTARTS AGE&lt;/span&gt;
&lt;span class="s"&gt;pod/fcdemo3-6bff77544b-n8x5j 1/1 Running 0 56s&lt;/span&gt;
&lt;span class="s"&gt;pod/fcdemo3-6bff77544b-n7shv 1/1 Running 0 56s&lt;/span&gt;
&lt;span class="s"&gt;pod/fcdemo3-6bff77544b-z6s44 1/1 Running 0 56s&lt;/span&gt;
&lt;span class="s"&gt;pod/fcdemo3-6bff77544b-wmcrg 1/1 Running 0 56s&lt;/span&gt;

&lt;span class="s"&gt;NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE&lt;/span&gt;
&lt;span class="s"&gt;service/kubernetes ClusterIP 10.43.0.1 &amp;lt;none&amp;gt; 443/TCP 54m&lt;/span&gt;
&lt;span class="s"&gt;service/fcdemo-service LoadBalancer 10.43.212.198 10.26.23.35 80:30918/TCP 56s&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
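Before probing the address by hand, a couple of standard kubectl and curl checks can confirm that the rollout finished and that the BGP-announced LoadBalancer IP answers (a sketch; the 120 s timeout is arbitrary):

```shell
# Wait for the four replicas, then probe the service through its external IP
kubectl rollout status deployment/fcdemo3 --timeout=120s
lb_ip=$(kubectl get svc fcdemo-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s -o /dev/null -w '%{http_code}\n' "http://${lb_ip}/"
```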



&lt;p&gt;And at this stage it is indeed possible to reach the demo application at the IP address provided through Cilium BGP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;http http://10.26.23.35
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 2740
Content-Type: text/html&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;utf-8
Date: Wed, 01 Mar 2023 20:39:41 GMT
ETag: W/&lt;span class="s2"&gt;"ab4-v8wreE8WoQJ/5EqjkvRXgbacjz4"&lt;/span&gt;
X-Powered-By: Express
set-cookie: connect.sid&lt;span class="o"&gt;=&lt;/span&gt;s%3AiC15aGdDdBtHWW7zrIjqNiUw0Kq93-W6.WBQcxUEQIloR1IqQTlWXFRUOijoLNsdvAq4bVcSKOso&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;Path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/&lt;span class="p"&gt;;&lt;/span&gt; HttpOnly

&amp;lt;&lt;span class="o"&gt;!&lt;/span&gt;doctype html&amp;gt;
&amp;lt;html &lt;span class="nv"&gt;lang&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"en"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&amp;lt;&lt;span class="nb"&gt;head&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;meta &lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"UTF-8"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;meta &lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"viewport"&lt;/span&gt;
          &lt;span class="nv"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;meta http-equiv&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"X-UA-Compatible"&lt;/span&gt; &lt;span class="nv"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ie=edge"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;&lt;span class="nb"&gt;link &lt;/span&gt;&lt;span class="nv"&gt;rel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"stylesheet"&lt;/span&gt; &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.1/css/bulma.min.css"&lt;/span&gt; &lt;span class="nv"&gt;integrity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"sha256-zIG416V1ynj3Wgju/scU80KAEWOsO5rRLfVyRDuOv7Q="&lt;/span&gt; &lt;span class="nv"&gt;crossorigin&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"anonymous"&lt;/span&gt; /&amp;gt;
    &amp;lt;title&amp;gt;Démonstrateur Fournisseur de Service&amp;lt;/title&amp;gt;
&amp;lt;/head&amp;gt;

&amp;lt;body&amp;gt;
&amp;lt;nav &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"navbar"&lt;/span&gt; &lt;span class="nv"&gt;role&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"navigation"&lt;/span&gt; aria-label&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"main navigation"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"navbar-start"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"navbar-brand"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
            &amp;lt;a &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"navbar-item"&lt;/span&gt; &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
                &amp;lt;img &lt;span class="nv"&gt;src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/img/fc_logo_v2.png"&lt;/span&gt; &lt;span class="nv"&gt;alt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Démonstrateur Fournisseur de Service"&lt;/span&gt; &lt;span class="nv"&gt;height&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"28"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
            &amp;lt;/a&amp;gt;
        &amp;lt;/div&amp;gt;
        &amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/"&lt;/span&gt; &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"navbar-item"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
            Home
        &amp;lt;/a&amp;gt;
    &amp;lt;/div&amp;gt;
    &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"navbar-end"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"navbar-item"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;

                &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"buttons"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
                    &amp;lt;a &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"button is-light"&lt;/span&gt; &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/login"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;Se connecter&amp;lt;/a&amp;gt;
                &amp;lt;/div&amp;gt;

        &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
&amp;lt;/nav&amp;gt;

&amp;lt;section &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"hero is-info is-medium"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"hero-body"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"container"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
            &amp;lt;h1 &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"title"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
                Bienvenue sur le démonstrateur de fournisseur de service
            &amp;lt;/h1&amp;gt;
            &amp;lt;h2 &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"subtitle"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
                Cliquez sur &lt;span class="s2"&gt;"se connecter"&lt;/span&gt; pour vous connecter via &amp;lt;strong&amp;gt;FranceConnect&amp;lt;/strong&amp;gt;
            &amp;lt;/h2&amp;gt;
        &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
&amp;lt;/section&amp;gt;

&amp;lt;section &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"section is-small"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"container"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &amp;lt;h1 &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"title"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;Récupérer vos données via FranceConnect&amp;lt;/h1&amp;gt;

        &amp;lt;p&amp;gt;Pour récupérer vos données via &amp;lt;strong&amp;gt;FranceConnect&amp;lt;/strong&amp;gt; cliquez sur le bouton ci-dessous&amp;lt;/p&amp;gt;
    &amp;lt;/div&amp;gt;
&amp;lt;/section&amp;gt;
&amp;lt;section &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"section is-small"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"container has-text-centered"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &amp;lt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nt"&gt;--&lt;/span&gt; FC btn &lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/data"&lt;/span&gt; &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"button is-link"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;Récupérer mes données via FranceConnect&amp;lt;/a&amp;gt;
    &amp;lt;/div&amp;gt;
&amp;lt;/section&amp;gt;
&amp;lt;footer &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"footer custom-content"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"content has-text-centered"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &amp;lt;p&amp;gt;
            &amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://partenaires.franceconnect.gouv.fr/fcp/fournisseur-service"&lt;/span&gt;
               &lt;span class="nv"&gt;target&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"_blank"&lt;/span&gt;
               &lt;span class="nv"&gt;alt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"lien vers la documentation France Connect"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
                &amp;lt;strong&amp;gt;Documentation FranceConnect Partenaires&amp;lt;/strong&amp;gt;
            &amp;lt;/a&amp;gt;
        &amp;lt;/p&amp;gt;
    &amp;lt;/div&amp;gt;
&amp;lt;/footer&amp;gt;
&amp;lt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nt"&gt;--&lt;/span&gt; This script brings the FranceConnect tools modal which &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="s2"&gt;"disconnect"&lt;/span&gt;, &lt;span class="s2"&gt;"see connection history"&lt;/span&gt; and &lt;span class="s2"&gt;"see FC FAQ"&lt;/span&gt; features &lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&amp;lt;script &lt;span class="nv"&gt;src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://fcp.integ01.dev-franceconnect.fr/js/franceconnect.js"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;/script&amp;gt;

&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For clarity, the service is exposed via Ngrok:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ngrok.com/download"&gt;ngrok - download&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; https://ngrok-agent.s3.amazonaws.com/ngrok.asc | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/trusted.gpg.d/ngrok.asc &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb https://ngrok-agent.s3.amazonaws.com buster main"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/ngrok.list &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;ngrok

ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;ngrok http 10.26.23.35:80

ngrok &lt;span class="o"&gt;(&lt;/span&gt;Ctrl+C to quit&lt;span class="o"&gt;)&lt;/span&gt;

Add Okta or Azure to protect your ngrok dashboard with SSO: https://ngrok.com/dashSSO                                                                                              

Session Status                online
Account                       eddi now &lt;span class="o"&gt;(&lt;/span&gt;Plan: Free&lt;span class="o"&gt;)&lt;/span&gt;
Version                       3.1.1
Region                        Europe &lt;span class="o"&gt;(&lt;/span&gt;eu&lt;span class="o"&gt;)&lt;/span&gt;
Latency                       -
Web Interface                 http://127.0.0.1:4040
Forwarding                    https://f1e1-172-105-130-186.eu.ngrok.io -&amp;gt; http://10.26.23.35:80

Connections                   ttl     opn     rt1     rt5     p50     p90
                              0       0       0.00    0.00    0.00    0.00
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AOZEfTw---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A2wTYFmckkugU5tvScw9nSg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AOZEfTw---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A2wTYFmckkugU5tvScw9nSg.png" alt="" width="880" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TY0dAQ9s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AFA6OQ_K1CHvwUn8BxG3Y-Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TY0dAQ9s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AFA6OQ_K1CHvwUn8BxG3Y-Q.png" alt="" width="880" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1VTxrk9S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2APBSUifmNDDVy5_vQdOJTuA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1VTxrk9S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2APBSUifmNDDVy5_vQdOJTuA.png" alt="" width="880" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F6EOJGY7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A-5--8ahDQKp7ng4BHV2f5Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F6EOJGY7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A-5--8ahDQKp7ng4BHV2f5Q.png" alt="" width="880" height="589"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yYV8WLYx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A4O6xnH-Xf99QvugxgA_RiA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yYV8WLYx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A4O6xnH-Xf99QvugxgA_RiA.png" alt="" width="880" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jO-uVE29--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AaYQOaTVfyMLqrjHg6PTF3w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jO-uVE29--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AaYQOaTVfyMLqrjHg6PTF3w.png" alt="" width="880" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A quick look at observability in Cilium with Hubble, whose graphical interface is also exposed as a LoadBalancer service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc &lt;span class="nt"&gt;-A&lt;/span&gt;
NAMESPACE    NAME            TYPE          CLUSTER-IP     EXTERNAL-IP  PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                     AGE
default      kubernetes      ClusterIP     10.43.0.1      &amp;lt;none&amp;gt;       443/TCP                      72m
kube-system  kube-dns        ClusterIP     10.43.0.10     &amp;lt;none&amp;gt;       53/UDP,53/TCP,9153/TCP       72m
kube-system  metrics-server  ClusterIP     10.43.252.18   &amp;lt;none&amp;gt;       443/TCP                      72m
kube-system  hubble-peer     ClusterIP     10.43.92.181   &amp;lt;none&amp;gt;       443/TCP                      49m
kube-system  traefik         LoadBalancer  10.43.155.56   10.26.23.32  80:30264/TCP,443:31266/TCP   48m
kube-system  hubble-relay    LoadBalancer  10.43.128.151  10.26.23.33  80:31405/TCP                 45m
kube-system  hubble-metrics  ClusterIP     None           &amp;lt;none&amp;gt;       9965/TCP                     45m
kube-system  hubble-ui       LoadBalancer  10.43.66.116   10.26.23.34  80:30183/TCP                 45m
default      fcdemo-service  LoadBalancer  10.43.212.198  10.26.23.35  80:30918/TCP                 19m

root@localhost:~# ssh &lt;span class="nt"&gt;-L&lt;/span&gt; 0.0.0.0:80:10.43.66.116:80 ubuntu@10.26.23.163
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bu88pxH0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AlKKMOBivgzXiyZ7Ca_6v4A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bu88pxH0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AlKKMOBivgzXiyZ7Ca_6v4A.png" alt="" width="880" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GDSHPwKs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AE4MN4MTQpZXo8amujHKpbA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GDSHPwKs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AE4MN4MTQpZXo8amujHKpbA.png" alt="" width="880" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tI76yfmZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Agy8_zAXhjHbuemwKG9v2sA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tI76yfmZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Agy8_zAXhjHbuemwKG9v2sA.png" alt="" width="880" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that Cilium also offers the option of configuring proxy load balancing for Kubernetes services.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is useful for use cases such as gRPC load balancing.&lt;/p&gt;

&lt;p&gt;Once enabled, traffic to a Kubernetes service is redirected to a Cilium-managed Envoy proxy for load balancing. This feature (Proxy Load Balancing for Kubernetes Services, beta) is independent of the Kubernetes Ingress support:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://docs.cilium.io/en/latest/network/servicemesh/envoy-load-balancing/"&gt;Proxy Load Balancing for Kubernetes Services (beta) - Cilium 1.13.0 documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Testing with this deployment, which includes test workloads consisting of a client Deployment and an echo-service Service backed by two Pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/cilium/cilium/HEAD/examples/kubernetes/servicemesh/envoy/test-application-proxy-loadbalancing.yaml
configmap/coredns-configmap created
deployment.apps/client created
deployment.apps/echo-service created
service/echo-service created

ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;watch &lt;span class="nt"&gt;-c&lt;/span&gt; kubectl get po,svc &lt;span class="nt"&gt;-A&lt;/span&gt;
ubuntu@k3s1:~&lt;span class="nv"&gt;$ CLIENT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;client &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.items[0].metadata.name}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nv"&gt;$CLIENT&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; curl &lt;span class="nt"&gt;-v&lt;/span&gt; echo-service:8080/

&lt;span class="k"&gt;*&lt;/span&gt; Trying 10.43.199.92:8080...
&lt;span class="k"&gt;*&lt;/span&gt; Connected to echo-service &lt;span class="o"&gt;(&lt;/span&gt;10.43.199.92&lt;span class="o"&gt;)&lt;/span&gt; port 8080 &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="c"&gt;#0)&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; GET / HTTP/1.1
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Host: echo-service:8080
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; User-Agent: curl/7.83.1
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Accept: &lt;span class="k"&gt;*&lt;/span&gt;/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; 
&lt;span class="k"&gt;*&lt;/span&gt; Mark bundle as not supporting multiuse
&amp;lt; HTTP/1.1 200 OK
&amp;lt; X-Powered-By: Express
&amp;lt; Vary: Origin, Accept-Encoding
&amp;lt; Access-Control-Allow-Credentials: &lt;span class="nb"&gt;true&lt;/span&gt;
&amp;lt; Accept-Ranges: bytes
&amp;lt; Cache-Control: public, max-age&lt;span class="o"&gt;=&lt;/span&gt;0
&amp;lt; Last-Modified: Wed, 21 Sep 2022 10:25:56 GMT
&amp;lt; ETag: W/&lt;span class="s2"&gt;"809-1835f952f20"&lt;/span&gt;
&amp;lt; Content-Type: text/html&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;UTF-8
&amp;lt; Content-Length: 2057
&amp;lt; Date: Wed, 01 Mar 2023 21:12:11 GMT
&amp;lt; Connection: keep-alive
&amp;lt; Keep-Alive: &lt;span class="nb"&gt;timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5
&amp;lt; 
&amp;lt;html&amp;gt;
  &amp;lt;&lt;span class="nb"&gt;head&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;&lt;span class="nb"&gt;link
      &lt;/span&gt;&lt;span class="nv"&gt;rel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"stylesheet"&lt;/span&gt;
      &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://use.fontawesome.com/releases/v5.8.2/css/all.css"&lt;/span&gt;
      &lt;span class="nv"&gt;integrity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"sha384-oS3vJWv+0UjzBfQzYUhtDYW+Pj2yciDJxpsK1OYPAYjqT085Qq/1cq5FLXAZQ7Ay"&lt;/span&gt;
      &lt;span class="nv"&gt;crossorigin&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"anonymous"&lt;/span&gt;
    /&amp;gt;
    &amp;lt;&lt;span class="nb"&gt;link &lt;/span&gt;&lt;span class="nv"&gt;rel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"stylesheet"&lt;/span&gt; &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"style.css"&lt;/span&gt; /&amp;gt;
    &amp;lt;title&amp;gt;JSON Server&amp;lt;/title&amp;gt;
  &amp;lt;/head&amp;gt;

  &amp;lt;body&amp;gt;
    &amp;lt;header&amp;gt;
      &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"container"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &amp;lt;nav&amp;gt;
          &amp;lt;ul&amp;gt;
            &amp;lt;li &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"title"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
              JSON Server
            &amp;lt;/li&amp;gt;
            &amp;lt;li&amp;gt;
              &amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/users/typicode/sponsorship"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
                &amp;lt;i &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"fas fa-heart"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;/i&amp;gt;GitHub Sponsors
              &amp;lt;/a&amp;gt;
            &amp;lt;/li&amp;gt;
            &amp;lt;li&amp;gt;
              &amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://my-json-server.typicode.com"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
                &amp;lt;i &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"fas fa-burn"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;/i&amp;gt;My JSON Server
              &amp;lt;/a&amp;gt;
            &amp;lt;/li&amp;gt;
            &amp;lt;li&amp;gt;
              &amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://thanks.typicode.com"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
                &amp;lt;i &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"far fa-laugh"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;/i&amp;gt;Supporters
              &amp;lt;/a&amp;gt;
            &amp;lt;/li&amp;gt;
          &amp;lt;/ul&amp;gt;
        &amp;lt;/nav&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/header&amp;gt;
    &amp;lt;main&amp;gt;
      &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"container"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &amp;lt;h1&amp;gt;Congrats!&amp;lt;/h1&amp;gt;
        &amp;lt;p&amp;gt;
          You&lt;span class="s1"&gt;'re successfully running JSON Server
          &amp;lt;br /&amp;gt;
          ✧*｡٩(ˊᗜˋ*)و✧*｡
        &amp;lt;/p&amp;gt;

        &amp;lt;div id="resources"&amp;gt;&amp;lt;/div&amp;gt;

        &amp;lt;p&amp;gt;
          To access and modify resources, you can use any HTTP method:
        &amp;lt;/p&amp;gt;
        &amp;lt;p&amp;gt;
          &amp;lt;code&amp;gt;GET&amp;lt;/code&amp;gt;
          &amp;lt;code&amp;gt;POST&amp;lt;/code&amp;gt;
          &amp;lt;code&amp;gt;PUT&amp;lt;/code&amp;gt;
          &amp;lt;code&amp;gt;PATCH&amp;lt;/code&amp;gt;
          &amp;lt;code&amp;gt;DELETE&amp;lt;/code&amp;gt;
          &amp;lt;code&amp;gt;OPTIONS&amp;lt;/code&amp;gt;
        &amp;lt;/p&amp;gt;

        &amp;lt;div id="custom-routes"&amp;gt;&amp;lt;/div&amp;gt;

        &amp;lt;h1&amp;gt;Documentation&amp;lt;/h1&amp;gt;
        &amp;lt;p&amp;gt;
          &amp;lt;a href="https://github.com/typicode/json-server"&amp;gt;
            README
          &amp;lt;/a&amp;gt;
        &amp;lt;/p&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/main&amp;gt;

    &amp;lt;footer&amp;gt;
      &amp;lt;div class="container"&amp;gt;
        &amp;lt;p&amp;gt;
          To replace this page, create a
          &amp;lt;code&amp;gt;./public/index.html&amp;lt;/code&amp;gt; file.
        &amp;lt;/p&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/footer&amp;gt;

    &amp;lt;script src="script.js"&amp;gt;&amp;lt;/script&amp;gt;
  &amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
* Connection #0 to host echo-service left intact
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Adding a Layer 7 policy introduces the Envoy proxy into the path of this traffic: the request now goes through the Envoy proxy before heading on to the backend …&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl annotate service echo-service service.cilium.io/lb-l7&lt;span class="o"&gt;=&lt;/span&gt;enabled
service/echo-service annotated

ubuntu@k3s1:~&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nv"&gt;$CLIENT&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; curl &lt;span class="nt"&gt;-v&lt;/span&gt; echo-service:8080/
&lt;span class="k"&gt;*&lt;/span&gt; Trying 10.43.199.92:8080...
&lt;span class="k"&gt;*&lt;/span&gt; Connected to echo-service &lt;span class="o"&gt;(&lt;/span&gt;10.43.199.92&lt;span class="o"&gt;)&lt;/span&gt; port 8080 &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="c"&gt;#0)&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; GET / HTTP/1.1
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Host: echo-service:8080
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; User-Agent: curl/7.83.1
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Accept: &lt;span class="k"&gt;*&lt;/span&gt;/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; 
&lt;span class="k"&gt;*&lt;/span&gt; Mark bundle as not supporting multiuse
&amp;lt; HTTP/1.1 200 OK
&amp;lt; X-Powered-By: Express
&amp;lt; Vary: Origin, Accept-Encoding
&amp;lt; Access-Control-Allow-Credentials: &lt;span class="nb"&gt;true&lt;/span&gt;
&amp;lt; Accept-Ranges: bytes
&amp;lt; Cache-Control: public, max-age&lt;span class="o"&gt;=&lt;/span&gt;0
&amp;lt; Last-Modified: Wed, 21 Sep 2022 10:25:56 GMT
&amp;lt; ETag: W/&lt;span class="s2"&gt;"809-1835f952f20"&lt;/span&gt;
&amp;lt; Content-Type: text/html&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;UTF-8
&amp;lt; Content-Length: 2057
&amp;lt; Date: Wed, 01 Mar 2023 21:12:33 GMT
&amp;lt; Connection: keep-alive
&amp;lt; Keep-Alive: &lt;span class="nb"&gt;timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5
&amp;lt; 
&amp;lt;html&amp;gt;
  &amp;lt;&lt;span class="nb"&gt;head&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;&lt;span class="nb"&gt;link
      &lt;/span&gt;&lt;span class="nv"&gt;rel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"stylesheet"&lt;/span&gt;
      &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://use.fontawesome.com/releases/v5.8.2/css/all.css"&lt;/span&gt;
      &lt;span class="nv"&gt;integrity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"sha384-oS3vJWv+0UjzBfQzYUhtDYW+Pj2yciDJxpsK1OYPAYjqT085Qq/1cq5FLXAZQ7Ay"&lt;/span&gt;
      &lt;span class="nv"&gt;crossorigin&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"anonymous"&lt;/span&gt;
    /&amp;gt;
    &amp;lt;&lt;span class="nb"&gt;link &lt;/span&gt;&lt;span class="nv"&gt;rel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"stylesheet"&lt;/span&gt; &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"style.css"&lt;/span&gt; /&amp;gt;
    &amp;lt;title&amp;gt;JSON Server&amp;lt;/title&amp;gt;
  &amp;lt;/head&amp;gt;

  &amp;lt;body&amp;gt;
    &amp;lt;header&amp;gt;
      &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"container"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &amp;lt;nav&amp;gt;
          &amp;lt;ul&amp;gt;
            &amp;lt;li &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"title"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
              JSON Server
            &amp;lt;/li&amp;gt;
            &amp;lt;li&amp;gt;
              &amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/users/typicode/sponsorship"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
                &amp;lt;i &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"fas fa-heart"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;/i&amp;gt;GitHub Sponsors
              &amp;lt;/a&amp;gt;
            &amp;lt;/li&amp;gt;
            &amp;lt;li&amp;gt;
              &amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://my-json-server.typicode.com"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
                &amp;lt;i &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"fas fa-burn"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;/i&amp;gt;My JSON Server
              &amp;lt;/a&amp;gt;
            &amp;lt;/li&amp;gt;
            &amp;lt;li&amp;gt;
              &amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://thanks.typicode.com"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
                &amp;lt;i &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"far fa-laugh"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;/i&amp;gt;Supporters
              &amp;lt;/a&amp;gt;
            &amp;lt;/li&amp;gt;
          &amp;lt;/ul&amp;gt;
        &amp;lt;/nav&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/header&amp;gt;
    &amp;lt;main&amp;gt;
      &amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"container"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &amp;lt;h1&amp;gt;Congrats!&amp;lt;/h1&amp;gt;
        &amp;lt;p&amp;gt;
          You&lt;span class="s1"&gt;'re successfully running JSON Server
          &amp;lt;br /&amp;gt;
          ✧*｡٩(ˊᗜˋ*)و✧*｡
        &amp;lt;/p&amp;gt;

        &amp;lt;div id="resources"&amp;gt;&amp;lt;/div&amp;gt;

        &amp;lt;p&amp;gt;
          To access and modify resources, you can use any HTTP method:
        &amp;lt;/p&amp;gt;
        &amp;lt;p&amp;gt;
          &amp;lt;code&amp;gt;GET&amp;lt;/code&amp;gt;
          &amp;lt;code&amp;gt;POST&amp;lt;/code&amp;gt;
          &amp;lt;code&amp;gt;PUT&amp;lt;/code&amp;gt;
          &amp;lt;code&amp;gt;PATCH&amp;lt;/code&amp;gt;
          &amp;lt;code&amp;gt;DELETE&amp;lt;/code&amp;gt;
          &amp;lt;code&amp;gt;OPTIONS&amp;lt;/code&amp;gt;
        &amp;lt;/p&amp;gt;

        &amp;lt;div id="custom-routes"&amp;gt;&amp;lt;/div&amp;gt;

        &amp;lt;h1&amp;gt;Documentation&amp;lt;/h1&amp;gt;
        &amp;lt;p&amp;gt;
          &amp;lt;a href="https://github.com/typicode/json-server"&amp;gt;
            README
          &amp;lt;/a&amp;gt;
        &amp;lt;/p&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/main&amp;gt;

    &amp;lt;footer&amp;gt;
      &amp;lt;div class="container"&amp;gt;
        &amp;lt;p&amp;gt;
          To replace this page, create a
          &amp;lt;code&amp;gt;./public/index.html&amp;lt;/code&amp;gt; file.
        &amp;lt;/p&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/footer&amp;gt;

    &amp;lt;script src="script.js"&amp;gt;&amp;lt;/script&amp;gt;
  &amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
* Connection #0 to host echo-service left intact
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--43jL5yox--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A4lpScjJ1E6QZ27bvt6bsOA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--43jL5yox--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A4lpScjJ1E6QZ27bvt6bsOA.png" alt="" width="880" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cilium and kube-router can work together, with kube-router handling BGP peering and route propagation while Cilium handles policy enforcement and load balancing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.cilium.io/en/stable/network/kube-router/"&gt;Using Kube-Router to Run BGP - Cilium 1.13.0 documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;BIRD can also be used alongside Cilium in the same way:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.cilium.io/en/stable/network/bird/"&gt;Using BIRD to run BGP - Cilium 1.13.0 documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Routing across Kubernetes clusters is provided by Cluster Mesh:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://isovalent.com/blog/post/topology-aware-routing-and-service-mesh-across-clusters-with-cluster-mesh/"&gt;Topology Aware Routing across Clusters with Cluster Mesh&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2jNGonHP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AsZ42xax8KYQmcXZt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2jNGonHP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AsZ42xax8KYQmcXZt.png" alt="" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A recap of the new features in the latest Cilium release, plus a few additional links …&lt;/p&gt;

&lt;p&gt;&lt;a href="https://isovalent.com/blog/post/cilium-release-113/"&gt;Cilium 1.13 - Gateway API, mTLS datapath, Service Mesh, BIG TCP, SBOM, SNI NetworkPolicy, ... - Isovalent&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gfnmeafg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/593/0%2AV_R__t6pxI9qFBpD.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gfnmeafg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/593/0%2AV_R__t6pxI9qFBpD.jpg" alt="" width="593" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>networking</category>
      <category>loadbalancing</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
