<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ridae HAMDANI</title>
    <description>The latest articles on Forem by Ridae HAMDANI (@ridaehamdani).</description>
    <link>https://forem.com/ridaehamdani</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F251669%2F6e07f612-505c-42d9-9089-552d90e36676.jpeg</url>
      <title>Forem: Ridae HAMDANI</title>
      <link>https://forem.com/ridaehamdani</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ridaehamdani"/>
    <language>en</language>
    <item>
      <title>Master Disk I/O Metrics Monitoring</title>
      <dc:creator>Ridae HAMDANI</dc:creator>
      <pubDate>Mon, 30 Jun 2025 13:14:00 +0000</pubDate>
      <link>https://forem.com/ridaehamdani/master-disk-io-metrics-monitoring-1ddc</link>
      <guid>https://forem.com/ridaehamdani/master-disk-io-metrics-monitoring-1ddc</guid>
      <description>&lt;p&gt;Disk IOPS (input/output per second) stats are metrics that gives you insights  on how your system interact with your storage devices. These metrics measure read and write operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fundamental Disk IOPS metrics
&lt;/h2&gt;

&lt;h3&gt;
  
  
  IOPS (Input/Output Operations Per Second)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8losgvjs4j6wc2dxc589.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8losgvjs4j6wc2dxc589.png" alt="Grafana disk I/O visualisation" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;IOPS measures how many input (write) and output (read) transactions your storage devices perform per second.&lt;/p&gt;

&lt;p&gt;IOPS metrics are exported by the Prometheus node exporter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;irate(node_disk_reads_completed_total[$__rate_interval])
irate(node_disk_writes_completed_total[$__rate_interval])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;IOPS capacity depends on your disk, and a higher number generally means better performance, but context matters: an IOPS figure is meaningless without a latency figure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;High IOPS + low latency = healthy&lt;/li&gt;
&lt;li&gt;Low IOPS + high latency = disk contention&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Latency (Average time per operation)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9k2bx1chbamzl3k42usd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9k2bx1chbamzl3k42usd.png" alt="Grafana disk latency visualisation" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Latency represents the time it takes for the I/O request to be completed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Latency = Queue time + processing (READ or Write) operation.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Latency is the most important metric to consider for storage performance. Lower latency means better performance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;irate(node_disk_write_time_seconds_total[$__rate_interval])/irate(node_disk_writes_completed_total[$__rate_interval]) # Read Latency
irate(node_disk_write_time_seconds_total[$__rate_interval])/irate(node_disk_writes_completed_total[$__rate_interval]) # Write Latency
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Utilisation
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1h8440q8p5ngsdr1q0i6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1h8440q8p5ngsdr1q0i6.png" alt="Grafana disk Utilisation visualisation" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Disk utilisation is the percentage of time the disk is busy processing I/O operations.&lt;/p&gt;

&lt;p&gt;High disk I/O time utilisation (e.g. &amp;gt; 80%) means the disk is busy almost all the time handling I/O requests. It can indicate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Slow or failing hardware&lt;/li&gt;
&lt;li&gt;Concurrent access bottlenecks&lt;/li&gt;
&lt;li&gt;High I/O workload&lt;/li&gt;
&lt;li&gt;Inefficient access patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If latency is also high, it means the disk can't keep up, which leads to performance degradation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rate(node_disk_io_time_seconds_total[1m]) * 100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
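
&lt;p&gt;Utilisation alone does not distinguish a busy disk from a saturated one. As a complementary sketch (assuming your node exporter also exposes &lt;code&gt;node_disk_io_time_weighted_seconds_total&lt;/code&gt;, whose rate approximates the average I/O queue depth), you can chart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;irate(node_disk_io_time_weighted_seconds_total[$__rate_interval])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A queue depth consistently above 1 on a single spinning disk is usually a sign of saturation rather than mere activity.&lt;/p&gt;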



&lt;ul&gt;
&lt;li&gt;High utilization + high latency = saturated disk&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Throughput or Bandwidth
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fro327t12w6zf1cws10lr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fro327t12w6zf1cws10lr.png" alt="Grafana disk bandwidth visualisation" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Throughput or bandwidth represents the amount of data in megabytes per second (MB/s) transferred to or from the storage device.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;irate(node_disk_written_bytes_tota[$__rate_interval]) 
irate(node_disk_read_bytes_total[$__rate_interval])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is how to &lt;em&gt;interpret&lt;/em&gt; the bandwidth metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If your read/write MB/s is &lt;strong&gt;close to the specs&lt;/strong&gt; of your disk, &lt;strong&gt;it’s OK&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;If you see &lt;strong&gt;very low bandwidth&lt;/strong&gt; but &lt;strong&gt;high I/O wait&lt;/strong&gt; or &lt;strong&gt;high disk utilization&lt;/strong&gt;, the disk might be overloaded on &lt;strong&gt;small random reads&lt;/strong&gt; (IOPS problem, not bandwidth).&lt;/li&gt;
&lt;li&gt;If both bandwidth and IOPS are high, the disk is &lt;strong&gt;at full load&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
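
&lt;p&gt;For a single at-a-glance panel, the read and write rates above can be combined into one total-throughput query (a sketch; the grouping labels may need adjusting for your setup):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sum by (instance, device) (
  irate(node_disk_read_bytes_total[$__rate_interval])
  + irate(node_disk_written_bytes_total[$__rate_interval])
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;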

</description>
      <category>monitoring</category>
      <category>sre</category>
      <category>devops</category>
      <category>linux</category>
    </item>
    <item>
      <title>Optimizing GitLab CI for Readability and Maintainability: From 1K to 600 Lines!</title>
      <dc:creator>Ridae HAMDANI</dc:creator>
      <pubDate>Fri, 26 Jan 2024 10:36:11 +0000</pubDate>
      <link>https://forem.com/worldlinetech/optimizing-gitlab-ci-for-readability-and-maintainability-from-1k-to-600-lines-9bp</link>
      <guid>https://forem.com/worldlinetech/optimizing-gitlab-ci-for-readability-and-maintainability-from-1k-to-600-lines-9bp</guid>
      <description>&lt;p&gt;This short article aims to &lt;strong&gt;share some tips I learnt from optimizing our GitLab CI file&lt;/strong&gt; (&lt;code&gt;.gitlab-ci.yaml&lt;/code&gt;) for building multiple Docker images.&lt;/p&gt;

&lt;p&gt;By implementing these techniques, you’ll not only &lt;strong&gt;make your GitLab CI files more readable&lt;/strong&gt; but also set the stage for a more agile and adaptable CI/CD environment.&lt;br&gt;
Let’s delve into the practical insights that can elevate your DevOps experience and contribute to a more efficient and maintainable codebase.&lt;/p&gt;
&lt;h2&gt;
  
  
  1- Use default runner tag and override it when needed:
&lt;/h2&gt;

&lt;p&gt;In case you are using a standard or generic runner for the majority of your jobs and a different runner for a few jobs, consider defining a global runner tag instead of defining one for each job.&lt;br&gt;
You can then override the global tag when necessary.&lt;/p&gt;

&lt;p&gt;❌ Don’t do:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;compile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;...&lt;/span&gt;
  &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;myruuner&lt;/span&gt;
&lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;...&lt;/span&gt;
  &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;testrunner&lt;/span&gt;  
&lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;...&lt;/span&gt;
  &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;myrunner&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Do:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;myrunner&lt;/span&gt;

&lt;span class="na"&gt;compile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;...&lt;/span&gt;

&lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;...&lt;/span&gt;
  &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;testrunner&lt;/span&gt;  
&lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2- Use &lt;code&gt;parallel:matrix&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;GitLab has a powerful CI feature to run a matrix of jobs in parallel.&lt;br&gt;
In our case we had multiple jobs to test every customized Docker JDK image and ensure that it does not break a standard maven build.&lt;br&gt;
Taking advantage of the &lt;code&gt;parallel:matrix&lt;/code&gt; keyword can save you multiple lines of code and make your CI files more readable and easier to maintain.&lt;/p&gt;

&lt;p&gt;Here is an example of refactoring our test jobs:&lt;/p&gt;

&lt;p&gt;❌ Instead of configuring multiple jobs with the same script part, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;test-mvn-jdk-11&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;openjdk-11&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-image&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mvn -V compile ...&lt;/span&gt;

&lt;span class="na"&gt;test-mvn-jdk-17&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;openjdk-17&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-image&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mvn -V compile ...&lt;/span&gt;

&lt;span class="na"&gt;test-mvn-jdk-21&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;openjdk-21&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-image&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mvn -V compile ...&lt;/span&gt;
&lt;span class="s"&gt;....&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ You can easily replace them with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;test-mvn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$JAVA_IMAGE&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-image&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mvn -V compile ...&lt;/span&gt;
  &lt;span class="na"&gt;parallel&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matrix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;JAVA_IMAGE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;openjdk-11&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;&lt;span class="nv"&gt;openjdk-17&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;&lt;span class="nv"&gt;openjdk-21&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will also make it easier to incorporate additional JDK versions in the future, requiring minimal adjustments and eliminating the need for extensive line changes or introducing new jobs.&lt;/p&gt;

&lt;h2&gt;
  
  
  3- Use rules to define variables:
&lt;/h2&gt;

&lt;p&gt;GitLab rules provide a mechanism for specifying conditions to determine when CI/CD jobs run.&lt;br&gt;
Additionally, the use of &lt;code&gt;if&lt;/code&gt; statements within these rules allows for the dynamic definition of variables based on conditions such as branches or releases, offering flexibility in variable assignment.&lt;/p&gt;

&lt;p&gt;For our repository's CI/CD workflow, we start by building Docker images and testing them whenever there is a Merge Request.&lt;br&gt;
Only when everything checks out do we create a final version &lt;code&gt;TAG&lt;/code&gt; for the release.&lt;br&gt;
Using rules reduced our build jobs by &lt;strong&gt;half&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;❌ Instead of defining two jobs, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;build-mr-jdk-11&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker build -t jdk-11:mr-validation&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$CI_PIPELINE_SOURCE == "merge_request_event"&lt;/span&gt;

&lt;span class="na"&gt;build-jdk-11&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;openjdk-11&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker build -t jdk-11:$CI_COMMIT_TAG&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$CI_COMMIT_TAG&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Use the same job and override the variables depending on the rules conditions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;build-jdk-11&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker build -t jdk-11:$TAG&lt;/span&gt;
&lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$CI_COMMIT_TAG&lt;/span&gt;
      &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;TAG&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$CI_COMMIT_TAG"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$CI_PIPELINE_SOURCE == "merge_request_event"&lt;/span&gt;
      &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;TAG&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mr-validation"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
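
&lt;p&gt;One caveat worth noting: when a project triggers pipelines for both merge requests and tags, a top-level &lt;code&gt;workflow:rules&lt;/code&gt; block can prevent duplicate or unwanted pipelines. A minimal sketch (adapt the conditions to your project):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_TAG
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this in place, pipelines run only for merge requests and tags, which pairs well with the per-job rules shown above.&lt;/p&gt;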



&lt;h2&gt;
  
  
  Conclusion: Lean back and let the magic happen 🧙‍♀
&lt;/h2&gt;

&lt;p&gt;In summary, applying these tips reduced our GitLab CI file from 1176 to 667 lines, significantly improving efficiency, manageability, and overall workflow.&lt;/p&gt;

&lt;p&gt;⭐⭐⭐ Enjoy your learning!!! ⭐⭐⭐&lt;/p&gt;

</description>
      <category>devops</category>
      <category>gitlab</category>
      <category>cicd</category>
      <category>pipelines</category>
    </item>
    <item>
      <title>Seamless GPG Key Migration: Moving Your Keys Across Machines</title>
      <dc:creator>Ridae HAMDANI</dc:creator>
      <pubDate>Fri, 29 Dec 2023 10:38:43 +0000</pubDate>
      <link>https://forem.com/ridaehamdani/seamless-gpg-key-migration-moving-your-keys-across-machines-2gha</link>
      <guid>https://forem.com/ridaehamdani/seamless-gpg-key-migration-moving-your-keys-across-machines-2gha</guid>
      <description>&lt;p&gt;I am using gpg key for a while to encrypt my files as well as sign my git commit, recently I migrated to a new virtual machine, but I come accross this challenge, how can I migrate my gpg keys to my new beast machine ?&lt;/p&gt;

&lt;p&gt;In the following article, I will share with you how I migrated my GPG keys.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feilmo0sskrpfuvf8xbzy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feilmo0sskrpfuvf8xbzy.png" alt="GPG Key Migration" width="720" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are a number of ways to back up or migrate your GPG keys:&lt;/p&gt;

&lt;h2&gt;
  
  
  Easiest way - Migrate all
&lt;/h2&gt;

&lt;p&gt;The first and easiest way is to zip up the whole GPG folder and copy it from one machine to another. This moves all your configuration and keys, and it is a good option when the target machine is a fresh machine with no GPG keys yet.&lt;/p&gt;

&lt;p&gt;Usually the GPG configuration files are located in the &lt;code&gt;~/.gnupg&lt;/code&gt; directory.&lt;/p&gt;
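
&lt;p&gt;As a rough sketch, the whole-folder migration can look like this (the archive name is illustrative; adjust paths to your machines):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# On the old machine: archive the whole GnuPG directory
tar czf gnupg-backup.tar.gz -C ~ .gnupg

# Copy it over (USB stick, scp, etc.), then on the new machine:
tar xzf gnupg-backup.tar.gz -C ~
chmod 700 ~/.gnupg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;chmod 700&lt;/code&gt; matters: GnuPG warns about a home directory with unsafe permissions.&lt;/p&gt;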

&lt;h2&gt;
  
  
  Migrate only specific keys
&lt;/h2&gt;

&lt;p&gt;The second way is to move only a specific key.&lt;br&gt;
1- Start by finding the ID of the key(s) you want to migrate using this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gpg --list-secret-keys --keyid-format LONG
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should return something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sec   rsa4096/[**YOUR KEY ID**] 2024-03-30 [SC]
      ABCDEFGHIJKLMNOPQRSTUVWXYZ
uid                 [ unknown] username (KEY NAME) &amp;lt;user@domain&amp;gt;
ssb   rsa4096/ABCDEFGHIJKL 2024-03-30 [E]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your key ID is the part after the key type and size (&lt;code&gt;rsa4096/&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;2- Export the public key in preparation for the move:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gpg --export -a [your key id] &amp;gt; gpg-pub.asc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3- Prepare the secret key for migration (if password protected, you’ll be prompted to enter it)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gpg --export-secret-keys -a [your key] &amp;gt; gpg-sc.asc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4- Move the key pair from the current directory to your USB stick, or transfer them however else you like (you can simply copy/paste them, as they are just text files).&lt;/p&gt;

&lt;p&gt;5- Once on the new machine, import them&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gpg --import gpg-pub.asc
$ gpg --import gpg-sc.asc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the key is password protected, you’ll be prompted to enter the password.&lt;/p&gt;
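
&lt;p&gt;Note that an imported key typically shows up with &lt;code&gt;[ unknown]&lt;/code&gt; trust, as in the listing above. If it is your own key, you can mark it as ultimately trusted with the interactive edit mode (a sketch; replace the key ID with yours):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gpg --edit-key [your key id]
gpg&amp;gt; trust
# choose 5 (ultimate) and confirm, then:
gpg&amp;gt; quit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;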

&lt;p&gt;⭐⭐⭐ Enjoy your learning….!!! ⭐⭐⭐&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Support My Blog ☕️&lt;/strong&gt;&lt;br&gt;
If you enjoy my technical blog posts and find them valuable, please consider &lt;a href="https://www.buymeacoffee.com/ridaeh"&gt;buying me a coffee here&lt;/a&gt;. Your support goes a long way in helping me produce more quality articles and content. If you have any feedback or suggestions for improving my code, please leave a comment on this post or send me a message on my &lt;a href="https://www.linkedin.com/in/ridaehamdani/"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Understanding Allocatable Memory and CPU in Kubernetes Nodes</title>
      <dc:creator>Ridae HAMDANI</dc:creator>
      <pubDate>Wed, 04 Oct 2023 07:55:20 +0000</pubDate>
      <link>https://forem.com/ridaehamdani/understanding-allocatable-memory-and-cpu-in-kubernetes-nodes-4hbm</link>
      <guid>https://forem.com/ridaehamdani/understanding-allocatable-memory-and-cpu-in-kubernetes-nodes-4hbm</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4psi5e5h9rlgnj0ootp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4psi5e5h9rlgnj0ootp.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Kubernetes, allocatable resources refer to the portion of a node’s total resources that can be allocated to running containers or pods. These resources are distinct from the node’s total capacity, which includes all available CPU and memory resources. The key takeaway here is that not all resources on a node are available for use by workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to find allocatable resources capacity ?
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe node MY_NODE_1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;This command describes your node configuration, including the Capacity &amp;amp; Allocatable information:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

...
Capacity:
  cpu:                12
  ephemeral-storage:  31436544Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             49156248Ki
  pods:               110
Allocatable:
  cpu:                10500m
  ephemeral-storage:  28447630903
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             45908120Ki
  pods:               110
...


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;As we can see, my node has 12 CPUs but only 10.5 CPUs are allocatable: 1.5 CPUs are reserved for the system and the kubelet, which is a lot. For memory, the node has about 49GB, of which roughly 3GB are reserved for the OS &amp;amp; kube system; the remaining ~46GB are available for containers.&lt;/p&gt;
&lt;h2&gt;
  
  
  Understanding Node Capacity and Allocatable Resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9moaenxaa3h5nhen3ju.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9moaenxaa3h5nhen3ju.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s break down how Kubernetes calculates allocatable resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Total Node Capacity:&lt;/strong&gt; This is the sum of all the physical CPU and memory resources available on the node. It represents the maximum capacity the node can provide for running containers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes system daemons reserved resources&lt;/strong&gt;: Kubernetes sets aside a portion of the node’s resources for system daemons, such as the kubelet, container runtime, and system monitoring tools. These resources are reserved to ensure that critical cluster components can function effectively.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node OS Overhead&lt;/strong&gt;: Some resources are consumed by the operating system itself and are not available for workloads. This includes the kernel, system processes, and filesystem caches.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The formula to calculate allocatable resources is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Allocatable = Total Node Capacity - System Reserved Resources - Kube system daemons Reserved resources
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
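
&lt;p&gt;Plugging in the CPU figures from the node described above (the exact split between system and kube reserved depends on your kubelet flags, so treat this as illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Allocatable CPU = 12 CPU (capacity) - 1.5 CPU (reserved) = 10.5 CPU
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;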


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
## Why Allocatable Resources Matter ?
For several reasons:

- **Resource Guarantees:** Kubernetes ensures that pods receive the resources they request, and allocatable resources are what Kubernetes uses to make these guarantees. If there are not enough allocatable resources on a node to meet a pod’s requirements, scheduling that pod will fail.
- **Efficient Resource Utilization:** Efficiently allocating resources is essential for maximizing node utilization and optimizing costs. By managing allocatable resources effectively, you can prevent resource waste and overcommitting nodes.
- **Node Stability:** If a node runs out of allocatable resources, it can become unstable or unresponsive. Properly managing allocatable resources helps maintain node stability and prevents cluster disruptions.

## What quota should you use ?
The following article highlight the quota used by the GCP,AWS &amp;amp; Azure.

&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://learnk8s.io/allocatable-resources?source=post_page-----0d9c24b53827--------------------------------" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic.learnk8s.io%2F2f459b0416493403e14ea04caf12bd45.png" height="auto" class="m-0"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://learnk8s.io/allocatable-resources?source=post_page-----0d9c24b53827--------------------------------" rel="noopener noreferrer" class="c-link"&gt;
          Allocatable memory and CPU in Kubernetes Nodes
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          Pods deployed in your Kubernetes cluster consume resources such as memory, CPU and storage. However, not all resources in a Node can be used to run Pods.
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic.learnk8s.io%2F42aedfb7aa4f77d9f4fccd385dc684df.ico"&gt;
        learnk8s.io
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;
&lt;h2&gt;
  
  
  How to configure the Node Allocatable resources ?
&lt;/h2&gt;

&lt;p&gt;1- Add the --kube-reserved flag to the kubelet command line with the desired CPU memory reservation value. For example, to reserve 500m CPU memory (0.5 CPU cores), add the following line to the [Service] section of the file:&lt;/p&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Environment="KUBELET_EXTRA_ARGS=--kube-reserved=cpu=500m,memory=100Mi"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Make sure to adjust these values according to your requirements.&lt;/p&gt;

&lt;p&gt;2- Save the file and exit the text editor.&lt;/p&gt;

&lt;p&gt;3- Reload the systemd service configuration to apply the changes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl daemon-reload
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;4- Restart the kubelet service to apply the new configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart kubelet
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;5- Verify that the kubelet has started successfully:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status kubelet
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The kubelet should now be running with the updated resource reservation. Repeat these steps on each node where you want to change the reservation.&lt;/p&gt;

&lt;p&gt;Keep in mind that modifying the kubelet configuration on a running production cluster can impact node performance, so be cautious and test changes in a controlled environment first. Additionally, ensure that you have backups and a rollback plan in case the changes cause any issues.&lt;/p&gt;
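&lt;p&gt;As a quick sanity check (the node name below is a placeholder), you can compare the Capacity and Allocatable sections of the node after the kubelet restarts; Allocatable should now be reduced by the reserved amounts:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe node &amp;lt;node-name&amp;gt; | grep -A 5 -E "Capacity|Allocatable"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;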

</description>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>What are the top 3 soft skills every software engineer should master?</title>
      <dc:creator>Ridae HAMDANI</dc:creator>
      <pubDate>Tue, 07 Jul 2020 10:00:02 +0000</pubDate>
      <link>https://forem.com/ridaehamdani/what-are-the-top-3-soft-skills-for-you-2dca</link>
      <guid>https://forem.com/ridaehamdani/what-are-the-top-3-soft-skills-for-you-2dca</guid>
      <description>&lt;p&gt;As a software engineer &lt;strong&gt;Hard skills&lt;/strong&gt; are so important for our career. However, a lot of software engineers don't focus on &lt;strong&gt;Soft skills&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So, what are the top 3 soft skills from your point of view?&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>discuss</category>
      <category>question</category>
    </item>
    <item>
      <title>Do you use gitbook, try this awesome plugin I wrote</title>
      <dc:creator>Ridae HAMDANI</dc:creator>
      <pubDate>Sat, 20 Jun 2020 11:19:45 +0000</pubDate>
      <link>https://forem.com/ridaehamdani/do-you-use-gitbook-try-this-awnsome-plugin-i-wrote-1hh3</link>
      <guid>https://forem.com/ridaehamdani/do-you-use-gitbook-try-this-awnsome-plugin-i-wrote-1hh3</guid>
      <description>&lt;p&gt;It was a time, I was looking for a Gitbook plugin to documente my commands by showing a nice terminal in my page . However I haven't found any existing plugins that satisfy my needs so, as a developer, I decided to write my own plugin and share it with the community. &lt;/p&gt;

&lt;h1&gt;
  
  
  Say hello to Terminull:
&lt;/h1&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4u6brjgove0pifly6lx1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4u6brjgove0pifly6lx1.png" alt="Gitbook plugin terminull" width="782" height="156"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a href="https://github.com/ridaeh/gitbook-plugin-terminull" rel="noopener noreferrer"&gt;&lt;strong&gt;Terminull&lt;/strong&gt;&lt;/a&gt; is a Gitbook plugin allows you to create a modern terminal for your Gitbook pages in order to documente your commands.&lt;/p&gt;

&lt;h1&gt;
  
  
  Features:
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Specify the context directory&lt;/li&gt;
&lt;li&gt;Add a comment to a command&lt;/li&gt;
&lt;li&gt;Copy a command by clicking a button&lt;/li&gt;
&lt;li&gt;Show command output&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to use it?
&lt;/h2&gt;

&lt;p&gt;To use the Terminull plugin in your Gitbook project, add the terminull plugin to the book.json file of your project, then install plugins using &lt;code&gt;gitbook install&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"plugins"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"terminull"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you are ready to write your hello world terminal.&lt;br&gt;
To create a terminal, use a fenced code block with &lt;code&gt;term&lt;/code&gt; as the language.&lt;br&gt;
e.g.:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;

```&lt;strong&gt;term&lt;/strong&gt;
gitbook-plugin-terminull$ echo 'hello terminull' # This will print hello terminull
hello terminull
```


&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;As a result, you will see this beautiful terminal on your page.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4u6brjgove0pifly6lx1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4u6brjgove0pifly6lx1.png" alt="Gitbook plugin terminull" width="782" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Do you like my work❗
&lt;/h1&gt;

&lt;p&gt;If you like this plugin, you are welcome to make a pull request to add more features. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ridaeh/gitbook-plugin-terminull" rel="noopener noreferrer"&gt;https://github.com/ridaeh/gitbook-plugin-terminull&lt;/a&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>plugin</category>
      <category>documentation</category>
    </item>
    <item>
      <title>☸️ Some changes between Helm v2 and Helm v3 that you should know</title>
      <dc:creator>Ridae HAMDANI</dc:creator>
      <pubDate>Wed, 29 Jan 2020 13:04:47 +0000</pubDate>
      <link>https://forem.com/ridaehamdani/some-changes-between-helm-v2-and-helm-v3-that-you-should-know-32ga</link>
      <guid>https://forem.com/ridaehamdani/some-changes-between-helm-v2-and-helm-v3-that-you-should-know-32ga</guid>
      <description>&lt;p&gt;I was working on deploying an application on Kubernetes with Helm v3, then I decided to do the same while using Helm v2 this time to validate the possibility of migration from helm2 to helm3.&lt;br&gt;&lt;br&gt;
In this article, I want to share with you some of the changes between v2 &amp;amp; v3, that I faced in my task when I verified the validation of my Chart and when I deployed my application with these two helm versions.&lt;/p&gt;

&lt;p&gt;Here is what this article covers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adios Tiller&lt;/li&gt;
&lt;li&gt;Helm v2 vs v3 commands.&lt;/li&gt;
&lt;li&gt;Chart apiVersion.&lt;/li&gt;
&lt;li&gt;Chart dependencies.&lt;/li&gt;
&lt;li&gt;Helm package command.&lt;/li&gt;
&lt;li&gt;Route object in chart.&lt;/li&gt;
&lt;li&gt;helm search command.&lt;/li&gt;
&lt;li&gt;How to migrate from helm2 to helm3?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;1. Adios Tiller:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
In Helm v3, Tiller is gone, and there is only the Helm client 😊.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Helm v2 vs v3 commands:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Some commands are not supported or renamed in Helm v3:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Command&lt;/th&gt;
&lt;th&gt;Helm2&lt;/th&gt;
&lt;th&gt;Helm3&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Initialize Helm client/server&lt;/td&gt;
&lt;td&gt;init&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Download a chart to your local directory&lt;/td&gt;
&lt;td&gt;fetch&lt;/td&gt;
&lt;td&gt;pull&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Given a release name, delete the release from Kubernetes&lt;/td&gt;
&lt;td&gt;delete&lt;/td&gt;
&lt;td&gt;uninstall&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Helm client environment information&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;env&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Displays the location of HELM_HOME&lt;/td&gt;
&lt;td&gt;home&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Inspect a chart&lt;/td&gt;
&lt;td&gt;inspect&lt;/td&gt;
&lt;td&gt;show&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Uninstalls Tiller from a cluster&lt;/td&gt;
&lt;td&gt;reset&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
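&lt;p&gt;For example, removing a release looks like this in the two versions (the release name below is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Helm v2
helm delete my-release --purge
# Helm v3
helm uninstall my-release
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;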

&lt;p&gt;&lt;strong&gt;3. Chart apiVersion:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Helm decides to increment the chart API version to v2 in Helm3:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Chart.yaml
-apiVersion: v1 # Helm2
+apiVersion: v2 # Helm3
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Chart dependencies:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A Helm v2 chart has a specific file called &lt;strong&gt;requirements.yaml&lt;/strong&gt; where dependencies are declared under the dependencies section. With Helm v3, this section is moved from &lt;strong&gt;requirements.yaml&lt;/strong&gt; to &lt;strong&gt;Chart.yaml&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
More information about this change can be found in the &lt;a href="https://helm.sh/docs/faq/#consolidation-of-requirements-yaml-into-chart-yaml"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;
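&lt;p&gt;As a sketch (the chart name, dependency version, and repository URL below are illustrative), the dependencies section now lives directly in Chart.yaml:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Chart.yaml (Helm v3)
apiVersion: v2
name: my-chart
version: 0.1.0
dependencies:
  - name: postgresql
    version: "8.6.4"
    repository: "https://charts.example.com"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;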

&lt;p&gt;&lt;strong&gt;5. Helm package command:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Running helm package command with Helm v2 will raise a &lt;code&gt;directory name (foo) and Chart.yaml name (bar) must match&lt;/code&gt; error if the Chart name does not match the Chart root folder name.  With Helm v3 this constraint is not compulsory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Route object in Chart:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Unlike Helm v2, a Route object in a Helm v3 chart requires the host and status fields.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="p"&gt;apiVersion: v1
kind: Route
metadata:
&lt;/span&gt;  name: {{ include "toto.name" . }}
&lt;span class="p"&gt;spec:
&lt;/span&gt;&lt;span class="gi"&gt;+  host: {{ .Values.host }}
&lt;/span&gt;  to:
    kind: Service
    name: {{ include "toto.name" . }}
    weight: 100
  port:
    targetPort: 'http'
  wildcardPolicy: None
&lt;span class="gi"&gt;+status: 
+  ingress: []
+  wildcardPolicy: None
&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;7. helm search command:&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;With &lt;strong&gt;Helm v2&lt;/strong&gt; you can use the &lt;code&gt;helm search [CHART_NAME]&lt;/code&gt; command to search for a chart in both your repo list and the Helm Hub.&lt;br&gt;&lt;br&gt;
With &lt;strong&gt;Helm v3&lt;/strong&gt; you have to specify where to look for your chart.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Usage:
  helm search [command]

Available Commands:
  hub         search for charts in the Helm Hub or an instance of Monocular
  repo        search repositories for a keyword in charts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
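&lt;p&gt;For example (the keyword nginx is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Search the repositories you have added locally
helm search repo nginx

# Search the Helm Hub
helm search hub nginx
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;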



&lt;p&gt;&lt;strong&gt;8. How to migrate from helm2 to helm3:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The Helm team made a handy plugin called &lt;code&gt;helm-2to3&lt;/code&gt; that &lt;strong&gt;facilitates&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Migration of Helm v2 configuration, data, plugin and releases to Helm v3.&lt;/li&gt;
&lt;li&gt;Cleanup of Helm v2 configuration, release data, and the Tiller deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is full documentation about this plugin: &lt;a href="https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/"&gt;https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/&lt;/a&gt;&lt;/p&gt;
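&lt;p&gt;A typical migration flow with the plugin looks roughly like this (the release name is a placeholder; always check the plugin's documentation for the current options):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install the plugin
helm plugin install https://github.com/helm/helm-2to3

# Migrate Helm v2 configuration and data
helm 2to3 move config

# Convert a Helm v2 release to Helm v3
helm 2to3 convert &amp;lt;release-name&amp;gt;

# Clean up Helm v2 data and Tiller
helm 2to3 cleanup
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;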

&lt;p&gt;Happy Helming folks 😊❤️️ !!!&lt;/p&gt;

</description>
      <category>helm</category>
      <category>devops</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Why do you need to set limits for container resource usage?</title>
      <dc:creator>Ridae HAMDANI</dc:creator>
      <pubDate>Tue, 07 Jan 2020 15:08:10 +0000</pubDate>
      <link>https://forem.com/ridaehamdani/why-do-you-need-to-set-limits-for-container-resource-usage-2h4j</link>
      <guid>https://forem.com/ridaehamdani/why-do-you-need-to-set-limits-for-container-resource-usage-2h4j</guid>
      <description>&lt;h1&gt;
  
  
  Why do we need to set limits for container resource usage?
&lt;/h1&gt;

&lt;p&gt;Enforcing resource limits is a critical best practice for running containers on a host or on shared platforms such as Kubernetes or OpenShift.&lt;/p&gt;

&lt;p&gt;If a container runs without resource limits, it may use all the resources available on the host, producing a disaster in production on a shared platform. This lack of deterministic resource usage can be a big problem for other applications on the host or for container orchestrators. &lt;/p&gt;

&lt;p&gt;Setting resource limits will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;prevent applications from consuming more than their expected resources on the host.&lt;/li&gt;
&lt;li&gt;provide autoscaling controllers with the critical information needed to add and remove instances of a containerized service based on resource usage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Setting resource limits for a Docker container is not required by default.&lt;br&gt;
However, every Docker container gets its own cgroup by default, which is what allows you to set resource limits.&lt;/p&gt;
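&lt;p&gt;As a minimal sketch (the image and values below are illustrative), Docker exposes these cgroup controls through docker run flags:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Cap the container at 256 MiB of memory and half a CPU core
docker run --memory=256m --cpus=0.5 nginx
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;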

&lt;p&gt;Cgroups (control groups) are a Linux kernel feature for managing and monitoring system resources like CPU, disk I/O, memory, and bandwidth usage.&lt;br&gt;
They provide: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Resource limiting: a group can be configured not to exceed a specified resource limit.&lt;/li&gt;
&lt;li&gt;Prioritization: one or more groups may be configured to utilize fewer or more CPUs or disk I/O throughput.&lt;/li&gt;
&lt;li&gt;Accounting: a group's resource usage is monitored and measured.&lt;/li&gt;
&lt;li&gt;Control: groups of processes can be frozen or stopped and restarted.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cgroups partition these resources into groups and then assign tasks to those groups.&lt;/p&gt;
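&lt;p&gt;You can see the cgroup that Docker creates for a container directly in the kernel's cgroup filesystem. The exact path depends on whether the host uses cgroup v1 or v2 and on the cgroup driver; on a typical cgroup v2 host with systemd it looks roughly like this (the container ID is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Shows the memory limit applied to the container's cgroup
cat /sys/fs/cgroup/system.slice/docker-&amp;lt;container-id&amp;gt;.scope/memory.max
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;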

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;Providing a stable product in production is a journey. In this brief article we learned the importance of limiting resource usage; the big challenges are to determine the resources a container requires and to configure containerized application runtimes (e.g. the JDK) to stick to the configured limits.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>system</category>
    </item>
  </channel>
</rss>
