<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Narendra Chauhan</title>
    <description>The latest articles on Forem by Narendra Chauhan (@narendra_chauhan).</description>
    <link>https://forem.com/narendra_chauhan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3380852%2F4c4037ad-6804-4408-9a91-038956542620.jpeg</url>
      <title>Forem: Narendra Chauhan</title>
      <link>https://forem.com/narendra_chauhan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/narendra_chauhan"/>
    <language>en</language>
    <item>
      <title>Nginx + PHP + MySQL Optimisations and Parameter Calculations</title>
      <dc:creator>Narendra Chauhan</dc:creator>
      <pubDate>Fri, 03 Apr 2026 06:38:00 +0000</pubDate>
      <link>https://forem.com/addwebsolutionpvtltd/nginx-php-mysql-optimisations-and-parameter-calculations-3min</link>
      <guid>https://forem.com/addwebsolutionpvtltd/nginx-php-mysql-optimisations-and-parameter-calculations-3min</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“Premature optimisation is the root of all evil, but lack of optimisation is the root of outages.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Table of Contents&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Architecture Overview&lt;/li&gt;
&lt;li&gt;Why Optimisation Is Required&lt;/li&gt;
&lt;li&gt;Nginx Optimisations &amp;amp; Parameter Calculations&lt;/li&gt;
&lt;li&gt;PHP-FPM Optimisations &amp;amp; Parameter Calculations&lt;/li&gt;
&lt;li&gt;MySQL Optimisations &amp;amp; Parameter Calculations&lt;/li&gt;
&lt;li&gt;System-Level Optimisations (Linux)&lt;/li&gt;
&lt;li&gt;Practical Example: Small vs Medium vs Large Server&lt;/li&gt;
&lt;li&gt;Interesting Facts &amp;amp; Statistics&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;1. Introduction&lt;/h2&gt;

&lt;p&gt;Modern web applications rely heavily on the Nginx + PHP + MySQL (LEMP) stack. While default configurations work for testing, they are not suitable for production traffic.&lt;/p&gt;

&lt;p&gt;Optimisation ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faster page load time&lt;/li&gt;
&lt;li&gt;Better concurrency handling&lt;/li&gt;
&lt;li&gt;Lower memory and CPU usage&lt;/li&gt;
&lt;li&gt;Higher stability under load&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This document explains what to optimise, why to optimise, and how to calculate parameters practically.&lt;/p&gt;

&lt;h2&gt;2. Architecture Overview&lt;/h2&gt;

&lt;p&gt;A typical request flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Client sends HTTP request&lt;/li&gt;
&lt;li&gt;Nginx handles connection &amp;amp; static content&lt;/li&gt;
&lt;li&gt;PHP-FPM processes dynamic PHP requests&lt;/li&gt;
&lt;li&gt;MySQL serves data from database&lt;/li&gt;
&lt;li&gt;Response sent back to client&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each layer must be tuned together, not individually.&lt;/p&gt;

&lt;h2&gt;3. Why Optimisation Is Required&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Default settings:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are conservative&lt;/li&gt;
&lt;li&gt;Waste available RAM&lt;/li&gt;
&lt;li&gt;Limit concurrency&lt;/li&gt;
&lt;li&gt;Cause slow response under load&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Common Problems Without Optimisation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;502 / 504 Gateway errors&lt;/li&gt;
&lt;li&gt;High CPU load&lt;/li&gt;
&lt;li&gt;PHP-FPM “server reached max_children”&lt;/li&gt;
&lt;li&gt;MySQL “Too many connections”&lt;/li&gt;
&lt;/ul&gt;
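&lt;p&gt;A quick way to confirm which of these symptoms a server is hitting is to count their signatures in the service logs. A minimal sketch — the log paths are typical Debian/Ubuntu defaults (PHP 8.4, as used later in this guide); adjust them to your layout:&lt;/p&gt;

```shell
# Count failure signatures in each service log (paths are assumptions)
for log in /var/log/nginx/error.log /var/log/php8.4-fpm.log /var/log/mysql/error.log; do
  if [ -f "$log" ]; then
    printf '%s: ' "$log"
    grep -c -e "upstream timed out" -e "max_children" -e "Too many connections" "$log" || true
  fi
done
```

&lt;p&gt;A steadily climbing count after a deploy is usually the first sign that one of the limits tuned below needs revisiting.&lt;/p&gt;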

&lt;h2&gt;4. Nginx Optimisations &amp;amp; Parameter Calculations&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Key Nginx Parameters&lt;/strong&gt;&lt;br&gt;
worker_processes auto;&lt;br&gt;
worker_connections 4096;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;worker_processes&lt;/strong&gt; sets the number of worker processes. Best practice: match the CPU core count, which you can check with &lt;code&gt;nproc&lt;/code&gt;.&lt;br&gt;
&lt;strong&gt;Example:&lt;/strong&gt; 4 CPU cores → worker_processes 4;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;worker_connections&lt;/strong&gt; sets the maximum number of simultaneous connections each worker can handle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Total Max Connections&lt;/strong&gt;&lt;br&gt;
worker_processes × worker_connections&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;4 × 4096 = 16,384 concurrent connections&lt;/p&gt;
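&lt;p&gt;The same ceiling can be computed directly in the shell (the core count here is an example — substitute the output of &lt;code&gt;nproc&lt;/code&gt;):&lt;/p&gt;

```shell
# Theoretical connection ceiling = worker_processes x worker_connections
cores=4      # example value; on a live host use: cores=$(nproc)
conns=4096
echo "max concurrent connections: $(( cores * conns ))"   # prints 16384
```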

&lt;p&gt;&lt;strong&gt;Recommended Extra Optimisations&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use epoll;
multi_accept on;

sendfile on;
tcp_nopush on;
tcp_nodelay on;

keepalive_timeout 65;
keepalive_requests 1000;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;5. PHP-FPM Optimisations &amp;amp; Parameter Calculations&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Key PHP-FPM Settings&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm = dynamic
pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How to Calculate pm.max_children&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Find the average PHP-FPM process memory:
&lt;code&gt;ps -ylC php-fpm --sort=rss&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Formula:
Available RAM for PHP / Avg PHP process size&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Available RAM: 2 GB&lt;/li&gt;
&lt;li&gt;Avg PHP process: 100 MB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2048 / 100 ≈ 20&lt;br&gt;
pm.max_children = 20&lt;/p&gt;
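&lt;p&gt;The same division, as a one-liner (figures from the example above):&lt;/p&gt;

```shell
avail_mb=2048   # RAM reserved for PHP-FPM
avg_mb=100      # average php-fpm worker RSS in MB
echo "pm.max_children = $(( avail_mb / avg_mb ))"   # prints 20
```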

&lt;p&gt;&lt;strong&gt;Other Important PHP Optimisations&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;request_terminate_timeout = 60
max_execution_time = 60
memory_limit = 256M
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Enable OPcache&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;opcache.enable=1
opcache.memory_consumption=256
opcache.max_accelerated_files=20000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;6. MySQL Optimisations &amp;amp; Parameter Calculations&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Key MySQL Parameters&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;innodb_buffer_pool_size = 2G
innodb_buffer_pool_instances = 2
max_connections = 200
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How to Calculate innodb_buffer_pool_size&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allocate 60–70% of total RAM (dedicated DB server)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Server RAM: 4 GB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;4 GB × 70% ≈ 2.8 GB&lt;br&gt;
Use 2G–3G, depending on what else runs on the server&lt;/p&gt;
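&lt;p&gt;The arithmetic, spelled out (values from the example above):&lt;/p&gt;

```shell
ram_mb=4096   # total server RAM in MB
pct=70        # dedicated-DB-server rule of thumb
echo "innodb_buffer_pool_size approx $(( ram_mb * pct / 100 )) MB"   # prints 2867
```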

&lt;p&gt;&lt;strong&gt;Connection Calculation&lt;/strong&gt;&lt;br&gt;
max_connections should cover every client that can open a connection: roughly the sum of pm.max_children across all app servers, plus headroom for admin and cron connections.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Additional Recommended Settings&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query_cache_type = 0        # MySQL 5.7 and earlier only (removed in 8.0)
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;7. System-Level Optimisations (Linux)&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;File Descriptors&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ulimit -n 100000&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kernel Tuning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;net.core.somaxconn = 65535&lt;br&gt;
net.ipv4.tcp_max_syn_backlog = 65535&lt;br&gt;
vm.swappiness = 10&lt;/p&gt;
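&lt;p&gt;These sysctls are lost on reboot unless persisted. A sketch using the conventional &lt;code&gt;/etc/sysctl.d&lt;/code&gt; drop-in location (the file name is arbitrary; writing there needs root, so a local demo path is shown):&lt;/p&gt;

```shell
# Build a sysctl drop-in file with the settings above.
# In production write to /etc/sysctl.d/99-lemp-tuning.conf (as root).
conf=./99-lemp-tuning.conf
printf '%s\n' \
  'net.core.somaxconn = 65535' \
  'net.ipv4.tcp_max_syn_backlog = 65535' \
  'vm.swappiness = 10' | tee "$conf"
# Then load it: sudo sysctl --system
```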
&lt;h2&gt;8. Practical Server Size Examples&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Small Server (2 CPU / 2 GB RAM)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nginx workers: 2&lt;/li&gt;
&lt;li&gt;worker_connections: 2048&lt;/li&gt;
&lt;li&gt;PHP max_children: 10&lt;/li&gt;
&lt;li&gt;MySQL buffer pool: 1G&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Medium Server (4 CPU / 8 GB RAM)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nginx workers: 4&lt;/li&gt;
&lt;li&gt;worker_connections: 4096&lt;/li&gt;
&lt;li&gt;PHP max_children: 30–40&lt;/li&gt;
&lt;li&gt;MySQL buffer pool: 3–4G (shared server — leave room for PHP-FPM)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Large Server (8 CPU / 16 GB RAM)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nginx workers: 8&lt;/li&gt;
&lt;li&gt;worker_connections: 8192&lt;/li&gt;
&lt;li&gt;PHP max_children: 60–80&lt;/li&gt;
&lt;li&gt;MySQL buffer pool: 8–10G (shared server — leave room for PHP-FPM)&lt;/li&gt;
&lt;/ul&gt;
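&lt;p&gt;The three profiles above follow roughly the same proportions, so they can be sketched as a sizing helper. The 100 MB average worker size and the ~40% shared-server buffer-pool share are assumptions for illustration — measure your own workload first:&lt;/p&gt;

```shell
# Rough LEMP sizing helper (rules of thumb only - all ratios are assumptions)
cores=4; ram_mb=8192                          # medium server example
echo "nginx worker_processes: $cores"
echo "php pm.max_children:    $(( ram_mb / 2 / 100 ))"     # half of RAM / 100 MB per worker
echo "mysql buffer pool:      $(( ram_mb * 40 / 100 )) MB" # ~40% on a shared box
```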

&lt;p&gt;&lt;strong&gt;Practical Demonstration (Images Explained)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1. NGINX OPTIMISATION&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parameter Calculations&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Step 1.1  Backup current nginx.conf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  sudo cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 1.2  Check current config&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  cat /etc/nginx/nginx.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Screenshot: Current state before changes&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ulx50hm6zu2uul76f91.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ulx50hm6zu2uul76f91.png" alt=" " width="800" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1.3  Edit nginx.conf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/nginx/nginx.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Replace/update with this optimized config:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user www-data;
  worker_processes 2;                    # = nproc (2 cores)
  worker_rlimit_nofile 65535;
  pid /run/nginx.pid;
  include /etc/nginx/modules-enabled/*.conf;

  events {
      worker_connections 1024;           # 2 x 1024 = 2048 total connections
      use epoll;                         # Linux best event model
      multi_accept on;                   # Accept multiple connections at once
  }

  http {

      # Basic Settings
      sendfile on;
      tcp_nopush on;
      tcp_nodelay on;
      keepalive_timeout 30;              # Reduced from default 75s
      keepalive_requests 100;
      types_hash_max_size 2048;
      server_tokens off;                 # Hide nginx version

      client_max_body_size 20m;
      client_body_buffer_size 128k;
      client_header_buffer_size 1k;
      large_client_header_buffers 4 8k;
      client_body_timeout 12;
      client_header_timeout 12;
      send_timeout 10;

      include /etc/nginx/mime.types;
      default_type application/octet-stream;

      # Logging Settings
      access_log /var/log/nginx/access.log;
      error_log /var/log/nginx/error.log warn;   # Only warn+ to reduce I/O

      # Gzip Settings
      gzip on;
      gzip_vary on;
      gzip_proxied any;
      gzip_comp_level 3;                 # Level 3 = good ratio, low CPU
      gzip_min_length 1024;              # Don't compress tiny files
      gzip_buffers 16 8k;
      gzip_http_version 1.1;
      gzip_types
          text/plain
          text/css
          text/javascript
          application/javascript
          application/json
          application/xml
          image/svg+xml
          font/woff2;


      # Open File Cache
      open_file_cache max=1000 inactive=20s;
      open_file_cache_valid 30s;
      open_file_cache_min_uses 2;
      open_file_cache_errors on;

      # FastCGI Cache (optional - enable per site)
      fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=PHPCACHE:10m
                         max_size=100m inactive=60m use_temp_path=off;

      include /etc/nginx/conf.d/*.conf;
      include /etc/nginx/sites-enabled/*;
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 1.4  Create FastCGI cache directory&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; sudo mkdir -p /var/cache/nginx
  sudo chown www-data:www-data /var/cache/nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 1.5  Test and reload Nginx&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nginx -t
  sudo systemctl reload nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Screenshot: nginx -t showing syntax is ok and test is successful&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9z8ztz2au7j9h1wf5s6n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9z8ztz2au7j9h1wf5s6n.png" alt=" " width="800" height="126"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1.6  Verify Nginx is running with new settings&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nginx -T | grep -E "worker_processes|worker_connections|gzip|keepalive_timeout"
systemctl status nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Screenshot: Running status + key parameters&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhf2by2wlnnfzaj9mau0a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhf2by2wlnnfzaj9mau0a.png" alt=" " width="800" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2. PHP-FPM OPTIMISATION&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parameter Calculations&lt;/strong&gt;&lt;br&gt;
Available RAM for PHP-FPM: ~150MB (conservative, leaving room for MySQL + Nginx)&lt;br&gt;
  Average PHP-FPM process size: ~30-40MB&lt;br&gt;
  Formula: pm.max_children = 150 / 35 ≈ 4&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PHP-FPM Parameters Calculation&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;pm&lt;/strong&gt;&lt;br&gt;
 Formula: Dynamic (best for variable traffic)&lt;br&gt;
 Value: dynamic&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pm.max_children&lt;/strong&gt;&lt;br&gt;
 Formula: 150MB ÷ 35MB/process&lt;br&gt;
 Value: 4&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pm.start_servers&lt;/strong&gt;&lt;br&gt;
 Formula: pm.max_children / 2&lt;br&gt;
 Value: 2&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pm.min_spare_servers&lt;/strong&gt;&lt;br&gt;
 Formula: pm.start_servers / 2&lt;br&gt;
 Value: 1&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pm.max_spare_servers&lt;/strong&gt;&lt;br&gt;
 Formula: pm.start_servers&lt;br&gt;
 Value: 2&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pm.max_requests&lt;/strong&gt;&lt;br&gt;
 Formula: Prevent memory leaks&lt;br&gt;
 Value: 500&lt;/p&gt;
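&lt;p&gt;The whole pm.* chain above follows from the single memory budget, so it can be derived in one pass:&lt;/p&gt;

```shell
# Derive the pm.* chain from the memory budget (figures from above)
avail_mb=150; avg_mb=35
max_children=$(( avail_mb / avg_mb ))        # 150 / 35 = 4
start_servers=$(( max_children / 2 ))        # 2
min_spare=$(( start_servers / 2 ))           # 1
max_spare=$start_servers                     # 2
printf 'pm.max_children = %s\npm.start_servers = %s\n' "$max_children" "$start_servers"
printf 'pm.min_spare_servers = %s\npm.max_spare_servers = %s\n' "$min_spare" "$max_spare"
```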

&lt;p&gt;&lt;strong&gt;Step 2.1  Check PHP-FPM version path&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  php -v
  ls /etc/php/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2.2  Backup pool config&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cp /etc/php/8.4/fpm/pool.d/www.conf /etc/php/8.4/fpm/pool.d/www.conf.bak
sudo cp /etc/php/8.4/fpm/php.ini /etc/php/8.4/fpm/php.ini.bak
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2.3  Check current PHP process memory usage&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run this to see actual PHP-FPM process sizes:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ps aux | grep php-fpm | grep -v grep | awk '{sum += $6} END {print "Total RSS:", sum/1024, "MB"; print "Count:", NR; print "Avg per process:", sum/NR/1024, "MB"}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Screenshot:&lt;/strong&gt; Current process sizes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54on5le82nhiwpw62eyt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54on5le82nhiwpw62eyt.png" alt=" " width="800" height="70"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2.4  Edit PHP-FPM pool config&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/php/8.4/fpm/pool.d/www.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Find and update these values (search with Ctrl+W in nano):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[www]
user = www-data
group = www-data

listen = /run/php/php8.4-fpm.sock
listen.owner = www-data
listen.group = www-data

pm = dynamic
pm.max_children = 4
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 2
pm.max_requests = 500     ;Restart workers after 500 requests (prevents memory leaks)
pm.process_idle_timeout = 10s

pm.status_path = /status  ; Enable FPM status page
ping.path = /ping

slowlog = /var/log/php8.4-fpm-slow.log
request_slowlog_timeout = 5s     ; Log requests taking &amp;gt; 5 seconds
security.limit_extensions = .php
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2.5  Tune PHP OPcache&lt;/strong&gt;&lt;br&gt;
  &lt;strong&gt;sudo nano /etc/php/8.4/mods-available/opcache.ini&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zend_extension=opcache

  ; Enable OPcache
  opcache.enable=1
  opcache.enable_cli=0

  ; Memory: 64MB for low-RAM server
  opcache.memory_consumption=64
  opcache.interned_strings_buffer=8
  opcache.max_accelerated_files=10000

  ; Production settings (set validate_timestamps=0 in prod)
  opcache.validate_timestamps=1
  opcache.revalidate_freq=60

  opcache.save_comments=1
  opcache.max_wasted_percentage=10
  opcache.use_cwd=1

  ; JIT (PHP 8.x feature)
  opcache.jit_buffer_size=32M
  opcache.jit=1255
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2.6  Tune PHP.ini key values&lt;/strong&gt;&lt;br&gt;
  &lt;strong&gt;sudo nano /etc/php/8.4/fpm/php.ini&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;; Find and update these values:
  ; Memory limit per PHP process
  memory_limit = 128M

  ; Upload/POST limits
  upload_max_filesize = 20M
  post_max_size = 25M
  max_execution_time = 60
  max_input_time = 60

  ; Error handling (production)
  display_errors = Off
  log_errors = On
  error_log = /var/log/php_errors.log

  ; Session handling
  session.gc_maxlifetime = 1440
  session.cookie_httponly = 1
  session.cookie_secure = 1

  ; Disable dangerous functions
  disable_functions = exec,passthru,shell_exec,system,proc_open,popen
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2.7  Restart PHP-FPM and verify&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo php-fpm8.4 -t
  sudo systemctl restart php8.4-fpm
  sudo systemctl status php8.4-fpm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Screenshot: PHP-FPM status showing active (running)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qt80y78sp1pvjj0uc6q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qt80y78sp1pvjj0uc6q.png" alt=" " width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2.8  Verify OPcache is active&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;php -r "var_dump(opcache_get_status());" | head -30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Or check via the CLI:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;php -i | grep -E "opcache|OPcache"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Screenshot: OPcache enabled status&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xbp914o9gh6ucb37o0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xbp914o9gh6ucb37o0s.png" alt=" " width="800" height="727"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2.9  Check PHP-FPM processes after restart&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  ps aux | grep php-fpm | grep -v grep
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;This should show one master plus 2 worker processes (pm.start_servers = 2).&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Screenshot:&lt;/strong&gt; FPM process list&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bpg2ffx4sviu3jjhciz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bpg2ffx4sviu3jjhciz.png" alt=" " width="800" height="58"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3.  MYSQL OPTIMISATION&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parameter Calculations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;RAM budget for MySQL: ~200MB (out of 914MB total)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;innodb_buffer_pool_size&lt;/strong&gt;&lt;br&gt;
Formula: ~20% of RAM (shared server)&lt;br&gt;
Value: 192M&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;innodb_buffer_pool_instances&lt;/strong&gt;&lt;br&gt;
Formula: buffer_pool ÷ 128M&lt;br&gt;
Value: 1&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;max_connections&lt;/strong&gt;&lt;br&gt;
Formula: Low RAM = conservative&lt;br&gt;
Value: 50&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;innodb_log_file_size&lt;/strong&gt;&lt;br&gt;
Formula: 25% of buffer pool&lt;br&gt;
Value: 48M&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;tmp_table_size&lt;/strong&gt;&lt;br&gt;
Formula: Memory temporary tables&lt;br&gt;
Value: 16M&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;max_heap_table_size&lt;/strong&gt;&lt;br&gt;
 Formula: Same as tmp_table_size&lt;br&gt;
 Value: 16M&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;thread_cache_size&lt;/strong&gt;&lt;br&gt;
 Formula: Reuse threads&lt;br&gt;
 Value: 8&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;table_open_cache&lt;/strong&gt;&lt;br&gt;
 Formula: Open tables cache&lt;br&gt;
 Value: 400&lt;/p&gt;
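&lt;p&gt;A quick cross-check of the derived values (figures from above):&lt;/p&gt;

```shell
pool_mb=192
echo "innodb_log_file_size = $(( pool_mb * 25 / 100 ))M"    # 25% of pool = 48M
echo "innodb_buffer_pool_instances = $(( pool_mb / 128 ))"  # one per 128M chunk = 1
```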

&lt;p&gt;&lt;strong&gt;Step 3.1  Back up and check the current MySQL config&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cp /etc/mysql/mysql.conf.d/mysqld.cnf /etc/mysql/mysql.conf.d/mysqld.cnf.bak
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Check current variables:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool%';"
  mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections';"
  mysql -u root -p -e "SHOW STATUS LIKE 'Threads_connected';"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Screenshot:&lt;/strong&gt; Current MySQL variables&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmox0cz0iyp8k74nux7u3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmox0cz0iyp8k74nux7u3.png" alt=" " width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3.2  Check actual MySQL memory usage&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -u root -p -e "
  SELECT
    ROUND(@@innodb_buffer_pool_size/1024/1024, 0) AS 'Buffer Pool MB',
    ROUND(@@key_buffer_size/1024/1024, 0) AS 'Key Buffer MB',
    @@max_connections AS 'Max Connections',
    @@thread_stack/1024 AS 'Thread Stack KB';
  "
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Screenshot:&lt;/strong&gt; Current MySQL memory config&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3l1e1yqi9eaht5w3qpx8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3l1e1yqi9eaht5w3qpx8.png" alt=" " width="758" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3.3  Edit MySQL config&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Add/update under [mysqld]:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[mysqld]
  pid-file        = /var/run/mysqld/mysqld.pid
  socket          = /var/run/mysqld/mysqld.sock
  datadir         = /var/lib/mysql
  log-error       = /var/log/mysql/error.log

  #
  # ===== MEMORY SETTINGS =====
  # Server has 914MB RAM - allocate ~200MB for MySQL
  #
  innodb_buffer_pool_size         = 192M    # Main InnoDB cache (most important!)
  innodb_buffer_pool_instances    = 1       # 1 instance (&amp;lt; 1GB pool)
  innodb_log_file_size            = 48M     # ~25% of buffer pool
  innodb_log_buffer_size          = 8M
  innodb_flush_log_at_trx_commit  = 2       # Slight risk, big perf gain (use 1 for strict ACID)

  #
  # ===== CONNECTION SETTINGS =====
  #
  max_connections         = 50             # Low RAM = keep connections limited
  thread_cache_size       = 8             # Reuse threads, avoid creation overhead
  wait_timeout            = 120           # Kill idle connections after 2 min
  interactive_timeout     = 120

  #
  # ===== QUERY CACHE (removed in MySQL 8.0, skip) =====
  # MySQL 8.0 removed query_cache - use ProxySQL or app-level cache

  #
  # ===== TABLE CACHE =====
  #
  table_open_cache        = 400
  table_definition_cache  = 400

  #
  # ===== TEMP TABLES =====
  #
  tmp_table_size          = 16M
  max_heap_table_size     = 16M

  #
  # ===== InnoDB I/O =====
  #
  innodb_file_per_table           = 1
  innodb_flush_method             = O_DIRECT  # Avoid double buffering with OS cache
  innodb_read_io_threads          = 2         # = CPU cores
  innodb_write_io_threads         = 2         # = CPU cores

  #
  # ===== SLOW QUERY LOG =====
  #
  slow_query_log          = 1
  slow_query_log_file     = /var/log/mysql/slow.log
  long_query_time         = 2             # Log queries &amp;gt; 2 seconds
  log_queries_not_using_indexes = 1

  #
  # ===== BINARY LOG (disable if not using replication) =====
  #
  # skip-log-bin                           # Uncomment if no replication needed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3.4  Validate and restart MySQL&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mysqld --validate-config
  sudo systemctl restart mysql
  sudo systemctl status mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Screenshot:&lt;/strong&gt; MySQL status active&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7v681msbropem0xbz3a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7v681msbropem0xbz3a.png" alt=" " width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3.5  Verify new MySQL variables&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -u root -p -e "
  SELECT 'innodb_buffer_pool_size' AS Variable,
         ROUND(@@innodb_buffer_pool_size/1024/1024,0) AS 'Value (MB)'
  UNION SELECT 'max_connections', @@max_connections
  UNION SELECT 'innodb_log_file_size MB', ROUND(@@innodb_log_file_size/1024/1024,0)
  UNION SELECT 'tmp_table_size MB', ROUND(@@tmp_table_size/1024/1024,0)
  UNION SELECT 'thread_cache_size', @@thread_cache_size;
  "
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Screenshot:&lt;/strong&gt; New values confirmed&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fux8rtbud97okrc2amssb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fux8rtbud97okrc2amssb.png" alt=" " width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3.6  Check MySQL memory usage post-restart&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  free -h
  ps aux | sort -k6 -rn | head -10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Screenshot:&lt;/strong&gt; Overall memory after all services tuned&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fxt48wa7i5j3vmc5bfg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fxt48wa7i5j3vmc5bfg.png" alt=" " width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Measure first, tune second, and monitor always.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Step 4. FINAL VERIFICATION&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4.1  Check all services running&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  systemctl status nginx php8.4-fpm mysql --no-pager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4.2  Full memory picture&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;free -h
  ps aux --sort=-%mem | head -15
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4.3  Check nginx + PHP working together&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Test nginx config is still valid
  sudo nginx -t

  # Test PHP-FPM socket exists
  ls -la /run/php/php8.4-fpm.sock

  # Check FPM status (if you added a status page to your site config)
  curl http://localhost/status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
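The `/status` check above only returns data if the FPM pool actually exposes it and nginx forwards the request. An illustrative nginx location for this (it assumes `pm.status_path = /status` in the FPM pool config and the socket path used throughout this article):

```nginx
# Illustrative nginx location for the PHP-FPM status page.
# Assumes pm.status_path = /status is set in the FPM pool config.
location = /status {
    allow 127.0.0.1;       # restrict to local checks
    deny all;
    include fastcgi_params;
    fastcgi_param SCRIPT_NAME /status;
    fastcgi_param SCRIPT_FILENAME /status;
    fastcgi_pass unix:/run/php/php8.4-fpm.sock;
}
```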



&lt;p&gt;&lt;strong&gt;Step 4.4  Check MySQL slow log is active&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -u root -p -e "SHOW VARIABLES LIKE 'slow_query%';"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
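If the query above shows the slow log is OFF, it can be enabled in the MySQL configuration. An illustrative fragment (the file path and threshold are examples to adapt):

```ini
# Illustrative [mysqld] settings, e.g. in /etc/mysql/mysql.conf.d/mysqld.cnf
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 1   # seconds; tune to your latency budget
```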



&lt;p&gt;&lt;strong&gt;Step 4.5  Final health summary&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "=== NGINX ===" &amp;amp;&amp;amp; nginx -v &amp;amp;&amp;amp; systemctl is-active nginx
  echo "=== PHP-FPM ===" &amp;amp;&amp;amp; php -v | head -1 &amp;amp;&amp;amp; systemctl is-active php8.4-fpm
  echo "=== MYSQL ===" &amp;amp;&amp;amp; mysql --version &amp;amp;&amp;amp; systemctl is-active mysql
  echo "=== MEMORY ===" &amp;amp;&amp;amp; free -h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Screenshot:&lt;/strong&gt; Final health check&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrpebst3v962c3sz682m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrpebst3v962c3sz682m.png" alt=" " width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your server is fully optimised. All phases verified:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Phase 0  Swap 2GB active&lt;/li&gt;
&lt;li&gt;Phase 1  Nginx tuned&lt;/li&gt;
&lt;li&gt;Phase 2  PHP-FPM + OPcache + JIT enabled&lt;/li&gt;
&lt;li&gt;Phase 3  MySQL InnoDB / connections / slow query log active&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  9. Interesting Facts &amp;amp; Statistics
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A 1-second page-load delay can reduce conversions by roughly 7%&lt;/li&gt;
&lt;li&gt;OPcache can improve PHP performance by 2–3×&lt;/li&gt;
&lt;li&gt;MySQL buffer pool cache hit ratio above 99% is ideal&lt;/li&gt;
&lt;li&gt;Nginx's event-driven model can handle roughly 10× more concurrent connections than traditional process-per-connection web servers&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  10. FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q1. Should I optimise Nginx, PHP, or MySQL first?&lt;/strong&gt;&lt;br&gt;
Start with PHP-FPM and MySQL, then tune Nginx.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q2. Can wrong tuning crash the server?&lt;/strong&gt;&lt;br&gt;
Yes. Over-allocating RAM causes OOM kills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q3. Are these values fixed forever?&lt;/strong&gt;&lt;br&gt;
No. Recalculate after traffic growth.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q4. Do I need load testing?&lt;/strong&gt;&lt;br&gt;
Yes. Use tools like ab, wrk, or k6.&lt;/p&gt;

&lt;h2&gt;
  
  
  11. Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Optimisation is calculation-based, not guesswork&lt;/li&gt;
&lt;li&gt;PHP-FPM memory calculation is critical&lt;/li&gt;
&lt;li&gt;MySQL buffer pool has the biggest performance impact&lt;/li&gt;
&lt;li&gt;Nginx handles concurrency, not application logic&lt;/li&gt;
&lt;li&gt;Monitoring is mandatory after tuning&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  12. Conclusion
&lt;/h2&gt;

&lt;p&gt;Optimising Nginx + PHP + MySQL is not about copying configs from the internet—it is about understanding server resources, calculating limits, and balancing load across layers.&lt;br&gt;
&lt;strong&gt;A well-optimised stack:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handles higher traffic&lt;/li&gt;
&lt;li&gt;Reduces downtime&lt;/li&gt;
&lt;li&gt;Improves user experience&lt;/li&gt;
&lt;li&gt;Saves infrastructure cost&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;About the Author: &lt;em&gt;Narendra is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;, specializing in automating infrastructure to improve efficiency and reliability.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>serveroptimization</category>
      <category>lempstack</category>
      <category>webperf</category>
      <category>nginxphpmysql</category>
    </item>
    <item>
      <title>Prometheus and Grafana Monitoring for a Node.js API</title>
      <dc:creator>Narendra Chauhan</dc:creator>
      <pubDate>Thu, 05 Mar 2026 11:26:12 +0000</pubDate>
      <link>https://forem.com/addwebsolutionpvtltd/prometheus-and-grafana-monitoring-for-a-nodejs-api-1bn</link>
      <guid>https://forem.com/addwebsolutionpvtltd/prometheus-and-grafana-monitoring-for-a-nodejs-api-1bn</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“Monitoring is not about collecting data, it's about gaining visibility into what truly matters.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Architecture Overview&lt;/li&gt;
&lt;li&gt;Step 1: Create a Sample Node.js API&lt;/li&gt;
&lt;li&gt;Step 2: Dockerize the Node.js App&lt;/li&gt;
&lt;li&gt;Step 3: Configure Prometheus&lt;/li&gt;
&lt;li&gt;Step 4: Run Prometheus &amp;amp; Grafana with Docker&lt;/li&gt;
&lt;li&gt;Step 5: Configure Grafana Dashboard&lt;/li&gt;
&lt;li&gt;Step 6: Verify Metrics &amp;amp; Simulate Load&lt;/li&gt;
&lt;li&gt;Interesting Facts &amp;amp; Statistics&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;p&gt;Modern applications rely heavily on APIs, and Node.js APIs are widely used due to their high performance and scalability. However, without proper monitoring, even small issues - such as high latency or memory leaks - can quickly escalate into serious production outages.&lt;/p&gt;

&lt;p&gt;In this Proof of Concept (POC), we demonstrate how to monitor a Node.js API using Prometheus for metrics collection and Grafana for visualization and alerting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Goals of this POC&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitor API performance in real time&lt;/li&gt;
&lt;li&gt;Identify latency and error issues&lt;/li&gt;
&lt;li&gt;Visualize metrics using dashboards&lt;/li&gt;
&lt;li&gt;Prepare a production-ready observability foundation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2. Prerequisites
&lt;/h2&gt;

&lt;p&gt;You should have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Linux or macOS system&lt;/li&gt;
&lt;li&gt;Docker &amp;amp; Docker Compose installed&lt;/li&gt;
&lt;li&gt;Basic knowledge of Node.js&lt;/li&gt;
&lt;li&gt;Required open ports: 3000 (API), 9090 (Prometheus), 3003 (Grafana, as mapped in docker-compose.yml below)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  3. Architecture Overview
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Architecture Flow&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node.js API exposes a /metrics endpoint&lt;/li&gt;
&lt;li&gt;Prometheus scrapes metrics every 15 seconds (the scrape_interval set in prometheus.yml)&lt;/li&gt;
&lt;li&gt;Grafana visualizes metrics from Prometheus&lt;/li&gt;
&lt;li&gt;Alerts trigger when defined thresholds are exceeded&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  4. Step 1: Create a Sample Node.js API
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Create project directory&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir nodejs-monitoring-poc
cd nodejs-monitoring-poc
mkdir app
cd app
npm init -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Install dependencies&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install express prom-client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Create index.js&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express');
const client = require('prom-client');
const app = express();
const PORT = 3000;

// Collect default system metrics
client.collectDefaultMetrics({ timeout: 5000 });
// Custom HTTP request counter
const httpRequestCounter = new client.Counter({
 name: 'http_requests_total',
 help: 'Total number of HTTP requests',
 labelNames: ['method', 'route', 'status']
});

// Middleware to track requests
app.use((req, res, next) =&amp;gt; {
 res.on('finish', () =&amp;gt; {
   httpRequestCounter.labels(req.method, req.path, res.statusCode).inc();
 });
 next();
});

// Sample API endpoint
app.get('/api/hello', (req, res) =&amp;gt; {
 res.json({ message: 'Hello from Node.js API' });
});

// Metrics endpoint
app.get('/metrics', async (req, res) =&amp;gt; {
 res.set('Content-Type', client.register.contentType);
 res.end(await client.register.metrics());
});

app.listen(PORT, () =&amp;gt; {
 console.log(`Node.js app running on port ${PORT}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Run locally&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node index.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Test&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://localhost:3000/api/hello
curl http://localhost:3000/metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  5. Step 2: Dockerize the Node.js App
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Create a Dockerfile inside the app/ directory:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  6. Step 3: Configure Prometheus
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Create Prometheus directory&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ..
mkdir prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create prometheus.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'docker'
    static_configs:
      - targets: ['grafana:3000', 'prometheus:9090']
      - targets: ['poc-addweb-app.addwebprojects.com']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
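If you prefer a dedicated scrape job for the API rather than folding it into the generic "docker" job, a hypothetical fragment looks like this (the app:3000 target assumes the Node.js container is reachable on the same Docker network under the service name "app"):

```yaml
scrape_configs:
  - job_name: 'nodejs-api'
    metrics_path: /metrics
    static_configs:
      - targets: ['app:3000']   # hypothetical service name and port
```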



&lt;h2&gt;
  
  
  7. Step 4: Run Prometheus &amp;amp; Grafana with Docker Compose
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Create docker-compose.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    ports:
      - "3003:3000"
    depends_on:
      - prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Start the stack&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Verify services&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node.js API → &lt;a href="https://poc-addweb-app.addwebprojects.com" rel="noopener noreferrer"&gt;https://poc-addweb-app.addwebprojects.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Prometheus → &lt;a href="https://promotheus.65.1.109.206.nip.io" rel="noopener noreferrer"&gt;https://promotheus.65.1.109.206.nip.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Grafana → &lt;a href="https://poc-grafana.addwebprojects.com" rel="noopener noreferrer"&gt;https://poc-grafana.addwebprojects.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Default Grafana credentials&lt;/strong&gt;&lt;br&gt;
Username: admin&lt;br&gt;
Password: admin&lt;/p&gt;
&lt;h2&gt;
  
  
  8. Step 5: Configure Grafana Dashboard
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Add Prometheus Data Source&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Grafana → Settings → Data Sources&lt;/li&gt;
&lt;li&gt;Select Prometheus&lt;/li&gt;
&lt;li&gt;URL: &lt;a href="https://promotheus.65.1.109.206.nip.io" rel="noopener noreferrer"&gt;https://promotheus.65.1.109.206.nip.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Click Save &amp;amp; Test&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Create Dashboard Panels (PromQL)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Total Requests&lt;/strong&gt;&lt;br&gt;
sum(node_app_total_requests)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Uptime (seconds)&lt;/strong&gt;&lt;br&gt;
rate(node_app_uptime_seconds[5m])&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory Usage&lt;/strong&gt;&lt;br&gt;
process_resident_memory_bytes&lt;br&gt;
grafana_process_resident_memory_bytes&lt;/p&gt;
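Two more panel queries follow directly from the http_requests_total counter registered in index.js (the status label is the one set by the middleware; treat these as a sketch to adapt):

```promql
# Requests per second over the last minute
rate(http_requests_total[1m])

# Share of 5xx responses over five minutes
sum(rate(http_requests_total{status=~"5.."}[5m]))
  / sum(rate(http_requests_total[5m]))
```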

&lt;blockquote&gt;
&lt;p&gt;“In production, everything fails eventually. Monitoring tells you when and why.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  9. Step 6: Verify Metrics &amp;amp; Simulate Load
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Generate traffic&lt;/strong&gt;&lt;br&gt;
sudo apt install wrk -y&lt;br&gt;
wrk -t8 -c500 -d120s &lt;a href="https://poc-addweb-app.addwebprojects.com/" rel="noopener noreferrer"&gt;https://poc-addweb-app.addwebprojects.com/&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Verify&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus → Status → Targets → UP&lt;/li&gt;
&lt;li&gt;Grafana dashboard shows increasing metrics&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Practical Demonstration (Images Explained)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Verify Prometheus Targets Are Healthy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Navigate to the Prometheus Targets page at &lt;a href="https://promotheus.65.1.109.206.nip.io/targets" rel="noopener noreferrer"&gt;https://promotheus.65.1.109.206.nip.io/targets&lt;/a&gt; to confirm all scrape targets are reachable. The "State" column must show "UP" for every endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb19ilrlhzxdpem0bvv0g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb19ilrlhzxdpem0bvv0g.png" alt=" " width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 1: Prometheus Target Health page displaying all three endpoints under the "docker" job (poc-addweb-app, grafana:3000, prometheus:9090) with State = UP and scrape latency under 30 ms. This confirms that Prometheus is successfully collecting metrics from the Node.js application, its own internal metrics, and Grafana.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Access the Grafana Dashboard&lt;/strong&gt;&lt;br&gt;
Open the Grafana web interface at &lt;a href="https://poc-grafana.addwebprojects.com" rel="noopener noreferrer"&gt;https://poc-grafana.addwebprojects.com&lt;/a&gt;. The default login credentials are:&lt;br&gt;
    Username: admin&lt;br&gt;
    Password: admin&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpyzz29u8rd8kmziaikf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpyzz29u8rd8kmziaikf.png" alt=" " width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figure 2:&lt;/strong&gt; Grafana v12.3.3 login page served at poc-grafana.addwebprojects.com. After authentication, you are redirected to the Home dashboard where data sources and panels can be configured.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Check Service Availability&lt;/strong&gt;&lt;br&gt;
Open the "Node-Application" dashboard in Grafana. The "Up Status" panel uses the PromQL query "up" to display a binary availability indicator (1 = reachable, 0 = unreachable) for each monitored target.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2sjfovkviv7fw8n3x444.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2sjfovkviv7fw8n3x444.png" alt=" " width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figure 3:&lt;/strong&gt; Grafana "Up Status" panel showing all three services (Grafana at grafana:3000, Node.js app at poc-addweb-app.addwebprojects.com, and Prometheus at prometheus:9090) maintaining a constant value of 1 over the last 5 minutes, confirming 100% availability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Observe Baseline Request Volume (Before Load Test)&lt;/strong&gt;&lt;br&gt;
Before generating any synthetic traffic, examine the "node_app_total_requests" panel to establish a baseline. Under normal operating conditions, the request counter should show a slow, steady increase driven only by Prometheus scrape requests and occasional organic traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9s43t6n2plng2nl5syny.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9s43t6n2plng2nl5syny.png" alt=" " width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figure 4:&lt;/strong&gt; Grafana "node_app_total_requests" panel displaying the baseline metric. The counter shows approximately 333,454 to 333,476 total requests over a 5-minute window, with a gentle linear slope. This confirms the application is healthy and processing only background scrape traffic at this point.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Generate Synthetic Load with wrk&lt;/strong&gt;&lt;br&gt;
To simulate production-level traffic, use the wrk HTTP benchmarking tool to generate a sustained burst of concurrent requests against the Node.js API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    wrk -t8 -c500 -d120s https://poc-addweb-app.addwebprojects.com/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command spawns 8 threads maintaining 500 concurrent connections for 120 seconds, producing significant load on the application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dgrtx40amu0mzzve2an.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dgrtx40amu0mzzve2an.png" alt=" " width="800" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figure 5:&lt;/strong&gt; Terminal output of the wrk load test. Results show 203,609 total requests completed in 2 minutes at an average rate of ~1,696 requests/second. Average latency was 291.55 ms with a maximum of 956.07 ms. The 74.69 MB data transfer confirms sustained throughput throughout the test duration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Analyze Post-Load-Test Metrics in Grafana&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After the load test completes, return to the Grafana dashboard to observe the impact. The metrics should show a dramatic spike correlating with the load test window, demonstrating that the monitoring pipeline correctly captured the traffic surge in real time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1sofmb1k7aofc5nx1z8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1sofmb1k7aofc5nx1z8.png" alt=" " width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figure 6:&lt;/strong&gt; Complete Grafana "Node-Application" dashboard captured during the load test. Key observations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Up Status: All three services remained UP throughout the test (no downtime).&lt;/li&gt;
&lt;li&gt;node_app_uptime_seconds: Uptime counter continued incrementing normally (14,200 seconds).&lt;/li&gt;
&lt;li&gt;node_app_total_requests: Steep spike from 335,000 to 550,000 during the 2-minute load window.&lt;/li&gt;
&lt;li&gt;grafana_alerting_request_duration_seconds_bucket: Alert evaluation latency remained stable.&lt;/li&gt;
&lt;li&gt;prometheus_sd_inode_failures_total: No service discovery failures detected (flat at 0).&lt;/li&gt;
&lt;li&gt;process_resident_memory_bytes: Memory usage remained stable for both Grafana (80 MB) and Prometheus (140 MB), indicating no memory leaks under load.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A closer look at the "node_app_total_requests" panel reveals the exact moment the load test began and the subsequent plateau once it completed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazszc2o6s32vpnevzly0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazszc2o6s32vpnevzly0.png" alt=" " width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figure 7:&lt;/strong&gt; Zoomed view of "node_app_total_requests" showing the dramatic increase from 335,000 to 540,000 during the wrk load test (approximately 17:42:30 to 17:43:30). After the test concluded, the counter flattened back to its baseline growth rate, confirming that the spike was entirely caused by the synthetic load. This panel validates that the custom prom-client Counter metric accurately tracks every HTTP request processed by the Node.js application.&lt;/p&gt;

&lt;p&gt;Summary: The practical demonstration confirms that the Prometheus + Grafana monitoring stack is fully operational. All scrape targets are healthy, baseline metrics are collected correctly, and the system accurately reflects real-time traffic changes under load. The dashboards provide immediate visibility into application performance, making this setup suitable for production monitoring scenarios.&lt;/p&gt;

&lt;h2&gt;
  
  
  10. Interesting Facts &amp;amp; Statistics
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Effective monitoring can reduce MTTR (mean time to recovery) by an estimated 50–60%&lt;/li&gt;
&lt;li&gt;APIs without metrics often fail silently&lt;/li&gt;
&lt;li&gt;Prometheus is used by thousands of production systems worldwide&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  11. FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q: Can I use this in production?&lt;/strong&gt;&lt;br&gt;
Yes, with persistent storage, authentication, and proper alerting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Does this support Kubernetes?&lt;/strong&gt;&lt;br&gt;
Absolutely. The same metrics approach applies to Kubernetes deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Can I add logs?&lt;/strong&gt;&lt;br&gt;
Yes, by integrating Grafana Loki.&lt;/p&gt;

&lt;h2&gt;
  
  
  12. Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Metrics are critical for API reliability&lt;/li&gt;
&lt;li&gt;Prometheus is lightweight and scalable&lt;/li&gt;
&lt;li&gt;Grafana simplifies observability and alerting&lt;/li&gt;
&lt;li&gt;This POC closely mirrors real production setups&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  13. Conclusion
&lt;/h2&gt;

&lt;p&gt;This practical POC demonstrates how to monitor a Node.js API end-to-end using Prometheus and Grafana.&lt;br&gt;
By implementing monitoring early, teams gain better visibility, faster incident response, and improved operational stability.&lt;/p&gt;

&lt;p&gt;About the Author: &lt;em&gt;Narendra is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;, specializing in automating infrastructure to improve efficiency and reliability.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>nodejsmonitoring</category>
      <category>prometheus</category>
      <category>grafana</category>
      <category>devopsobservability</category>
    </item>
    <item>
      <title>Node.js Application - CI/CD with Bitbucket Pipelines on AWS EC2</title>
      <dc:creator>Narendra Chauhan</dc:creator>
      <pubDate>Wed, 11 Feb 2026 06:40:07 +0000</pubDate>
      <link>https://forem.com/addwebsolutionpvtltd/nodejs-application-cicd-with-bitbucket-pipelines-on-aws-ec2-43he</link>
      <guid>https://forem.com/addwebsolutionpvtltd/nodejs-application-cicd-with-bitbucket-pipelines-on-aws-ec2-43he</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"Automation is the key to scaling software delivery."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Project Overview&lt;/li&gt;
&lt;li&gt;Architecture &amp;amp; Deployment Flow&lt;/li&gt;
&lt;li&gt;Setup Bitbucket Pipelines&lt;/li&gt;
&lt;li&gt;Interesting Facts &amp;amp; Statistics&lt;/li&gt;
&lt;li&gt;Frequently Asked Questions (FAQs)&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In modern software development, automation, reliability, and speed are critical for successful application delivery. Continuous Integration and Continuous Deployment (CI/CD) pipelines help developers deploy applications faster with minimal manual intervention.&lt;/p&gt;

&lt;p&gt;This document explains a Proof of Concept (POC) where a Node.js login application is deployed automatically to an AWS EC2 instance using Bitbucket Pipelines, managed by PM2, and served securely via Nginx as a reverse proxy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Overview
&lt;/h2&gt;

&lt;p&gt;This project demonstrates an end-to-end CI/CD workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Node.js login application hosted in Bitbucket&lt;/li&gt;
&lt;li&gt;Automated deployment triggered on code push to the prd branch&lt;/li&gt;
&lt;li&gt;Secure server access via SSH&lt;/li&gt;
&lt;li&gt;Application process management using PM2&lt;/li&gt;
&lt;li&gt;Public access via Nginx reverse proxy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The deployed application is accessible through a public URL, validating the successful CI/CD pipeline execution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture &amp;amp; Deployment Flow
&lt;/h2&gt;

&lt;p&gt;This CI/CD pipeline follows a practical, step-by-step deployment flow using Bitbucket Pipelines and AWS EC2.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High-level flow:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Developer pushes code to the prd branch&lt;/li&gt;
&lt;li&gt;Bitbucket Pipelines is triggered automatically&lt;/li&gt;
&lt;li&gt;Pipeline connects to AWS EC2 using SSH&lt;/li&gt;
&lt;li&gt;Latest code is pulled from Bitbucket&lt;/li&gt;
&lt;li&gt;Dependencies are installed&lt;/li&gt;
&lt;li&gt;Application is restarted using PM2&lt;/li&gt;
&lt;li&gt;Nginx routes HTTP traffic to the Node.js app&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This ensures zero manual deployment effort and consistent releases.&lt;/p&gt;
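The flow above maps naturally onto a bitbucket-pipelines.yml. A minimal sketch, assuming repository variables DEPLOY_USER and DEPLOY_HOST and the paths used in this POC (the atlassian/ssh-run pipe version and the remote command are illustrative placeholders):

```yaml
# Hypothetical bitbucket-pipelines.yml; variable names, the pipe version,
# and the remote command are placeholders to adapt to your repository.
pipelines:
  branches:
    prd:
      - step:
          name: Deploy to AWS EC2
          script:
            - pipe: atlassian/ssh-run:0.4.1
              variables:
                SSH_USER: $DEPLOY_USER
                SERVER: $DEPLOY_HOST
                COMMAND: 'cd /var/www/myapp &amp;&amp; git pull &amp;&amp; npm ci &amp;&amp; pm2 restart myapp'
```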

&lt;h2&gt;
  
  
  EC2 server setup (one-time)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1.1 Install Node.js + Git + Nginx&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install -y git nginx curl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Install Node.js (recommended via NodeSource; this example uses Node 20):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs
node -v
npm -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;1.2 Install PM2 globally&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo npm i -g pm2
pm2 -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;1.3 Create project directory&lt;/strong&gt;&lt;br&gt;
Example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -p /var/www/myapp
sudo chown -R deploy:deploy /var/www/myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;2) Clone web application repo once (manual first time)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /var/www/myapp
git clone  .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;2.1 Install and start using PM2&lt;/strong&gt;&lt;br&gt;
Example: if your app entry is server.js and runs on port 3000:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm ci
pm2 start server.js --name myapp
pm2 save
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Enable PM2 auto-start on reboot:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 startup
pm2 save
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
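The same start parameters can also live in a PM2 ecosystem file instead of CLI flags. A minimal sketch mirroring the myapp/server.js example above (the env values are assumptions, not part of the original setup):

```javascript
// ecosystem.config.js — illustrative; name and script mirror the example above
module.exports = {
  apps: [{
    name: 'myapp',
    script: 'server.js',
    env: { NODE_ENV: 'production', PORT: 3000 }  // assumed values; adapt to your app
  }]
};
```

Start it with `pm2 start ecosystem.config.js` and persist the process list with `pm2 save`.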
&lt;h2&gt;
  
  
  Nginx reverse proxy (one-time)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;3.1 Create Nginx config&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;sudo nano /etc/nginx/sites-available/myapp
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
  listen 80;
  server_name yourdomain.com www.yourdomain.com;
 location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Enable site:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/&lt;/li&gt;
&lt;li&gt;sudo nginx -t&lt;/li&gt;
&lt;li&gt;&lt;p&gt;sudo systemctl restart nginx&lt;br&gt;
&lt;strong&gt;(Optional SSL with Let’s Encrypt):&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;sudo apt install -y certbot python3-certbot-nginx&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;sudo certbot --nginx -d yourdomain.com -d &lt;a href="http://www.yourdomain.com" rel="noopener noreferrer"&gt;www.yourdomain.com&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Setup Bitbucket Pipelines
&lt;/h2&gt;

&lt;p&gt;This section explains the practical CI/CD implementation step by step, exactly as executed in the Bitbucket Pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SSH key for Bitbucket Pipelines → EC2 (required)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.1 Create a deploy key pair (local machine)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On your local PC:&lt;/strong&gt;&lt;br&gt;
    ssh-keygen -t ed25519 -C "bitbucket-pipeline" -f bb_pipeline_key&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You will get:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;bb_pipeline_key (private)&lt;/li&gt;
&lt;li&gt;bb_pipeline_key.pub (public)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4.2 Add public key to EC2&lt;/strong&gt;&lt;br&gt;
On EC2 as deploy user:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;mkdir -p ~/.ssh&lt;/li&gt;
&lt;li&gt;chmod 700 ~/.ssh&lt;/li&gt;
&lt;li&gt;nano ~/.ssh/authorized_keys&lt;/li&gt;
&lt;li&gt;Paste content of bb_pipeline_key.pub, then:&lt;/li&gt;
&lt;li&gt;chmod 600 ~/.ssh/authorized_keys&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4.3 Add private key to Bitbucket repository variables&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In Bitbucket:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Repo → Repository settings → Pipelines → Repository variables
Add:&lt;/li&gt;
&lt;li&gt;SSH_PRIVATE_KEY = (paste full content of bb_pipeline_key)&lt;/li&gt;
&lt;li&gt;SSH_USER = your EC2 user (e.g. deploy)&lt;/li&gt;
&lt;li&gt;SSH_HOST = your EC2 public IP / domain&lt;/li&gt;
&lt;li&gt;APP_DIR = /var/www/myapp&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Also add:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;KNOWN_HOST (optional but recommended), or use ssh-keyscan in the pipeline (this guide uses ssh-keyscan)&lt;/li&gt;
&lt;/ul&gt;
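
&lt;p&gt;If the private key is stored as a repository variable (rather than via Bitbucket’s built-in SSH keys settings), the pipeline must materialise it before connecting. A hedged sketch: the key is kept base64-encoded so multi-line content survives the variables UI, and the demo value below is generated locally so the snippet is self-contained.&lt;/p&gt;

```shell
# Simulate the Bitbucket repository variable with a local demo value;
# in the real pipeline SSH_PRIVATE_KEY comes from Repository variables.
SSH_PRIVATE_KEY="$(printf -- '-----BEGIN OPENSSH PRIVATE KEY-----\ndemo-not-a-real-key\n-----END OPENSSH PRIVATE KEY-----\n' | base64 | tr -d '\n')"

mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
# Decode the base64-stored variable back into a usable key file.
echo "$SSH_PRIVATE_KEY" | base64 -d | tee "$HOME/.ssh/bb_pipeline_key"
chmod 600 "$HOME/.ssh/bb_pipeline_key"
head -n 1 "$HOME/.ssh/bb_pipeline_key"
```

&lt;p&gt;The ssh command in the deploy step would then pass -i ~/.ssh/bb_pipeline_key explicitly.&lt;/p&gt;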

&lt;p&gt;&lt;strong&gt;Bitbucket Pipeline YAML Configuration&lt;/strong&gt;&lt;br&gt;
    1. vim bitbucket-pipelines.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;image: node:24

pipelines:
 branches:
   prd:
     - step:
         name: Deploy to Production
         deployment: Production
         script:
           - apt-get update &amp;amp;&amp;amp; apt-get install -y openssh-client
           - mkdir -p ~/.ssh
           - ssh-keyscan $SSH_HOST &amp;gt;&amp;gt; ~/.ssh/known_hosts


           - |
             ssh $SSH_USER@$SSH_HOST &amp;lt;&amp;lt; EOF
               set -e
               set +x


               echo "Load NVM"
               export NVM_DIR="\$HOME/.nvm"
               [ -s "\$NVM_DIR/nvm.sh" ] &amp;amp;&amp;amp; . "\$NVM_DIR/nvm.sh"


               echo "----Use Node 24-----"
               nvm use 24


               echo "-----Change directory-----"
               cd $APP_DIR


               echo "----Pull latest code----"
               git pull origin prd


               echo "--npm install----"
               npm install


               echo "----Restart PM2-----"
               pm2 restart all


               echo "-----Deployment completed Successfully------"
             EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Practical Demonstration (Images Explained)
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;"If it hurts, do it more often and automate it." - DevOps Principle&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Step 1. BEFORE: Original Login Page&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This shows the original application running at nodeapp.13.127.87.71.nip.io/login with the title "Login Page". This is the state before making any code changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jgccaeueynnujmis9p0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jgccaeueynnujmis9p0.png" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2. Making Changes &amp;amp; Pushing Code&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This terminal screenshot shows the developer workflow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9a8b2iq6pxyib2uevqrp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9a8b2iq6pxyib2uevqrp.png" alt=" " width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3. Bitbucket Pipelines Dashboard&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This shows the Pipelines page in Bitbucket with successful deployments:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxftcqbix0urv5htpfme.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxftcqbix0urv5htpfme.png" alt=" " width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The highlighted row #30 is the most recent deployment that was triggered automatically when code was pushed to the prd branch.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 4. AFTER: Updated Login Page&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This shows the application after successful deployment. The title has changed from "Login Page" to "Addweb Login Page" - confirming the CI/CD pipeline worked correctly!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nfc0oduzaxrbikpi6ij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nfc0oduzaxrbikpi6ij.png" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Pipeline Failure Scenario
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1. Intentionally Introduce an Error&lt;/strong&gt;&lt;br&gt;
To test pipeline failure behavior, we intentionally modified the package.json file by adding an invalid dependency:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fhgwyjn4jmknvdaqvxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fhgwyjn4jmknvdaqvxy.png" alt=" " width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example change in package.json:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"this-package-does-not-exist-123": "1.0.0"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This package does not exist in the npm registry. The purpose was to simulate a real-world mistake, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Typo in package name&lt;/li&gt;
&lt;li&gt;Incorrect dependency version&lt;/li&gt;
&lt;li&gt;Invalid module added by mistake&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2. Commit and Push the Wrong Code&lt;/strong&gt;&lt;br&gt;
After modifying package.json, the changes were committed and pushed to the prd branch:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjfxjommpzenl0rsaxg3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjfxjommpzenl0rsaxg3.png" alt=" " width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;git commit -m "We mentioned the wrong package name in package.json."&lt;br&gt;
git push origin prd&lt;/p&gt;

&lt;p&gt;Since our Bitbucket Pipeline is configured to run automatically on the prd branch, this push immediately triggered a new pipeline execution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3. Pipeline Triggered Automatically&lt;/strong&gt;&lt;br&gt;
As expected, Bitbucket Pipelines started running automatically as soon as the code was pushed.&lt;/p&gt;

&lt;p&gt;In the Pipelines dashboard we can see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;New pipeline execution created&lt;/li&gt;
&lt;li&gt;Status initially shown as “In Progress”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ejiot5imvtwtb8kolsl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ejiot5imvtwtb8kolsl.png" alt=" " width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This confirms that the CI/CD automation is working correctly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4. Pipeline Execution Failed&lt;/strong&gt;&lt;br&gt;
During pipeline execution, the following command was executed on the server:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe72l0ee1oiexprp6zfv0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe72l0ee1oiexprp6zfv0.png" alt=" " width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;npm install &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because we added a non-existent package, the installation failed with this error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm error code E404
npm error 404 Not Found - GET https://registry.npmjs.org/this-package-does-not-exist-123
npm error 404 'this-package-does-not-exist-123@1.0.0' is not in this registry.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;As a result:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The pipeline step stopped&lt;/li&gt;
&lt;li&gt;Deployment process was aborted&lt;/li&gt;
&lt;li&gt;Application was NOT restarted&lt;/li&gt;
&lt;li&gt;Previous working version remained intact&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If any step in the CI/CD pipeline fails, the deployment automatically stops. This protects production from broken or unstable code.&lt;/p&gt;
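
&lt;p&gt;This fail-fast behaviour comes from the set -e line at the top of the remote script: the first failing command aborts the whole script, so later steps never run. A self-contained sketch of the same mechanism, with echo lines standing in for the real npm and pm2 commands:&lt;/p&gt;

```shell
# `set -e` aborts on the first failure, so the restart step is never reached.
output="$(sh -c 'set -e
echo "npm install (simulated failure)"
false
echo "pm2 restart all"' || true)"
echo "$output"
```

&lt;p&gt;Only the first echo appears in the output; the "pm2 restart all" step is skipped, which is exactly why the previous working version stayed intact above.&lt;/p&gt;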

&lt;h2&gt;
  
  
  Interesting Facts &amp;amp; Statistics
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;CI/CD can reduce deployment time by up to 70%.
Source: &lt;a href="https://dev.to/msystech/70-faster-deployments-with-these-5-cicd-tools-op0#:~:text=A%20recent%20study%20highlights%20that,production%20environments%20quickly%20and%20efficiently."&gt;70% faster deployments with CI/CD tools&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;PM2 helps keep Node.js app uptime close to 99.9%.
Source: &lt;a href="https://pm2.io/" rel="noopener noreferrer"&gt;PM2&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Automated pipelines reduce deployment errors by 60-80%.
Source: &lt;a href="https://kansoftware.com/https-www-kansoft-com-blog-ci-cd-pipelines/#:~:text=For%20enterprises%2C%20CI/CD%20is,speed%2C%20quality%2C%20and%20reliability.&amp;amp;text=60%E2%80%9380%25%20reduction%20in%20release%20time." rel="noopener noreferrer"&gt;CI/CD pipelines for enterprises&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;"CI/CD is not a tool, it’s a culture"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions (FAQs)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q1. Why use Bitbucket Pipelines?&lt;/strong&gt;&lt;br&gt;
Bitbucket Pipelines provides native CI/CD integration with repositories, reducing setup complexity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q2. Why PM2 instead of node directly?&lt;/strong&gt;&lt;br&gt;
PM2 ensures application stability, auto-restarts on failure, and better process control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q3. Why use Nginx as a reverse proxy?&lt;/strong&gt;&lt;br&gt;
Nginx improves security, handles traffic efficiently, and allows SSL termination.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;CI/CD automates deployments and saves time&lt;/li&gt;
&lt;li&gt;Bitbucket Pipelines integrates seamlessly with repositories&lt;/li&gt;
&lt;li&gt;AWS EC2 provides flexible and scalable hosting&lt;/li&gt;
&lt;li&gt;PM2 ensures high availability of Node.js apps&lt;/li&gt;
&lt;li&gt;Nginx enhances security and request handling&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This POC successfully demonstrates a real-world CI/CD pipeline for a Node.js application using Bitbucket Pipelines and AWS EC2. The setup ensures reliable, automated, and scalable deployments with minimal manual intervention. By implementing this approach, teams can improve deployment confidence, reduce errors, and accelerate delivery - making it a strong foundation for production-grade applications.&lt;/p&gt;

&lt;p&gt;About the Author: &lt;em&gt;Narendra is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;, specializing in automating infrastructure to improve efficiency and reliability.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>pm2</category>
      <category>nginx</category>
      <category>aws</category>
    </item>
    <item>
      <title>How to Automate Vulnerability Scans with Trivy</title>
      <dc:creator>Narendra Chauhan</dc:creator>
      <pubDate>Mon, 24 Nov 2025 10:32:04 +0000</pubDate>
      <link>https://forem.com/addwebsolutionpvtltd/how-to-automate-vulnerability-scans-with-trivy-3b14</link>
      <guid>https://forem.com/addwebsolutionpvtltd/how-to-automate-vulnerability-scans-with-trivy-3b14</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"Manual scanning finds yesterday’s risks, automated scanning protects tomorrow’s releases&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;What Is Trivy?&lt;/li&gt;
&lt;li&gt;Why Vulnerability Scanning Matters&lt;/li&gt;
&lt;li&gt;Key Features of Trivy&lt;/li&gt;
&lt;li&gt;Installing and Setting Up Trivy&lt;/li&gt;
&lt;li&gt;How to Run a Manual Scan&lt;/li&gt;
&lt;li&gt;Automating Vulnerability Scans in CI/CD Pipelines&lt;/li&gt;
&lt;li&gt;Integrating Trivy with Docker and Kubernetes&lt;/li&gt;
&lt;li&gt;Interesting Facts &amp;amp; Statistics&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;As DevOps and security merge into DevSecOps, automation becomes key to maintaining secure and fast software delivery. One tool that has become essential in this space is Trivy, an open-source vulnerability scanner that detects security issues in container images, file systems, IaC (Infrastructure as Code) templates, and more.&lt;/p&gt;

&lt;p&gt;This guide walks you through how to automate vulnerability scans using Trivy, integrate it into CI/CD pipelines, and ensure your software stays secure — without slowing down your delivery cycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Trivy?
&lt;/h2&gt;

&lt;p&gt;Trivy (by Aqua Security) is a simple yet powerful open-source vulnerability scanner. It can scan:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Container images&lt;/li&gt;
&lt;li&gt;File systems and repositories&lt;/li&gt;
&lt;li&gt;Infrastructure as Code (Terraform, Helm, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and report on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Known vulnerabilities (CVEs)&lt;/li&gt;
&lt;li&gt;Misconfigurations&lt;/li&gt;
&lt;li&gt;Secret leaks&lt;/li&gt;
&lt;li&gt;Software Bills of Materials (SBOMs)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Trivy’s lightweight design and easy integration make it a favorite among DevOps teams looking for quick security insights.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Vulnerability Scanning Matters
&lt;/h2&gt;

&lt;p&gt;By some estimates, over 85% of security breaches in 2025 are linked to vulnerabilities in third-party components.&lt;br&gt;
With continuous deployment cycles, vulnerabilities can slip into production unnoticed.&lt;br&gt;
Automated scanning ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Early detection of issues&lt;/li&gt;
&lt;li&gt;Compliance with security standards&lt;/li&gt;
&lt;li&gt;Reduced risk of production exploits&lt;/li&gt;
&lt;li&gt;Faster incident response&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Key Features of Trivy
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Wide Coverage&lt;/strong&gt; – Scans OS packages, libraries, IaC, and Kubernetes manifests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fast &amp;amp; Lightweight&lt;/strong&gt; – Uses local caching to reduce scan times.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comprehensive Security&lt;/strong&gt; – Detects CVEs, secrets, and config flaws.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD Ready&lt;/strong&gt; – Easily integrates with GitHub Actions, Jenkins, GitLab CI, and others.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SBOM Support&lt;/strong&gt; – Generates Software Bill of Materials in multiple formats (JSON, SPDX, CycloneDX).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Installing and Setting Up Trivy
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;For Linux/macOS:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;brew install aquasecurity/trivy/trivy&lt;/li&gt;
&lt;li&gt;sudo apt install trivy -y (requires Aqua Security’s apt repository to be added first)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;For Docker-based:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy image nginx:latest&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;To verify installation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;trivy --version&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  How to Run a Manual Scan
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scan a Docker image:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;trivy image nginx:latest&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Scan a local directory:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;trivy fs .&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Scan a Git repository:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;trivy repo https://{url}&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ll get a detailed vulnerability report with severity levels: LOW, MEDIUM, HIGH, and CRITICAL.&lt;/p&gt;
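
&lt;p&gt;These severity levels can drive pass/fail decisions: with --severity and --exit-code, Trivy exits non-zero when findings at or above the chosen level exist. The sketch below stubs the trivy command so it runs without the real binary installed; in practice you would call the installed CLI directly.&lt;/p&gt;

```shell
# Stub standing in for the real CLI so the sketch is self-contained;
# remove this function to use the installed trivy.
trivy() {
  echo "stub: trivy $*"
  return 1   # simulate findings at or above the threshold
}

if trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:latest; then
  echo "image is clean; safe to push"
else
  echo "push blocked: HIGH/CRITICAL vulnerabilities found"
fi
```

&lt;p&gt;The same two flags are what make the CI examples later fail the build on critical findings.&lt;/p&gt;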

&lt;blockquote&gt;
&lt;p&gt;"Trivy turns vulnerability scanning from a task into a habit and from a habit into a safety net&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Automating Vulnerability Scans in CI/CD Pipelines
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Example: GitHub Actions&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Trivy Scan
on:
  push:
    branches: [ main ]

jobs:
  trivy-scan:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'your-docker-image:latest'
          format: 'table'
          exit-code: '1'
          ignore-unfixed: true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If any critical vulnerability is detected, the pipeline will fail, preventing unsafe code from deploying.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating Trivy with Docker and Kubernetes
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scan a Docker Image Before Push:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;trivy image myapp:latest&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Scan Running Pods in Kubernetes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;trivy k8s --report summary cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can even automate this using CronJobs in Kubernetes to perform daily scans and push results to Slack or email.&lt;/p&gt;
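
&lt;p&gt;A minimal sketch of such a CronJob (the name, schedule, and image tag are assumptions; an in-cluster scan also needs a service account with read access to workloads):&lt;/p&gt;

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: trivy-daily-scan
spec:
  schedule: "0 2 * * *"          # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: trivy
              image: aquasec/trivy:latest
              args: ["k8s", "--report", "summary", "cluster"]
```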

&lt;h2&gt;
  
  
  Interesting Facts &amp;amp; Statistics
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Around 70% of organizations rely on open-source components that contain known vulnerabilities, making automated scanning essential for security. Source: &lt;a href="https://investor.synopsys.com/news/news-details/2024/New-Synopsys-Report-Finds-74-of-Codebases-Contained-High-Risk-Open-Source-Vulnerabilities-Surging-54-Since-Last-Year/default.aspx?" rel="noopener noreferrer"&gt;Open-source vulnerabilities&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Trivy supports more than 30 vulnerability databases, including major sources like GitHub Security Advisories and the NVD (National Vulnerability Database). Source: &lt;a href="https://github.blog/security/github-advisory-database-by-the-numbers-known-security-vulnerabilities-and-what-you-can-do-about-them/?" rel="noopener noreferrer"&gt;GitHub Security Advisories&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A typical Trivy scan is highly efficient and can complete in under 60 seconds, making it suitable even for fast-paced CI/CD environments. Source: &lt;a href="https://www.gocodeo.com/post/trivy-scan-open-source-vulnerability-scanner-for-containers-and-code#:~:text=Why%20Developers%20love%20Trivy&amp;amp;text=Trivy%20is%20built%20for%20speed,moderately%20complex%20images%20or%20repositories.&amp;amp;text=Unlike%20traditional%20vulnerability%20scanners%20that,state%20is%20not%20an%20option.&amp;amp;text=Trivy%20doesn't%20just%20scan,Software%20licenses%20for%20compliance%20issues" rel="noopener noreferrer"&gt;Trivy scan is highly efficient&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;"A container without vulnerability scanning is a locked room with an open window."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Common Issues Explained
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Slow vulnerability scans&lt;/strong&gt;&lt;br&gt;
 This usually happens when Trivy is working from a stale local cache. Clearing and rebuilding the cache resolves the issue.&lt;br&gt;
 &lt;strong&gt;Solution:&lt;/strong&gt; Run trivy clean --all, then rerun the scan so the cache is rebuilt fresh.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. False positives during scans&lt;/strong&gt;&lt;br&gt;
 This problem occurs if the vulnerability database is old or outdated.&lt;br&gt;
 &lt;strong&gt;Solution:&lt;/strong&gt; Always update the database before running scans by executing a Trivy DB update command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. CI/CD pipeline failures caused by Trivy&lt;/strong&gt;&lt;br&gt;
 If the pipeline keeps failing during scans, it often means the severity thresholds are too strict.&lt;br&gt;
 &lt;strong&gt;Solution:&lt;/strong&gt; Adjust the exit-code configuration or relax the severity filters to match your risk tolerance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Missing or unreported CVEs&lt;/strong&gt;&lt;br&gt;
 This can happen if the base image uses an OS that Trivy doesn't fully support.&lt;br&gt;
&lt;strong&gt;Solution:&lt;/strong&gt; Check the container image’s base OS for compatibility or enable debug mode using --debug to identify the issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices for Effective Scanning
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use automation early in CI/CD — catch issues before builds.&lt;/li&gt;
&lt;li&gt;Regularly update the vulnerability database (trivy image --download-db-only).&lt;/li&gt;
&lt;li&gt;Enable SBOM reports to enhance supply chain transparency.&lt;/li&gt;
&lt;li&gt;Integrate with notification tools (Slack, Teams) for quick alerts.&lt;/li&gt;
&lt;li&gt;Combine Trivy with policy enforcement tools like OPA or Kyverno.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q1: Is Trivy free to use?&lt;/strong&gt;&lt;br&gt;
 Yes, Trivy is completely open-source under the Apache 2.0 license.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q2: How often should I scan images?&lt;/strong&gt;&lt;br&gt;
 Ideally, before every deployment — or at least daily in production environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q3: Can Trivy scan for secrets?&lt;/strong&gt;&lt;br&gt;
 Yes, it detects secrets and sensitive credentials in code and configurations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q4: Does Trivy work offline?&lt;/strong&gt;&lt;br&gt;
 Yes, after downloading its vulnerability database locally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q5: How do I export Trivy reports?&lt;/strong&gt;&lt;br&gt;
 Use the -f json -o report.json flag for JSON reports or --format template for custom ones.&lt;/p&gt;
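
&lt;p&gt;Once a JSON report exists, plain shell tools are enough for quick checks. A sketch that counts CRITICAL findings; the report below is a tiny hand-made stand-in for real Trivy output, which nests vulnerabilities under a Results array:&lt;/p&gt;

```shell
# Stand-in for: trivy image -f json -o report.json myapp:latest
printf '%s\n' '{"Results":[{"Vulnerabilities":[{"VulnerabilityID":"CVE-2024-0001","Severity":"CRITICAL"},{"VulnerabilityID":"CVE-2024-0002","Severity":"LOW"}]}]}' | tee report.json
# Count occurrences of CRITICAL severity without extra tooling like jq:
criticals=$(grep -o '"Severity":"CRITICAL"' report.json | wc -l)
echo "CRITICAL findings: $criticals"
```

&lt;p&gt;For anything beyond a quick count, jq or the --format template option gives more structured output.&lt;/p&gt;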

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Trivy is a lightweight, powerful, and free vulnerability scanner.&lt;/li&gt;
&lt;li&gt;It integrates seamlessly with CI/CD pipelines and Kubernetes.&lt;/li&gt;
&lt;li&gt;Automating scans helps maintain continuous security without manual effort.&lt;/li&gt;
&lt;li&gt;Regular scanning and database updates minimize false positives.&lt;/li&gt;
&lt;li&gt;A proactive vulnerability management strategy ensures secure and compliant releases.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Security isn’t a one-time task — it’s a continuous process. By automating vulnerability scans with Trivy, DevOps teams can shift security left, identifying and fixing issues before deployment. Trivy’s speed, accuracy, and ease of integration make it one of the best tools for DevSecOps automation in 2025 and beyond. Start small, automate often, and let your pipelines protect your production.&lt;/p&gt;

&lt;p&gt;About the Author: &lt;em&gt;Narendra is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;,  specializing in automating infrastructure to improve efficiency and reliability.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>trivy</category>
      <category>vulnerabilityscanning</category>
      <category>devsecops</category>
      <category>securityautomation</category>
    </item>
    <item>
      <title>How to Debug Applications Running in Docker Containers</title>
      <dc:creator>Narendra Chauhan</dc:creator>
      <pubDate>Wed, 29 Oct 2025 10:09:52 +0000</pubDate>
      <link>https://forem.com/addwebsolutionpvtltd/how-to-debug-applications-running-in-docker-containers-4ego</link>
      <guid>https://forem.com/addwebsolutionpvtltd/how-to-debug-applications-running-in-docker-containers-4ego</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“Debugging is like being the detective in a crime movie where you are also the murderer.” - Filipe Fortes&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Why Debugging in Docker is Important&lt;/li&gt;
&lt;li&gt;Common Issues in Dockerized Applications&lt;/li&gt;
&lt;li&gt;Debugging Techniques

&lt;ul&gt;
&lt;li&gt; Using Docker Logs&lt;/li&gt;
&lt;li&gt; Interactive Container Shell&lt;/li&gt;
&lt;li&gt; Docker Exec for Live Debugging&lt;/li&gt;
&lt;li&gt; Network Troubleshooting&lt;/li&gt;
&lt;li&gt; Monitoring Tools&lt;/li&gt;
&lt;li&gt; Attaching Debuggers&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Best Practices for Debugging Docker Applications&lt;/li&gt;
&lt;li&gt;Interesting Facts &amp;amp; Statistics&lt;/li&gt;
&lt;li&gt;Frequently Asked Questions (FAQs)&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Docker has revolutionized software deployment by packaging applications with their dependencies in isolated containers. However, debugging applications running inside these containers can be challenging because traditional debugging tools may not always be directly applicable.&lt;br&gt;
Debugging Docker applications involves understanding container logs, networking, runtime behavior, and sometimes interacting with the container in real-time to identify and fix issues efficiently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Debugging in Docker is Important
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Isolation Complexity:&lt;/strong&gt; Containers abstract the environment, making it harder to see system-level problems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Microservices Architecture:&lt;/strong&gt; Modern apps often run multiple containers, so an issue in one container can affect others.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Production Debugging:&lt;/strong&gt; Direct access to logs and live debugging in production containers helps reduce downtime.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Common Issues in Dockerized Applications
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Container Won’t Start&lt;/strong&gt; – Often caused by missing dependencies or configuration errors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application Crashes&lt;/strong&gt; – Can be due to code errors, unhandled exceptions, or memory issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Networking Failures&lt;/strong&gt; – Containers cannot communicate with each other or external services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Limitations&lt;/strong&gt; – Containers may be restricted by CPU, memory, or storage limits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Volume &amp;amp; File Permission Issues&lt;/strong&gt; – Files mounted in containers may have access issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Debugging Techniques
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;4.1 Using Docker Logs&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Command
&lt;code&gt;docker logs &amp;lt;container_name_or_id&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Use flags for continuous logs
&lt;code&gt;docker logs -f &amp;lt;container_name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Helps in tracking application errors and warnings.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4.2 Interactive Container Shell&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access container shell for direct inspection:
&lt;code&gt;docker exec -it &amp;lt;container_name&amp;gt; /bin/bash&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Inspect files, environment variables, or run debugging commands.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4.3 Docker Exec for Live Debugging&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run scripts or commands directly inside a container without restarting it:
&lt;code&gt;docker exec -it &amp;lt;container_name&amp;gt; python manage.py shell&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Useful for live debugging of services like Django, Node.js, or Java apps.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4.4 Network Troubleshooting&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;List networks and inspect which containers are attached:
&lt;code&gt;docker network ls
docker network inspect &amp;lt;network_name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Use ping or curl inside containers to verify service accessibility.&lt;/li&gt;
&lt;/ul&gt;
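&lt;p&gt;For example, to verify that a &lt;code&gt;web&lt;/code&gt; container can reach a &lt;code&gt;db&lt;/code&gt; container on a shared network (the container, network, and port names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List the containers attached to the network
docker network inspect my-network --format '{{range .Containers}}{{.Name}} {{end}}'

# Test DNS resolution and connectivity from inside the web container
docker exec -it web ping -c 3 db
docker exec -it web curl -sSf http://db:8080/health
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;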

&lt;p&gt;&lt;strong&gt;4.5 Monitoring Tools&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker Stats:&lt;/strong&gt; Monitor CPU, memory, network, and I/O
&lt;code&gt;docker stats&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ctop:&lt;/strong&gt; Terminal UI for container metrics
&lt;code&gt;ctop&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prometheus + Grafana:&lt;/strong&gt; Advanced monitoring for container clusters.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4.6 Attaching Debuggers&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For Node.js, start the app with the inspector bound to all interfaces:
&lt;code&gt;node --inspect=0.0.0.0:9229 app.js&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;For Python: use pdb or remote-pdb inside the container.&lt;/li&gt;
&lt;/ul&gt;
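&lt;p&gt;Remember to publish the debug port so your IDE can reach the inspector from outside the container (the image name and app port below are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Expose the Node.js inspector alongside the app port
docker run -p 3000:3000 -p 9229:9229 my-node-app \
  node --inspect=0.0.0.0:9229 app.js

# Then attach from Chrome DevTools (chrome://inspect) or your IDE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;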

&lt;h2&gt;
  
  
  Best Practices for Debugging Docker Applications
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Use Logs Extensively&lt;/strong&gt; – Ensure proper logging in the application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Minimize Container Complexity&lt;/strong&gt; – Smaller images with fewer layers are easier to debug.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reproduce Issues Locally&lt;/strong&gt; – Replicate production issues in local containers before debugging live.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automate Health Checks&lt;/strong&gt; – Use Docker health checks to catch issues early.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document Environment Differences&lt;/strong&gt; – Know where dev, staging, and production differ.&lt;/li&gt;
&lt;/ol&gt;
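&lt;p&gt;As a sketch of point 4, a Dockerfile health check might look like this (the &lt;code&gt;/health&lt;/code&gt; endpoint and port are assumptions; adjust them to your application):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Mark the container unhealthy if the app stops responding
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -sf http://localhost:3000/health || exit 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;docker ps&lt;/code&gt; then shows the health status, and orchestrators can restart unhealthy containers automatically.&lt;/p&gt;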

&lt;h2&gt;
  
  
  Interesting Facts &amp;amp; Statistics
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Over 55% of companies in 2024 use Docker for production workloads.
Source: &lt;a href="https://www.docker.com/blog/2025-docker-state-of-app-dev/" rel="noopener noreferrer"&gt;Docker for production workloads&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Dockerized apps reduce deployment failures by up to 50% compared to traditional VMs.
Source: &lt;a href="https://supportfly.io/docker-vs-vm/" rel="noopener noreferrer"&gt;Dockerized vs VMs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Monitoring and proper logging can reduce debugging time by 30-40% in containerized environments.
Source: &lt;a href="https://kubernetes.io/docs/tasks/debug/" rel="noopener noreferrer"&gt;Monitoring, logging, and debugging&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;“Docker simplifies deployment but demands smarter debugging.” - DevOps Engineer&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q1. Can I debug a stopped container?&lt;/strong&gt;&lt;br&gt;
 Yes, by committing it to a new image and starting it interactively:&lt;br&gt;
&lt;code&gt;docker commit &amp;lt;container_id&amp;gt; debug-image&lt;/code&gt;&lt;br&gt;
&lt;code&gt;docker run -it debug-image /bin/bash&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q2. How do I debug multi-container apps?&lt;/strong&gt;&lt;br&gt;
 Use &lt;code&gt;docker-compose logs -f&lt;/code&gt; to aggregate logs and &lt;code&gt;docker-compose exec &amp;lt;service&amp;gt; &amp;lt;command&amp;gt;&lt;/code&gt; to inspect individual services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q3. Is it safe to debug in production?&lt;/strong&gt;&lt;br&gt;
 Yes, but avoid making destructive changes. Always prefer logging, metrics, and read-only inspections when possible.&lt;/p&gt;
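&lt;p&gt;Some read-only inspections that are safe to run against a production container (the name &lt;code&gt;web&lt;/code&gt; is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker inspect web            # full container configuration
docker top web                # processes running inside the container
docker diff web               # filesystem changes since the image was built
docker stats --no-stream web  # one-shot resource usage snapshot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;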

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Logs and container shell access are your primary debugging tools.&lt;/li&gt;
&lt;li&gt;Networking and resource issues are common causes of container failures.&lt;/li&gt;
&lt;li&gt;Always replicate production issues locally before live debugging.&lt;/li&gt;
&lt;li&gt;Monitoring tools and proper logging drastically reduce troubleshooting time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Debugging applications in Docker containers requires a combination of traditional debugging skills and container-specific techniques. By understanding container internals, using logs effectively, and leveraging interactive debugging tools, developers and DevOps engineers can quickly identify and resolve issues, ensuring stable and efficient deployments.&lt;/p&gt;

&lt;p&gt;About the Author: &lt;em&gt;Narendra is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;, specializing in automating infrastructure to improve efficiency and reliability.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>troubleshooting</category>
      <category>logging</category>
    </item>
    <item>
      <title>Troubleshooting Common DevOps Challenges</title>
      <dc:creator>Narendra Chauhan</dc:creator>
      <pubDate>Fri, 19 Sep 2025 06:32:39 +0000</pubDate>
      <link>https://forem.com/addwebsolutionpvtltd/troubleshooting-common-devops-challenges-4n0</link>
      <guid>https://forem.com/addwebsolutionpvtltd/troubleshooting-common-devops-challenges-4n0</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“Culture eats strategy for breakfast.” - Peter Drucker&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Common Challenges in DevOps&lt;/li&gt;
&lt;li&gt;Troubleshooting Strategies for Each Challenge&lt;/li&gt;
&lt;li&gt;Best Practices for Long-Term Success&lt;/li&gt;
&lt;li&gt;Real-World Case Studies&lt;/li&gt;
&lt;li&gt;Interesting Facts &amp;amp; Statistics&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;p&gt;DevOps offers faster, more reliable software delivery by bridging development and operations. However, adopting DevOps comes with significant challenges. These challenges can be cultural, technical, or process-related. To gain maximum benefit from DevOps, organizations must learn to identify, troubleshoot, and overcome these obstacles effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Common Challenges in DevOps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cultural Resistance&lt;/strong&gt;&lt;br&gt;
Many organizations face hesitation from teams who are comfortable with traditional workflows. Fear of change, loss of control, and lack of trust can create resistance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tool Overload&lt;/strong&gt;&lt;br&gt;
The DevOps ecosystem is full of tools. Without proper selection and integration, tools can overwhelm teams and create inefficiencies instead of solving problems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Communication Gaps&lt;/strong&gt;&lt;br&gt;
Siloed teams and unclear communication channels lead to misunderstandings, slower response times, and reduced productivity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security Concerns&lt;/strong&gt;&lt;br&gt;
When security is left as an afterthought, vulnerabilities appear late in the pipeline, causing major risks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scaling Issues&lt;/strong&gt;&lt;br&gt;
As applications grow, managing infrastructure, automation, and monitoring at scale becomes increasingly complex.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Legacy Systems&lt;/strong&gt;&lt;br&gt;
Older infrastructure and applications often don’t fit easily into modern DevOps pipelines, slowing down innovation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  3. Troubleshooting Strategies for Each Challenge
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Addressing Cultural Resistance&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Promote a culture of collaboration and shared responsibility.&lt;/li&gt;
&lt;li&gt;Provide training and workshops to educate teams.&lt;/li&gt;
&lt;li&gt;Encourage cross-functional teams to break down silos.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Managing Tool Overload&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Focus on tools that integrate well with existing workflows.&lt;/li&gt;
&lt;li&gt;Streamline tool usage with a centralized toolchain strategy.&lt;/li&gt;
&lt;li&gt;Regularly review and eliminate unnecessary tools.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Bridging Communication Gaps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Establish clear communication channels (Slack, Teams, Jira).&lt;/li&gt;
&lt;li&gt;Encourage daily stand-ups and retrospectives.&lt;/li&gt;
&lt;li&gt;Foster transparency with documentation and dashboards.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Handling Security Concerns&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shift security left by integrating DevSecOps practices.&lt;/li&gt;
&lt;li&gt;Automate vulnerability scanning and compliance checks.&lt;/li&gt;
&lt;li&gt;Provide security training for developers and operations teams.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tackling Scaling Issues&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use cloud-native architectures with containers and Kubernetes.&lt;/li&gt;
&lt;li&gt;Automate monitoring and scaling processes.&lt;/li&gt;
&lt;li&gt;Design infrastructure with scalability in mind from the start.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Dealing with Legacy Systems&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gradually modernize legacy infrastructure through
containerization or migration to the cloud.&lt;/li&gt;
&lt;li&gt;Use APIs and middleware to integrate old systems with modern
pipelines.&lt;/li&gt;
&lt;li&gt;Adopt incremental modernization instead of a full rewrite.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;“DevOps is not a goal, but a never-ending process of continual improvement.” - Jez Humble&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  4. Best Practices for Long-Term Success
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Build a DevOps culture first, before focusing on tools.&lt;/li&gt;
&lt;li&gt;Encourage continuous learning and feedback loops.&lt;/li&gt;
&lt;li&gt;Prioritize automation wherever possible.&lt;/li&gt;
&lt;li&gt;Implement metrics and monitoring to track progress and issues.&lt;/li&gt;
&lt;li&gt;Keep security and compliance integrated at every stage.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5. Real-World Case Studies
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Netflix:&lt;/strong&gt; Overcame scaling challenges by moving to a cloud-native architecture and leveraging chaos engineering.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Etsy:&lt;/strong&gt; Successfully addressed cultural resistance by fostering shared ownership of deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon:&lt;/strong&gt; Embedded security in its CI/CD pipelines, making DevSecOps a standard practice.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  6. Interesting Facts &amp;amp; Statistics
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Organizations with strong DevOps practices deploy code 46 times more frequently than those without (Puppet State of DevOps Report).
Source: &lt;a href="https://www.agileanalytics.cloud/blog/the-hidden-costs-of-low-deployment-frequency-in-modern-devops" rel="noopener noreferrer"&gt;Deploy code 46 times more frequently&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;DevOps teams recover from failures 96 times faster compared to traditional teams.
Source: &lt;a href="https://skylight.digital/thoughts/blog/achieve-devops-transformation-with-skylight-and-dora/" rel="noopener noreferrer"&gt;Recover from failures 96 times faster&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;High-performing DevOps organizations spend 22% less time on unplanned work and rework.
Source: &lt;a href="https://www.sourcefuse.com/resources/blog/devops-what-it-is-and-why-you-need-it/" rel="noopener noreferrer"&gt;High-performing DevOps&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;“You can’t buy DevOps; you have to live it.” - Patrick Debois&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  7. FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q1: What’s the hardest DevOps challenge to solve?&lt;/strong&gt;&lt;br&gt;
 Cultural resistance, because changing mindsets and habits takes time and effort.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q2: How can security be handled effectively in DevOps?&lt;/strong&gt;&lt;br&gt;
 By adopting DevSecOps and ensuring security is integrated early in the development lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q3: What’s the best way to deal with legacy systems in DevOps?&lt;/strong&gt;&lt;br&gt;
 Gradually modernize them with cloud migration, containerization, or integration middleware.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;DevOps challenges are inevitable but solvable with the right strategies.&lt;/li&gt;
&lt;li&gt;Culture, communication, and collaboration matter as much as tools.&lt;/li&gt;
&lt;li&gt;Automation, scalability, and security must be built into DevOps pipelines.&lt;/li&gt;
&lt;li&gt;Incremental modernization helps handle legacy system constraints.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;“Automation applied to an inefficient operation will magnify the inefficiency.” - Bill Gates&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  9. Conclusion
&lt;/h2&gt;

&lt;p&gt;Troubleshooting DevOps challenges is not just about fixing issues — it’s about building resilience and adaptability. By addressing cultural, technical, and operational barriers, organizations can fully realize the promise of DevOps: faster, safer, and more innovative software delivery.&lt;/p&gt;

&lt;p&gt;About the Author: Narendra is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/devops-consulting" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;,  specializing in automating infrastructure to improve efficiency and reliability.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>troubleshooting</category>
      <category>devsecops</category>
      <category>continuousdelivery</category>
    </item>
    <item>
      <title>Why DevOps Is a Culture, Not a Role</title>
      <dc:creator>Narendra Chauhan</dc:creator>
      <pubDate>Wed, 03 Sep 2025 11:52:26 +0000</pubDate>
      <link>https://forem.com/addwebsolutionpvtltd/why-devops-is-a-culture-not-a-role-2n89</link>
      <guid>https://forem.com/addwebsolutionpvtltd/why-devops-is-a-culture-not-a-role-2n89</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"DevOps is not a job title it's a mindset that transforms how teams build and operate software together."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;The Problem: Misconceptions Around DevOps&lt;/li&gt;
&lt;li&gt;DevOps as a Cultural Shift&lt;/li&gt;
&lt;li&gt;Why DevOps Isn’t a Job Title&lt;/li&gt;
&lt;li&gt;Interesting Stats&lt;/li&gt;
&lt;li&gt;Real-World Impacts&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;p&gt;DevOps is one of the most transformative movements in software development—but it’s also one of the most misunderstood. Many organizations treat DevOps as a role or department. They hire “DevOps Engineers” and expect instant improvements in delivery speed, reliability, and collaboration.&lt;br&gt;
The truth? DevOps isn’t a role. It’s a culture.&lt;br&gt;
At its core, DevOps is about breaking down silos between development and operations, fostering shared responsibility, and enabling fast, safe, and continuous delivery of software. The success of DevOps depends not on a person, but on how teams work together.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. The Problem: Misconceptions Around DevOps
&lt;/h2&gt;

&lt;p&gt;Despite widespread adoption, DevOps is often misinterpreted:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;DevOps as a job title:&lt;/strong&gt; Companies hire a “DevOps Engineer” and expect them to singlehandedly transform the pipeline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool-first mentality:&lt;/strong&gt; Teams focus on CI/CD tools but ignore culture, collaboration, and process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Siloed responsibility:&lt;/strong&gt; DevOps becomes a separate department rather than a shared philosophy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational handoffs persist:&lt;/strong&gt; Dev teams still throw code over the wall to ops, expecting them to run it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach misses the point of DevOps entirely. Without cultural change, tooling and roles can’t deliver the full benefits.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. DevOps as a Cultural Shift
&lt;/h2&gt;

&lt;p&gt;Real DevOps success starts with mindset and collaboration, not hiring or tooling. Here's what defines a true DevOps culture:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.1 Shared Responsibility&lt;/strong&gt;&lt;br&gt;
Both dev and ops teams are accountable for the software lifecycle—from code to customer. No more “not my job” mentalities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.2 Collaboration Over Handoffs&lt;/strong&gt;&lt;br&gt;
Teams work together from the start. Developers understand how their code runs in production. Ops teams understand what developers need to move fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.3 Continuous Feedback Loops&lt;/strong&gt;&lt;br&gt;
Monitoring, alerting, and observability give both sides visibility into system health. Developers respond to issues. Ops provides insights during planning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.4 Trust and Autonomy&lt;/strong&gt;&lt;br&gt;
Developers are empowered to deploy their own code. Ops provides safe infrastructure and guardrails. Teams trust each other to own their responsibilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.5 Learning from Failure&lt;/strong&gt;&lt;br&gt;
Instead of blame, teams run retrospectives. They identify process improvements and build resilience through shared learning.&lt;br&gt;
A true DevOps culture means everyone owns quality, performance, and delivery speed.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Why DevOps Isn’t a Job Title
&lt;/h2&gt;

&lt;p&gt;DevOps is not something a single person “does.” Here’s why thinking of DevOps as a role is misleading:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.1 It Reinforces Silos&lt;/strong&gt;&lt;br&gt;
Hiring a “DevOps person” often means giving one team the burden of managing infrastructure and automation—while everyone else continues as before.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.2 It Limits Collaboration&lt;/strong&gt;&lt;br&gt;
Teams may assume the “DevOps Engineer” handles deployment and monitoring, so developers disengage from ops work and vice versa.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.3 It Misses the Point&lt;/strong&gt;&lt;br&gt;
DevOps is about transforming how teams collaborate, not delegating work to a specialist. Every team member must adopt the mindset.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.4 DevOps Engineers Still Exist&lt;/strong&gt;&lt;br&gt;
That said, there’s value in roles that support DevOps culture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Site Reliability Engineers (SREs)&lt;/li&gt;
&lt;li&gt;Platform Engineers&lt;/li&gt;
&lt;li&gt;Infrastructure Engineers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These roles enable and coach teams, but they don’t replace the cultural transformation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"You can hire a DevOps Engineer, but without cultural change, you’re just adding another silo."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  5. Interesting Stats
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;High-performing DevOps teams deploy 973x more frequently and recover 6,570x faster 
Source: &lt;a href="https://www.nvisia.com/insights/goals-of-devops" rel="noopener noreferrer"&gt;High performing DevOps&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;83% of developers say DevOps improves job satisfaction 
Source: &lt;a href="https://www.infoworld.com/article/2176947/agile-development-devops-adopters-your-trust-is-rewarded.html" rel="noopener noreferrer"&gt;Developers&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Companies with strong DevOps culture see 2x better customer satisfaction scores 
Source: &lt;a href="https://www.prnewswire.com/news-releases/study-finds-that-elevating-testing-improves-customer-experience-devops-maturity-301681284.html" rel="noopener noreferrer"&gt;customer satisfaction&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;DevOps practices reduce change failure rates by 3x 
Source: &lt;a href="https://www.opsera.io/blog/change-failure-rate" rel="noopener noreferrer"&gt;Change failure rates&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  6. Real-World Impacts
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;- Traditional Team&lt;/strong&gt;&lt;br&gt;
A developer finishes a feature, hands it off to ops, and moves on. If something breaks, the dev team says, “It worked on my machine.” The ops team scrambles to fix it without context. Tension grows. Blame circulates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- DevOps Culture Team&lt;/strong&gt;&lt;br&gt;
Developers and ops work together from the start. Deployment scripts, monitoring, and rollback plans are part of the pull request. When an issue arises, devs and ops troubleshoot together. They learn, adapt, and improve.&lt;br&gt;
&lt;strong&gt;Example: Netflix&lt;/strong&gt;&lt;br&gt;
Netflix embodies DevOps culture with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Full service ownership by dev teams&lt;/li&gt;
&lt;li&gt;Chaos engineering to test system resilience&lt;/li&gt;
&lt;li&gt;Strong feedback loops with real-time observability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This enables them to deploy thousands of times per day without compromising user experience.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"DevOps succeeds when everyone owns quality, performance, and delivery, not just one person or team."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  7. FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q: Can small teams adopt DevOps culture?&lt;/strong&gt;&lt;br&gt;
 A: Absolutely. In fact, small teams often adopt DevOps principles faster due to fewer silos and more direct communication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Is DevOps the same as Agile?&lt;/strong&gt;&lt;br&gt;
 A: No, but they’re complementary. Agile focuses on iterative development; DevOps extends that mindset to delivery and operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Do I need a DevOps Engineer to do DevOps?&lt;/strong&gt;&lt;br&gt;
 A: Not necessarily. The focus should be on cultural and process changes. A dedicated role can help enable the shift but shouldn’t carry it alone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: How do I start building DevOps culture?&lt;/strong&gt;&lt;br&gt;
 A: Begin with shared goals, automate small tasks, introduce CI/CD, and improve communication between dev and ops teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;DevOps is a culture, not a role or department&lt;/li&gt;
&lt;li&gt;It emphasizes shared ownership, collaboration, and fast feedback&lt;/li&gt;
&lt;li&gt;Hiring a “DevOps Engineer” isn’t enough—teams must adopt new behaviors&lt;/li&gt;
&lt;li&gt;Strong DevOps culture leads to faster releases, happier teams, and more resilient systems&lt;/li&gt;
&lt;li&gt;Start small, measure impact, and evolve continuously&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  9. Conclusion
&lt;/h2&gt;

&lt;p&gt;DevOps is a transformative force—but only when understood and implemented as a cultural change. Hiring someone with “DevOps” in their title won’t magically solve your delivery problems. What will? Building a culture of collaboration, continuous improvement, and shared responsibility.&lt;br&gt;
When DevOps becomes everyone’s job—not just a role—it becomes the foundation for high-performing teams and world-class software delivery.&lt;/p&gt;

&lt;p&gt;About the Author: &lt;em&gt;Narendra is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/devops-consulting" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;,  specializing in automating infrastructure to improve efficiency and reliability.&lt;/em&gt; &lt;/p&gt;

</description>
      <category>devopsculture</category>
      <category>teamcollaboration</category>
      <category>softwaredelivery</category>
      <category>engineeringculture</category>
    </item>
    <item>
      <title>Creating a CI/CD Pipeline with GitLab CI</title>
      <dc:creator>Narendra Chauhan</dc:creator>
      <pubDate>Mon, 11 Aug 2025 06:48:09 +0000</pubDate>
      <link>https://forem.com/addwebsolutionpvtltd/creating-a-cicd-pipeline-with-gitlab-ci-eo</link>
      <guid>https://forem.com/addwebsolutionpvtltd/creating-a-cicd-pipeline-with-gitlab-ci-eo</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"If it hurts, do it more often." Jez Humble, Co-author of Continuous Delivery&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Why CI/CD Matters&lt;/li&gt;
&lt;li&gt;Creating a GitLab CI/CD Pipeline&lt;/li&gt;
&lt;li&gt;Sample .gitlab-ci.yml File&lt;/li&gt;
&lt;li&gt;Interesting Facts &amp;amp; Statistics&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;p&gt;Continuous Integration and Continuous Deployment (CI/CD) has transformed the software development lifecycle, bringing automation, speed, and quality assurance to modern DevOps workflows.&lt;br&gt;
GitLab CI is a built-in tool within GitLab that allows developers to integrate and deploy their code automatically, offering a complete DevOps platform under a single UI.&lt;/p&gt;
&lt;h2&gt;
  
  
  2. Why CI/CD Matters
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Reduces Manual Errors: Automates builds, tests, and deployments.&lt;/li&gt;
&lt;li&gt;Speeds Up Development: Quick feedback loops for code changes.&lt;/li&gt;
&lt;li&gt;Improves Code Quality: Consistent testing and reviews before deployment.&lt;/li&gt;
&lt;li&gt;Enables DevOps Culture: Promotes collaboration and delivery.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  3. Creating a GitLab CI/CD Pipeline
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A GitLab project repository.&lt;/li&gt;
&lt;li&gt;Runner registered (Shared or Custom).&lt;/li&gt;
&lt;li&gt;.gitlab-ci.yml file in the root of your repo.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;1. Create a GitLab Repository&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;→ Go to GitLab → New Project → Initialize with README&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Set Up GitLab Runner&lt;/strong&gt;&lt;br&gt;
→ Install GitLab Runner on your server or use GitLab.com shared runners.&lt;br&gt;
→ Register it with your project:&lt;br&gt;
&lt;code&gt;gitlab-runner register&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Add a .gitlab-ci.yml File&lt;/strong&gt;&lt;br&gt;
→ This file defines your CI/CD pipeline stages, jobs, and scripts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Define Pipeline Stages&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stages:
  - build
  - test
  - deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;5. Create Jobs for Each Stage&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;build-job:
  stage: build
  script:
    - echo "Compiling code..."

test-job:
  stage: test
  script:
    - echo "Running tests..."

deploy-job:
  stage: deploy
  script:
    - echo "Deploying app..."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6. Push Code to Trigger Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GitLab will detect the .gitlab-ci.yml file and run the pipeline automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Sample .gitlab-ci.yml File
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stages:
  - build
  - test
  - deploy

variables:
  APP_ENV: "production"

build:
  stage: build
  script:
    - npm install
    - npm run build

test:
  stage: test
  script:
    - npm test

deploy:
  stage: deploy
  only:
    - main
  script:
    - ./deploy.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Interesting Facts &amp;amp; Statistics
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Teams using CI/CD deliver 200x more frequently than those who don’t (DORA Report). Source: &lt;a href="https://dora.dev/capabilities/continuous-delivery" rel="noopener noreferrer"&gt;CI/CD DORA&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Companies that adopt CI/CD reduce deployment failure rates by 75%. Source: &lt;a href="https://www.d3vtech.com/cloud-news/findings-2021-accelerate-state-of-devops-report/" rel="noopener noreferrer"&gt;Reduce deployment&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Over 60% of organizations now use CI/CD in their SDLC (Gartner, 2023). Source: &lt;a href="https://www.esparkinfo.com/blog/devops-statistics" rel="noopener noreferrer"&gt;Organizations&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitLab CI/CD can reduce pipeline build times by 30-40% with caching and parallelization. Source: &lt;a href="https://reintech.io/blog/optimizing-gitlab-ci-build-times" rel="noopener noreferrer"&gt;Optimizing build times&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
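&lt;p&gt;The caching gains mentioned above usually come from a &lt;code&gt;cache&lt;/code&gt; block in &lt;code&gt;.gitlab-ci.yml&lt;/code&gt;; a minimal sketch for a Node.js project (the paths and key are assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Reuse node_modules between pipeline runs, keyed by the lockfile
cache:
  key:
    files:
      - package-lock.json
  paths:
    - node_modules/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;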

&lt;blockquote&gt;
&lt;p&gt;"CI/CD isn’t just automation, it's a shift in mindset toward responsibility and quality." Kelsey Hightower, Google Engineer&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  6. FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q1: Is GitLab CI/CD free to use?&lt;/strong&gt;&lt;br&gt;
 Yes, GitLab offers free CI/CD minutes with shared runners. Self-managed runners are unlimited.&lt;br&gt;
&lt;strong&gt;Q2: What languages are supported?&lt;/strong&gt;&lt;br&gt;
 GitLab CI/CD is language-agnostic. You can build pipelines for Node.js, Python, Java, Go, PHP, etc.&lt;br&gt;
&lt;strong&gt;Q3: Can I deploy to cloud services like AWS or Azure?&lt;/strong&gt;&lt;br&gt;
 GitLab CI integrates with AWS, GCP, Azure, and supports deployment via CLI or APIs.&lt;br&gt;
&lt;strong&gt;Q4: What happens if a job fails?&lt;/strong&gt;&lt;br&gt;
 The pipeline will stop (unless configured otherwise). You can inspect logs and rerun failed jobs.&lt;/p&gt;
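&lt;p&gt;Failure behaviour is configurable per job; for example (job names and values below are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lint:
  stage: test
  script:
    - npm run lint
  allow_failure: true   # pipeline continues even if this job fails

flaky-test:
  stage: test
  script:
    - npm test
  retry: 2              # rerun automatically up to twice on failure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;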

&lt;h2&gt;
  
  
  7. Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GitLab CI/CD is a powerful tool to automate your entire software delivery process.&lt;/li&gt;
&lt;li&gt;Pipelines are defined with a .gitlab-ci.yml file in your project’s root.&lt;/li&gt;
&lt;li&gt;Jobs are grouped into stages like build, test, and deploy.&lt;/li&gt;
&lt;li&gt;Runners are essential agents that execute your jobs.&lt;/li&gt;
&lt;li&gt;GitLab offers flexibility, integrations, and visibility across the DevOps lifecycle.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  8. Conclusion
&lt;/h2&gt;

&lt;p&gt;Creating a CI/CD pipeline with GitLab CI is not just a technical enhancement—it’s a fundamental step toward faster, safer, and more reliable software delivery. Whether you're a solo developer or managing a large engineering team, embracing CI/CD with GitLab boosts efficiency, reduces risk, and sets the stage for innovation.&lt;/p&gt;

&lt;p&gt;About the Author: &lt;em&gt;Narendra is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;,  specializing in automating infrastructure to improve efficiency and reliability.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
      <category>automation</category>
      <category>continuousdeployment</category>
    </item>
    <item>
      <title>SSL/TLS Certificates with Certbot and Nginx: The 2025 Guide</title>
      <dc:creator>Narendra Chauhan</dc:creator>
      <pubDate>Fri, 25 Jul 2025 05:56:49 +0000</pubDate>
      <link>https://forem.com/addwebsolutionpvtltd/ssltls-certificates-with-certbot-and-nginx-the-2025-guide-57p3</link>
      <guid>https://forem.com/addwebsolutionpvtltd/ssltls-certificates-with-certbot-and-nginx-the-2025-guide-57p3</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“HTTPS is no longer a feature, it’s the foundation of trust on the web.”— Troy Hunt, Security Researcher&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;What Is SSL/TLS and Why It Matters&lt;/li&gt;
&lt;li&gt;What Is Certbot?&lt;/li&gt;
&lt;li&gt;How to Install an SSL Certificate with Certbot (Step-by-Step)&lt;/li&gt;
&lt;li&gt;Common Configurations and Auto-Renewals&lt;/li&gt;
&lt;li&gt;Key Stats &amp;amp; Interesting Facts&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;p&gt;If you're running a website or web app in 2025 and it's not using HTTPS, you're doing it wrong. &lt;strong&gt;SSL/TLS certificates&lt;/strong&gt; are no longer optional — they’re expected, even by browsers.&lt;/p&gt;

&lt;p&gt;But the good news? It’s easier than ever to secure your websites using Certbot and Nginx — two powerful tools that make HTTPS setup simple and fast.&lt;/p&gt;

&lt;p&gt;In this guide, you’ll learn how to issue, install, and auto-renew an SSL certificate using Certbot, all within a few minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. What Is SSL/TLS and Why It Matters
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;SSL (Secure Sockets Layer) and TLS (Transport Layer Security)&lt;/strong&gt; are encryption protocols that keep data safe between your browser and server. When you visit a site with https://, you’re using TLS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why You Need It:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt; – Encrypts sensitive data like login details and payments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trust&lt;/strong&gt; – Boosts user confidence (padlock in the browser)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SEO&lt;/strong&gt; – Google favors HTTPS sites in rankings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance&lt;/strong&gt; – Required for sites handling personal or payment data&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;"The internet runs on trust — and HTTPS is its currency."&lt;br&gt;
 — Scott Helme, Web Security Specialist&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  3. What Is Certbot?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Certbot&lt;/strong&gt; is a free, open-source tool from the Electronic Frontier Foundation (EFF) that automates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Getting SSL/TLS certificates from Let’s Encrypt&lt;/li&gt;
&lt;li&gt;Configuring them with Nginx or Apache&lt;/li&gt;
&lt;li&gt;Renewing them before expiry&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, Certbot + Let’s Encrypt = free HTTPS with zero hassle.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. How to Install an SSL Certificate with Certbot (Step-by-Step)
&lt;/h2&gt;

&lt;p&gt;Let’s break it down for &lt;strong&gt;Ubuntu + Nginx&lt;/strong&gt; setup:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install Certbot&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo apt update
sudo apt install certbot python3-certbot-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Check Nginx is Running&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo systemctl status nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If it's not active, start it:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo systemctl start nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Run Certbot with Nginx Plugin&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Certbot will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify your domain via HTTP challenge&lt;/li&gt;
&lt;li&gt;Update your Nginx config&lt;/li&gt;
&lt;li&gt;Reload Nginx with HTTPS settings&lt;/li&gt;
&lt;/ul&gt;
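To make "update your Nginx config" concrete: Certbot typically adds lines along these to your server block (a sketch; exact paths depend on your domain and distro):

```nginx
server {
    server_name yourdomain.com www.yourdomain.com;

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
```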

&lt;p&gt;&lt;strong&gt;Step 4: Test HTTPS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Visit &lt;a href="https://yourdomain.com" rel="noopener noreferrer"&gt;https://yourdomain.com&lt;/a&gt;. You should see the padlock in your browser.&lt;/p&gt;
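Beyond the browser padlock, you can inspect certificate details from the command line with openssl. As a self-contained sketch, this generates a throwaway self-signed certificate and prints its validity window; against a live site you would point `openssl s_client` at your domain instead:

```shell
# Create a throwaway self-signed cert (stand-in for a real Let's Encrypt cert)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 90 -subj "/CN=yourdomain.com"

# Show the subject and validity window, the same fields a browser checks
openssl x509 -in /tmp/demo.crt -noout -subject -dates
```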

&lt;h2&gt;
  
  
  5. Auto-Renewal (Hands-Free SSL Forever)
&lt;/h2&gt;

&lt;p&gt;Let’s Encrypt certificates are valid for 90 days. But Certbot can auto-renew them.&lt;/p&gt;

&lt;p&gt;Check the renewal process:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo certbot renew --dry-run
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Certbot installs a cron job or systemd timer automatically, so you're covered.&lt;/p&gt;
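If your system happens to lack the packaged timer, a crontab entry along these lines keeps renewal hands-free; the reload hook is an assumption about an Nginx setup:

```
# Crontab sketch: attempt renewal twice daily; certbot only renews certs
# that are close to expiry, and the hook reloads Nginx on success
0 3,15 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"
```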

&lt;h2&gt;
  
  
  6. Key Stats &amp;amp; Interesting Facts
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Over 300 million websites use Let’s Encrypt certificates, most of them issued via Certbot (source: Let’s Encrypt)&lt;/li&gt;
&lt;li&gt;HTTPS is a confirmed ranking signal in Google Search (source: Google)&lt;/li&gt;
&lt;li&gt;Chrome and Firefox restrict powerful features such as geolocation and service workers to HTTPS pages&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;“Let’s Encrypt has helped democratize encryption — now anyone can secure their site in minutes.” — Josh Aas, Executive Director of ISRG (Let’s Encrypt)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  7. FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q1: Is Certbot free to use?&lt;/strong&gt;&lt;br&gt;
Yes, completely free. It works with Let’s Encrypt, which is a free certificate authority.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q2: Do I need a domain name?&lt;/strong&gt;&lt;br&gt;
Yes. Let’s Encrypt verifies ownership of real domains via DNS or HTTP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q3: Can I secure subdomains?&lt;/strong&gt;&lt;br&gt;
Absolutely. Just include them in the Certbot command:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo certbot --nginx -d example.com -d api.example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Q4: What if I’m using Apache instead of Nginx?&lt;/strong&gt;&lt;br&gt;
Certbot has a plugin for Apache too:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo certbot --apache
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Q5: Will it break my Nginx config?&lt;/strong&gt;&lt;br&gt;
Certbot is safe and creates backups, but it's always good to validate your config before and after:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo nginx -t
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  8. Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;SSL/TLS is essential for modern web security, SEO, and trust&lt;/li&gt;
&lt;li&gt;Certbot makes HTTPS setup fast, free, and easy&lt;/li&gt;
&lt;li&gt;Works smoothly with Nginx and Apache on most Linux servers&lt;/li&gt;
&lt;li&gt;Auto-renewal ensures your certificate never expires&lt;/li&gt;
&lt;li&gt;You can go from HTTP to HTTPS in under 5 minutes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  9. Conclusion
&lt;/h2&gt;

&lt;p&gt;Securing your website doesn’t have to be complicated. With Certbot and Nginx, you can enable HTTPS in just a few commands and forget about certificate renewal worries.&lt;br&gt;
It’s 2025 — there’s no excuse to serve your app without encryption. Your users, your SEO, and your credibility depend on it.&lt;br&gt;
So go ahead — grab a Let’s Encrypt certificate and give your website the security badge it deserves.&lt;/p&gt;

&lt;p&gt;About the Author: &lt;em&gt;Narendra is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;, specializing in automating infrastructure to improve efficiency and reliability.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>certbot</category>
      <category>letsencrypt</category>
      <category>ssl</category>
      <category>tls</category>
    </item>
  </channel>
</rss>
