<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Alan Varghese</title>
    <description>The latest articles on Forem by Alan Varghese (@alanvarghese-dev).</description>
    <link>https://forem.com/alanvarghese-dev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3473225%2Fd4a24b0c-64dc-4be2-9e74-648a7c105d75.png</url>
      <title>Forem: Alan Varghese</title>
      <link>https://forem.com/alanvarghese-dev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/alanvarghese-dev"/>
    <language>en</language>
    <item>
      <title>Build a Lightweight File Integrity Monitor with Bash, SQLite, and Docker</title>
      <dc:creator>Alan Varghese</dc:creator>
      <pubDate>Tue, 24 Mar 2026 10:57:51 +0000</pubDate>
      <link>https://forem.com/alanvarghese-dev/build-a-lightweight-file-integrity-monitor-with-bash-sqlite-and-docker-1al5</link>
      <guid>https://forem.com/alanvarghese-dev/build-a-lightweight-file-integrity-monitor-with-bash-sqlite-and-docker-1al5</guid>
      <description>&lt;p&gt;In the world of server security, &lt;strong&gt;File Integrity Monitoring (FIM)&lt;/strong&gt; is a critical layer of defense. It's the "silent alarm" that tells you when a configuration file, a system binary, or a sensitive database has been tampered with.&lt;/p&gt;

&lt;p&gt;While enterprise-grade tools like Tripwire or OSSEC exist, sometimes you need something lightweight, transparent, and easy to deploy.&lt;/p&gt;

&lt;p&gt;In this post, I’ll walk you through a project I built: a &lt;strong&gt;Bash-based File Integrity Checker&lt;/strong&gt; that uses &lt;strong&gt;SQLite&lt;/strong&gt; for baseline storage and &lt;strong&gt;Docker&lt;/strong&gt; for a fully isolated testing environment.&lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 The Core Concept: Baseline vs. Reality
&lt;/h2&gt;

&lt;p&gt;The tool works on a simple but powerful principle:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Initialize (&lt;code&gt;--init&lt;/code&gt;)&lt;/strong&gt;: Scan your critical files, calculate their &lt;strong&gt;SHA-256 hashes&lt;/strong&gt;, and store them in a persistent SQLite database. This is your "known-good" state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check (&lt;code&gt;--check&lt;/code&gt;)&lt;/strong&gt;: Periodically re-scan those same files. If a single bit has changed, the hashes won't match, and an alert is triggered.&lt;/li&gt;
&lt;/ol&gt;
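
The init/check cycle above can be sketched in a few lines of Bash. This is a minimal, self-contained illustration that uses a flat baseline file instead of SQLite; the workdir, file names, and function names here are illustrative, not the post's actual script:

```shell
#!/usr/bin/env bash
# Minimal sketch of the baseline/check cycle (flat file instead of SQLite;
# file and function names are illustrative).
set -euo pipefail

workdir="$(mktemp -d)"
baseline="$workdir/baseline.txt"
echo "server_name example" > "$workdir/app.conf"

# --init: record the known-good SHA-256 of each monitored file
init_baseline() {
    sha256sum "$workdir/app.conf" > "$baseline"
}

# --check: re-hash and compare; sha256sum -c exits non-zero on any mismatch
check_baseline() {
    if sha256sum -c --status "$baseline" 2>/dev/null; then
        echo "OK"
    else
        echo "TAMPERED"
    fi
}

init_baseline
first="$(check_baseline)"
echo "MALICIOUS_CODE" >> "$workdir/app.conf"   # simulate tampering
second="$(check_baseline)"
echo "$first -> $second"
# prints: OK -> TAMPERED
```

`sha256sum -c --status` does the comparison for us: it exits non-zero as soon as any recorded hash no longer matches.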




&lt;h2&gt;
  
  
  🛠️ The Tech Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bash&lt;/strong&gt;: The engine. It handles the file traversal, hashing logic, and alerting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQLite3&lt;/strong&gt;: Instead of messy text files, I used SQLite to store the baseline. It’s fast, structured, and handles hundreds of files with ease.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker &amp;amp; Docker Compose&lt;/strong&gt;: To make testing easy, I containerized the entire app.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MailHog&lt;/strong&gt;: A "superpower" for development. It's a mock SMTP server that catches all outgoing alert emails so you can verify them in a web UI without spamming your real inbox.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔍 A Deep Dive into the Code
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. The Hashing Logic
&lt;/h3&gt;

&lt;p&gt;We use &lt;code&gt;sha256sum&lt;/code&gt; because SHA-256 is a collision-resistant cryptographic hash: even a tiny change to a file (like adding a space) produces a completely different digest.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Calculate hash and store in database&lt;/span&gt;
&lt;span class="nv"&gt;current_hash&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;sha256sum&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$file_path&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $1}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
sqlite3 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DB_PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"INSERT INTO file_hashes (file_path, hash) VALUES ('&lt;/span&gt;&lt;span class="nv"&gt;$file_path&lt;/span&gt;&lt;span class="s2"&gt;', '&lt;/span&gt;&lt;span class="nv"&gt;$current_hash&lt;/span&gt;&lt;span class="s2"&gt;');"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
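
One hardening note on an INSERT built by string interpolation like the one above: a file path containing a single quote would break the SQL statement (or inject into it). The standard SQLite escape is to double the quote; `sql_quote` below is a hypothetical helper sketching that step:

```shell
# Hypothetical helper: escape a value for use inside a single-quoted
# SQLite string literal by doubling every single quote.
sql_quote() {
    local s="$1"
    printf '%s' "${s//"'"/"''"}"
}

path="/etc/o'brien.conf"
quoted="$(sql_quote "$path")"
echo "INSERT INTO file_hashes (file_path) VALUES ('$quoted');"
```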



&lt;h3&gt;
  
  
  2. Multi-Layered Alerting
&lt;/h3&gt;

&lt;p&gt;One of the biggest challenges in shell scripting is ensuring email delivery. My script uses a "fallback" strategy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Primary&lt;/strong&gt;: &lt;code&gt;ssmtp&lt;/code&gt; (sendmail) via the local system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secondary&lt;/strong&gt;: &lt;code&gt;curl&lt;/code&gt; to talk directly to the &lt;strong&gt;MailHog API&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Last Resort&lt;/strong&gt;: Standard &lt;code&gt;mail&lt;/code&gt; command.&lt;/li&gt;
&lt;/ul&gt;
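
The selection step of that fallback chain boils down to probing with `command -v`. The sketch below is a dry run that only reports which transport would be used; the actual delivery calls are left out, and the function name is illustrative:

```shell
# Dry-run sketch of the fallback order: probe for each mailer in turn
# and report the first one available (the delivery itself is omitted).
pick_transport() {
    if command -v ssmtp >/dev/null; then
        echo "ssmtp"
    elif command -v curl >/dev/null; then
        echo "curl-smtp"   # e.g. deliver to MailHog on smtp://localhost:1025
    elif command -v mail >/dev/null; then
        echo "mail"
    else
        echo "none"
    fi
}

transport="$(pick_transport)"
echo "alerts would go out via: $transport"
```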




&lt;h2&gt;
  
  
  🐳 The Docker Testing Workflow
&lt;/h2&gt;

&lt;p&gt;Testing a security tool can be nerve-wracking: you don't want to accidentally modify your host's &lt;code&gt;/etc/&lt;/code&gt; files! That’s where Docker shines.&lt;/p&gt;

&lt;p&gt;I created a &lt;code&gt;docker-compose.yml&lt;/code&gt; that spins up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An &lt;strong&gt;Ubuntu-based checker&lt;/strong&gt; container with mock system files.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;MailHog&lt;/strong&gt; container for email visualization.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  How to test a "Malicious" change:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Start the environment:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Create the baseline:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; file_integrity_checker bash /app/file_integrity_checker.sh &lt;span class="nt"&gt;--init&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Simulate a hack:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; file_integrity_checker bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"echo 'MALICIOUS_CODE' &amp;gt;&amp;gt; /etc/myapp/config/app.yaml"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;
&lt;strong&gt;Run the check:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; file_integrity_checker bash /app/file_integrity_checker.sh &lt;span class="nt"&gt;--check&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;
&lt;strong&gt;See the alert:&lt;/strong&gt; Open &lt;code&gt;http://localhost:8025&lt;/code&gt; and watch the security alert arrive in your MailHog inbox!&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  💡 Key Lessons Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Volume Mounts &amp;amp; Permissions&lt;/strong&gt;: When mounting files from macOS to a Linux container, file permissions (like the execute bit) can be tricky. I learned that executing via &lt;code&gt;bash &amp;lt;script&amp;gt;&lt;/code&gt; is often more robust than relying on the &lt;code&gt;+x&lt;/code&gt; bit in a shared volume.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architecture Matters&lt;/strong&gt;: Adding &lt;code&gt;platform: linux/amd64&lt;/code&gt; to the &lt;code&gt;docker-compose.yml&lt;/code&gt; was essential for ensuring the MailHog image (which is AMD64 only) runs smoothly on modern ARM64 chips like the Apple M1/M2.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🏁 Conclusion
&lt;/h2&gt;

&lt;p&gt;Building your own security tools is one of the best ways to understand how systems work. This project taught me about hashing, database persistence in shell, and the power of Docker for creating reproducible security labs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's next for this project?&lt;/strong&gt; I'm looking into adding Slack/Discord webhook support and real-time monitoring via &lt;code&gt;inotify&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Thanks for reading! If you have questions about Bash security or Docker setups, let's chat in the comments! 🛡️&lt;/p&gt;

</description>
      <category>security</category>
      <category>bash</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Monitoring Remote Network Health: Building a Lightweight Connectivity Tool with Bash and Docker</title>
      <dc:creator>Alan Varghese</dc:creator>
      <pubDate>Mon, 09 Mar 2026 12:26:06 +0000</pubDate>
      <link>https://forem.com/alanvarghese-dev/monitoring-remote-network-health-building-a-lightweight-connectivity-tool-with-bash-and-docker-3eok</link>
      <guid>https://forem.com/alanvarghese-dev/monitoring-remote-network-health-building-a-lightweight-connectivity-tool-with-bash-and-docker-3eok</guid>
      <description>&lt;p&gt;Have you ever needed to verify if your remote servers have outbound internet access? Whether you're managing a cluster of web servers or a set of edge devices, ensuring they can "talk to the world" is a fundamental task.&lt;/p&gt;

&lt;p&gt;In this post, I'll walk you through a lightweight &lt;strong&gt;Network Connectivity Monitoring Tool&lt;/strong&gt; I built using Bash. It's simple, portable, and comes with a Docker-based test environment to get you started safely.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Manually SSH-ing into dozens of servers to run a &lt;code&gt;ping&lt;/code&gt; command is tedious and error-prone. I needed a way to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Check connectivity from &lt;em&gt;multiple&lt;/em&gt; servers at once.&lt;/li&gt;
&lt;li&gt; Support non-standard SSH ports.&lt;/li&gt;
&lt;li&gt; Log results for historical tracking.&lt;/li&gt;
&lt;li&gt; Get a quick summary of which hosts are "Reachable" vs "Unreachable."&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Solution: A Bash-Powered Monitor
&lt;/h2&gt;

&lt;p&gt;The core of this tool is a Bash script that uses SSH to execute a ping command on remote targets. Here's a look at the key features and how it works.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Robust Scripting with &lt;code&gt;set -euo pipefail&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;To make the script reliable, I used strict error handling:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-e&lt;/code&gt;: Exit immediately if a command fails.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-u&lt;/code&gt;: Treat unset variables as errors.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-o pipefail&lt;/code&gt;: Ensure that if any part of a pipeline fails, the whole pipeline fails.&lt;/li&gt;
&lt;/ul&gt;
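
The effect of `pipefail` is easy to demonstrate in isolation. Without it, a pipeline's exit status is that of the last command, which can mask an upstream failure:

```shell
# Without pipefail, the pipeline's status is `true`'s (0), masking the
# failure of `false`; with pipefail, the failure propagates.
set +o pipefail
if false | true; then no_pipefail_status=0; else no_pipefail_status=1; fi

set -o pipefail
if false | true; then pipefail_status=0; else pipefail_status=1; fi

echo "without pipefail: $no_pipefail_status, with pipefail: $pipefail_status"
# prints: without pipefail: 0, with pipefail: 1
```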

&lt;h3&gt;
  
  
  2. Intelligent Connectivity Checks
&lt;/h3&gt;

&lt;p&gt;The script doesn't just check if the server is "up"; it distinguishes between an &lt;strong&gt;SSH failure&lt;/strong&gt; (can't connect to the box) and a &lt;strong&gt;Network failure&lt;/strong&gt; (connected to the box, but the box can't reach the internet).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# A snippet of the core logic&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;ssh &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;StrictHostKeyChecking&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;no &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;BatchMode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;port&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;ssh_target&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
       &lt;span class="s2"&gt;"ping -c 3 8.8.8.8 &amp;gt; /dev/null 2&amp;gt;&amp;amp;1"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;log_result &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"REACHABLE"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TIMESTAMP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"  ✓ &lt;/span&gt;&lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="s2"&gt; - REACHABLE"&lt;/span&gt;
&lt;span class="k"&gt;else
    &lt;/span&gt;log_result &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"UNREACHABLE"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TIMESTAMP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"  ✗ &lt;/span&gt;&lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="s2"&gt; - UNREACHABLE (network issue)"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Automated Logging &amp;amp; Summaries
&lt;/h3&gt;

&lt;p&gt;Every run generates a timestamped log in &lt;code&gt;connectivity_log.txt&lt;/code&gt;, making it easy to spot patterns over time. At the end of each run, you get a clean summary:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;========================================
  Summary
========================================
  Total hosts checked: 3
  Reachable:          2
  Unreachable:        1
========================================
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
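
Under the hood, a summary like this is just three counters. A sketch with illustrative host results (the associative array stands in for the real per-host checks):

```shell
# Tally mock per-host results into counters, then print the summary
# lines. Host names and statuses here are illustrative.
declare -A result=( [web01]="REACHABLE" [web02]="REACHABLE" [db01]="UNREACHABLE" )

total=0; reachable=0; unreachable=0
for host in "${!result[@]}"; do
    total=$(( total + 1 ))
    if [ "${result[$host]}" = "REACHABLE" ]; then
        reachable=$(( reachable + 1 ))
    else
        unreachable=$(( unreachable + 1 ))
    fi
done

echo "  Total hosts checked: $total"
echo "  Reachable:          $reachable"
echo "  Unreachable:        $unreachable"
```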






&lt;h2&gt;
  
  
  Testing Safely with Docker
&lt;/h2&gt;

&lt;p&gt;One of the best parts of this project is the included &lt;strong&gt;Test Lab&lt;/strong&gt;. Using &lt;code&gt;docker-compose&lt;/code&gt;, you can spin up three Ubuntu-based SSH servers locally to test the script without risking your production environment.&lt;/p&gt;

&lt;p&gt;I've included a &lt;code&gt;setup.sh&lt;/code&gt; script that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Starts the containers.&lt;/li&gt;
&lt;li&gt;Installs &lt;code&gt;ping&lt;/code&gt; (not present in minimal Ubuntu images).&lt;/li&gt;
&lt;li&gt;Configures SSH keys for passwordless login.&lt;/li&gt;
&lt;li&gt;Validates the environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to Try It Out
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Clone the project.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Generate an SSH key&lt;/strong&gt; if you haven't already: &lt;code&gt;ssh-keygen -t rsa&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Launch the test lab&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./setup.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Run the monitor&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./monitor_connectivity.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;p&gt;While building this, I ran into a few classic Bash "gotchas":&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Loops vs &lt;code&gt;set -e&lt;/code&gt;&lt;/strong&gt;: &lt;code&gt;read&lt;/code&gt; returns a non-zero status when it hits the end of input, and under &lt;code&gt;set -e&lt;/code&gt; that can make the script exit prematurely. Switching to a &lt;code&gt;for&lt;/code&gt; loop over &lt;code&gt;grep&lt;/code&gt; output solved this.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Ephemerality&lt;/strong&gt;: Since containers reset on restart, I automated the SSH key injection into the &lt;code&gt;setup.sh&lt;/code&gt; script to ensure a smooth developer experience.&lt;/li&gt;
&lt;/ul&gt;
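
The for-over-grep pattern from the first lesson looks like this: filter out comments and blank lines first, then iterate over the survivors, so `read`'s end-of-input status never interacts with `set -e`. The sample host list is illustrative:

```shell
# Iterate over a host list with set -e active: grep strips comments and
# blank lines, and the for loop never calls `read` at all.
set -euo pipefail

hosts_file="$(mktemp)"
printf '%s\n' '# production hosts' 'web01' '' 'db01' > "$hosts_file"

checked=""
for host in $(grep -vE '^(#|$)' "$hosts_file"); do
    checked="$checked $host"
done

echo "would check:$checked"
```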

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Sometimes, the simplest tools are the most effective. This Bash script provides a no-nonsense way to monitor network health across your infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check out the full source code here!&lt;/strong&gt; &lt;a href="https://github.com/alanvarghese-dev/Bash_Scripting/tree/main/network_connectivity_monitoring_tool" rel="noopener noreferrer"&gt;github&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What are your favorite "mini-tools" for server management? Let me know in the comments!&lt;/p&gt;

</description>
      <category>bash</category>
      <category>devops</category>
      <category>automation</category>
      <category>linux</category>
    </item>
    <item>
      <title>How I Built a Lightweight Cron Job Health Monitor with Bash and Docker</title>
      <dc:creator>Alan Varghese</dc:creator>
      <pubDate>Sun, 08 Mar 2026 18:16:58 +0000</pubDate>
      <link>https://forem.com/alanvarghese-dev/how-i-built-a-lightweight-cron-job-health-monitor-with-bash-and-docker-16mh</link>
      <guid>https://forem.com/alanvarghese-dev/how-i-built-a-lightweight-cron-job-health-monitor-with-bash-and-docker-16mh</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Tired of silent cron failures? Here's a lightweight Bash-based solution to monitor and alert on your scheduled tasks across multiple servers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Why Your Cron Jobs Need Monitoring
&lt;/h1&gt;

&lt;p&gt;We’ve all been there. You set up a "mission-critical" backup or data sync as a cron job, and then you forget about it. Six months later, you realize it hasn't run in weeks because of a silent failure, a disk space issue, or an SSH key change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cron is great for execution, but it's terrible at visibility.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this post, I'll walk through a lightweight, Bash-powered monitoring solution I built to keep tabs on cron jobs across multiple servers without needing a heavy agent like Zabbix or Datadog.&lt;/p&gt;

&lt;h2&gt;
  
  
  🛠 The Architecture
&lt;/h2&gt;

&lt;p&gt;The goal was simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Discover&lt;/strong&gt; jobs automatically from remote servers via SSH.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitor&lt;/strong&gt; their last execution time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alert&lt;/strong&gt; via Slack or Email if a job is "missed" or overdue.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test&lt;/strong&gt; everything locally using Docker.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌───────────────────────────────┐
│        Monitoring Host        │
│  (Bash + cron_health_monitor) │
└───────────────┬───────────────┘
                │
        ┌───────┴───────┐
        ▼               ▼
  ┌──────────┐    ┌──────────┐
  │ Server A │    │ Server B │
  │ (Docker) │    │ (Docker) │
  └──────────┘    └──────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🚀 Key Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Automated SSH Discovery
&lt;/h3&gt;

&lt;p&gt;The script scans &lt;code&gt;crontab -l&lt;/code&gt; on remote servers, parses the schedules, and automatically adds them to its tracking list. No more manual entry for every single task.&lt;/p&gt;
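
The parsing half of discovery can be sketched with plain `read`: the first five whitespace-separated fields are the schedule, and the remainder of the line is the command. The sample dump below stands in for real `crontab -l` output:

```shell
# Split each crontab line into its 5-field schedule and its command.
# The sample dump is illustrative, standing in for `ssh host crontab -l`.
crontab_dump='0 2 * * * /usr/local/bin/backup.sh
*/5 * * * * /usr/local/bin/sync.sh'

discovered="$(printf '%s\n' "$crontab_dump" | while read -r m h dom mon dow cmd; do
    echo "schedule='$m $h $dom $mon $dow' command='$cmd'"
done)"

echo "$discovered"
```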

&lt;h3&gt;
  
  
  2. State-Based Tracking
&lt;/h3&gt;

&lt;p&gt;Instead of checking logs (which can be messy), the monitor looks at "last run" timestamps. Jobs can report their own completion via a simple CLI command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./cron_health_monitor.sh record backup_job server1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
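
A sketch of what a `record` command like this might do under the hood: keep one `job:server:epoch` line per pair in a flat state file. The file format is illustrative, not the monitor's actual schema:

```shell
# Record "job ran now" as a job:server:epoch line, replacing any earlier
# entry for the same pair so the state file stays one-line-per-job.
state="$(mktemp)"

record() {
    local job="$1" server="$2"
    # drop any previous entry for this pair, then append a fresh timestamp
    grep -v "^${job}:${server}:" "$state" > "${state}.tmp" || true
    mv "${state}.tmp" "$state"
    echo "${job}:${server}:$(date +%s)" >> "$state"
}

record backup_job server1
record backup_job server1   # re-recording must not duplicate the entry
entries=$(grep -c '^backup_job:server1:' "$state")
echo "entries for backup_job on server1: $entries"
```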



&lt;h3&gt;
  
  
  3. Dockerized Testing Environment
&lt;/h3&gt;

&lt;p&gt;To ensure the monitor works before deploying it to production, I included a &lt;code&gt;docker-compose.yml&lt;/code&gt; that spins up three Ubuntu servers. This allows you to simulate real-world cron failures in a safe sandbox.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧩 The "Aha!" Moments (and Bugs)
&lt;/h2&gt;

&lt;p&gt;While building this, I ran into a few classic engineering hurdles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;The Date Dilemma&lt;/strong&gt;: I initially used BSD-style &lt;code&gt;date&lt;/code&gt; commands (macOS default), which broke completely on the Linux target servers. I had to switch to the Linux-standard GNU &lt;code&gt;date -d&lt;/code&gt; syntax.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Docker Networking&lt;/strong&gt;: On macOS, connecting to &lt;code&gt;localhost&lt;/code&gt; inside a container can be tricky. Switching to &lt;code&gt;127.0.0.1&lt;/code&gt; fixed several "Host not found" errors.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Silent Failures&lt;/strong&gt;: I learned that if the SSH connection fails, the script should alert on the &lt;em&gt;connection&lt;/em&gt; failure, not just the missing cron job.&lt;/li&gt;
&lt;/ul&gt;
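
The GNU-style syntax from the Date Dilemma in action: `date -d` parses a timestamp string to epoch seconds, after which overdue checks are simple arithmetic (GNU coreutils only; BSD/macOS `date` would need `-j -f` instead):

```shell
# Parse two timestamps with GNU `date -d` and subtract: the basis of
# any "is this job overdue?" check. Timestamps are illustrative.
start_epoch="$(date -d '2026-03-08 12:00:00 UTC' +%s)"
end_epoch="$(date -d '2026-03-08 13:30:00 UTC' +%s)"
elapsed=$(( end_epoch - start_epoch ))
echo "job was last seen $elapsed seconds ago"
# prints: job was last seen 5400 seconds ago
```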

&lt;h2&gt;
  
  
  🛠 Quick Start
&lt;/h2&gt;

&lt;p&gt;If you want to try it out:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Clone &amp;amp; Setup&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   ./setup.sh &lt;span class="c"&gt;# Generates keys &amp;amp; starts Docker test servers&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Discover Jobs&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   ./cron_health_monitor.sh discover
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Run Health Check&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   ./cron_health_monitor.sh check
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  💡 Lessons Learned
&lt;/h2&gt;

&lt;p&gt;Bash is incredibly powerful for infrastructure glue code. By combining standard tools like &lt;code&gt;ssh&lt;/code&gt;, &lt;code&gt;awk&lt;/code&gt;, and &lt;code&gt;grep&lt;/code&gt;, you can build a monitoring system that is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Zero-agent&lt;/strong&gt;: Nothing to install on target servers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low-overhead&lt;/strong&gt;: Runs in milliseconds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Portable&lt;/strong&gt;: Works on almost any Linux distro.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Check out the full source code and my "bug log" in the repository!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/alanvarghese-dev/Bash_Scripting/tree/main/cron_job_health_monitor" rel="noopener noreferrer"&gt;https://github.com/alanvarghese-dev/Bash_Scripting/tree/main/cron_job_health_monitor&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What are you using to monitor your legacy cron jobs? Let's discuss in the comments!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>automation</category>
      <category>bash</category>
      <category>linux</category>
    </item>
    <item>
      <title>Stop SSH-ing One by One: Building a Parallel Command Executor in Bash</title>
      <dc:creator>Alan Varghese</dc:creator>
      <pubDate>Thu, 05 Mar 2026 20:29:19 +0000</pubDate>
      <link>https://forem.com/alanvarghese-dev/stop-ssh-ing-one-by-one-building-a-parallel-command-executor-in-bash-55m1</link>
      <guid>https://forem.com/alanvarghese-dev/stop-ssh-ing-one-by-one-building-a-parallel-command-executor-in-bash-55m1</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Learn how to build a robust, multi-server SSH command runner using Bash, Docker, and parallel processing.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As developers or system administrators, we've all been there: You need to check the disk space, uptime, or service status on 10 different servers.&lt;/p&gt;

&lt;p&gt;The "manual" way is painful:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;ssh user@server1&lt;/code&gt; -&amp;gt; &lt;code&gt;df -h&lt;/code&gt; -&amp;gt; &lt;code&gt;exit&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ssh user@server2&lt;/code&gt; -&amp;gt; &lt;code&gt;df -h&lt;/code&gt; -&amp;gt; &lt;code&gt;exit&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;...repeat 8 more times. 😫&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Sure, tools like &lt;strong&gt;Ansible&lt;/strong&gt; exist, but sometimes you just want a lightweight, zero-dependency script to fire off a quick command and see what's happening &lt;em&gt;right now&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;In this post, I'll walk you through how I built a &lt;strong&gt;Multi-Server SSH Executor&lt;/strong&gt; using pure Bash. We'll explore parallel processing, robust file parsing, and how to simulate a server cluster locally using Docker.&lt;/p&gt;

&lt;h2&gt;
  
  
  🎯 The Goal
&lt;/h2&gt;

&lt;p&gt;We want a script that takes a command (e.g., &lt;code&gt;uptime&lt;/code&gt;) and runs it on a list of servers defined in a config file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Requirements:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Parallel Execution:&lt;/strong&gt; Use background processes (&lt;code&gt;&amp;amp;&lt;/code&gt; plus &lt;code&gt;wait&lt;/code&gt;) so checking 10 servers takes as long as the slowest one, not the sum of all of them.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Robust Config Parsing:&lt;/strong&gt; Handle comments, weird whitespace, and different ports/users.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Local Testing Ground:&lt;/strong&gt; A way to test this without buying 5 VPS instances (spoiler: we use Docker).&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  🏗️ The Architecture
&lt;/h2&gt;

&lt;p&gt;The project consists of three main parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;servers.conf&lt;/code&gt;: A simple file defining our target servers.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;multi_ssh.sh&lt;/code&gt;: The brains of the operation.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker-compose.yml&lt;/code&gt;: A simulated lab environment with 4 SSH-enabled containers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  1. The Configuration
&lt;/h3&gt;

&lt;p&gt;I wanted a simple format that's easy to read but flexible:&lt;br&gt;
&lt;code&gt;name:hostname:port:username&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Production Servers
web01:192.168.1.10:22:admin
db01:192.168.1.20:22:dbadmin

# Docker Lab (Localhost mapped ports)
web1:localhost:2221:root
web2:localhost:2222:root
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. The Simulation (Docker Lab)
&lt;/h3&gt;

&lt;p&gt;Testing SSH scripts on production servers is... brave. Instead, I used &lt;code&gt;docker-compose&lt;/code&gt; to spin up lightweight Ubuntu containers running &lt;code&gt;sshd&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;web1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rastasheep/ubuntu-sshd:18.04&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2221:22"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;web2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rastasheep/ubuntu-sshd:18.04&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2222:22"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now I have "real" servers running on localhost ports 2221, 2222, etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚡ The "Secret Sauce": Parallelism in Bash
&lt;/h2&gt;

&lt;p&gt;The core challenge is running commands simultaneously. In Bash, we do this by putting a command in the background with &lt;code&gt;&amp;amp;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here is the simplified logic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Loop through servers&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;server &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;servers&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="c"&gt;# Run SSH in the background&lt;/span&gt;
    ssh &lt;span class="nv"&gt;$user&lt;/span&gt;@&lt;span class="nv"&gt;$host&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$command&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"/tmp/result_&lt;/span&gt;&lt;span class="nv"&gt;$server&lt;/span&gt;&lt;span class="s2"&gt;.txt"&lt;/span&gt; &amp;amp;

    &lt;span class="c"&gt;# Save the Process ID (PID)&lt;/span&gt;
    pids+&lt;span class="o"&gt;=(&lt;/span&gt;&lt;span class="nv"&gt;$!&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;

&lt;span class="c"&gt;# Wait for all background jobs to finish&lt;/span&gt;
&lt;span class="nb"&gt;wait&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simple trick reduces execution time from &lt;strong&gt;(N * Timeout)&lt;/strong&gt; to &lt;strong&gt;(Max(Timeout))&lt;/strong&gt;.&lt;/p&gt;
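
Here is a toy, self-contained version of that fan-out you can run without any servers: three mock targets each take a second, yet the wall-clock total stays near one second because they run concurrently (`sleep` stands in for the `ssh` call):

```shell
# Fan out three mock "server checks" as background jobs, collect their
# PIDs, wait for all of them, and verify the wall-clock time stayed
# close to the slowest single job. `sleep 1` stands in for ssh.
outdir="$(mktemp -d)"
start=$(date +%s)

pids=()
for server in web1 web2 web3; do
    ( sleep 1; echo "$server ok" > "$outdir/$server.txt" ) &
    pids+=($!)
done
wait   # block until every background job finishes

elapsed=$(( $(date +%s) - start ))
echo "finished ${#pids[@]} servers in ${elapsed}s"
cat "$outdir"/*.txt
```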

&lt;h2&gt;
  
  
  🧠 Lessons Learned &amp;amp; "Gotchas"
&lt;/h2&gt;

&lt;p&gt;Writing the script revealed a few common Bash pitfalls that I had to fix to make it production-ready.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lesson 1: &lt;code&gt;for&lt;/code&gt; loops vs. &lt;code&gt;while read&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Initially, I used a &lt;code&gt;for&lt;/code&gt; loop to read lines from the config file.&lt;br&gt;
&lt;strong&gt;The Trap:&lt;/strong&gt; If a line has spaces (like a description), &lt;code&gt;for&lt;/code&gt; splits it into multiple items.&lt;br&gt;
&lt;strong&gt;The Fix:&lt;/strong&gt; Use a &lt;code&gt;while&lt;/code&gt; loop with a custom Internal Field Separator (&lt;code&gt;IFS&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Robust way to read lines&lt;/span&gt;
&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nv"&gt;IFS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;':'&lt;/span&gt; &lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; name &lt;span class="nb"&gt;hostname &lt;/span&gt;port username &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$name&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="c"&gt;# Process server...&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt; &amp;lt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$config_file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Note the &lt;code&gt;|| [[ -n "$name" ]]&lt;/code&gt; part—this ensures we don't skip the last line if the file doesn't end with a newline character!&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Lesson 2: Race Conditions &amp;amp; Temp Files
&lt;/h3&gt;

&lt;p&gt;When running parallel jobs, you can't just write to &lt;code&gt;output.txt&lt;/code&gt;. Multiple processes will write at the same time, garbling the text.&lt;br&gt;
&lt;strong&gt;The Fix:&lt;/strong&gt; Give each process its own temporary file (e.g., &lt;code&gt;/tmp/ssh_result_web1.txt&lt;/code&gt;), let them finish, and &lt;em&gt;then&lt;/em&gt; aggregate the results sequentially.&lt;/p&gt;

&lt;p&gt;I used &lt;code&gt;mktemp&lt;/code&gt; to ensure my temporary files never collided with other running instances of the script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;SERVERS_LIST_TMP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;mktemp&lt;/span&gt; /tmp/ssh_multi_servers.XXXXXX&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
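&lt;p&gt;Once &lt;code&gt;wait&lt;/code&gt; returns, aggregation is safe because every writer has exited. Here is a sketch of that sequential collection step (the &lt;code&gt;result_*.txt&lt;/code&gt; naming is a hypothetical stand-in for your own layout):&lt;/p&gt;

```shell
#!/bin/bash
# Sketch: each parallel job wrote its own result file; we merge them only
# after wait, so there is no concurrent-write garbling.
RESULTS_DIR=$(mktemp -d /tmp/ssh_results.XXXXXX)

# Stand-ins for two finished background jobs:
echo "web1: ok" > "$RESULTS_DIR/result_web1.txt"
echo "db1: ok"  > "$RESULTS_DIR/result_db1.txt"

# Sequential aggregation into one summary:
summary=$(cat "$RESULTS_DIR"/result_*.txt)
printf '%s\n' "$summary"

rm -rf "$RESULTS_DIR"    # clean up the temp directory
```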



&lt;h3&gt;
  
  
  Lesson 3: SSH is picky
&lt;/h3&gt;

&lt;p&gt;Running SSH non-interactively requires specific flags to avoid hanging:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;-o BatchMode=yes&lt;/code&gt;: Fail instead of asking for a password.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;-o ConnectTimeout=X&lt;/code&gt;: Don't wait forever if a server is down.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;-o StrictHostKeyChecking=no&lt;/code&gt;: Crucial for automated environments where IPs and host keys change (like Docker containers). Note that it disables host-key verification, so keep it to lab setups rather than production fleets.&lt;/li&gt;
&lt;/ul&gt;
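&lt;p&gt;Combined, the flags fit neatly into a reusable options array (a sketch; the user and host are placeholders):&lt;/p&gt;

```shell
#!/bin/bash
# Reusable SSH options for non-interactive use; tweak the timeout to taste.
SSH_OPTS=(-o BatchMode=yes -o ConnectTimeout=5 -o StrictHostKeyChecking=no)

# Real usage would be: ssh "${SSH_OPTS[@]}" "$user@$server" "$command"
# Here we only print the assembled command line:
echo ssh "${SSH_OPTS[@]}" "deploy@web1" "uptime"
```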

&lt;h2&gt;
  
  
  🚀 The Result
&lt;/h2&gt;

&lt;p&gt;Running &lt;code&gt;./multi_ssh.sh "df -h"&lt;/code&gt; gives me a beautiful, color-coded summary of disk space across my entire fleet in seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  📥 Try It Yourself
&lt;/h2&gt;

&lt;p&gt;I've open-sourced this tool along with the setup script that automatically generates SSH keys and configures the Docker containers for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Docker &amp;amp; Docker Compose&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;sshpass&lt;/code&gt; (for the initial setup script)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Installation:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/alanvarghese-dev/Bash_Scripting/tree/main/ssh_multi_server_executor.git

&lt;span class="nb"&gt;cd &lt;/span&gt;ssh-multi-server-executor
./ssh_install.sh  &lt;span class="c"&gt;# Sets up the Docker lab&lt;/span&gt;
./multi_ssh.sh &lt;span class="s2"&gt;"uptime"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me know in the comments if you prefer Bash for these tasks or if you stick to heavier tools like Ansible!&lt;/p&gt;

&lt;p&gt;Happy scripting! 💻✨&lt;/p&gt;

</description>
      <category>bash</category>
      <category>linux</category>
      <category>devops</category>
      <category>automation</category>
    </item>
    <item>
      <title>Automating User Management in Linux with Bash Scripts</title>
      <dc:creator>Alan Varghese</dc:creator>
      <pubDate>Mon, 02 Mar 2026 05:17:16 +0000</pubDate>
      <link>https://forem.com/alanvarghese-dev/automating-user-management-in-linux-with-bash-scripts-d70</link>
      <guid>https://forem.com/alanvarghese-dev/automating-user-management-in-linux-with-bash-scripts-d70</guid>
      <description>&lt;p&gt;As a DevOps engineer or system administrator, you often find yourself performing repetitive tasks. One of the most common is managing user accounts—especially when onboarding a new team or cleaning up after a project. &lt;/p&gt;

&lt;p&gt;Manually running &lt;code&gt;useradd&lt;/code&gt; for 20 people isn't just boring; it's prone to errors. That's why I built a simple &lt;strong&gt;User Management Automation&lt;/strong&gt; tool using Bash.&lt;/p&gt;

&lt;p&gt;In this post, I'll walk you through how these scripts work and how you can use them to streamline your workflow.&lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 The Goal
&lt;/h2&gt;

&lt;p&gt;The objective was to create a system that can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Read a list of usernames from a text file.&lt;/li&gt;
&lt;li&gt;Bulk create users with a default password and force a password change on first login.&lt;/li&gt;
&lt;li&gt;Bulk delete users and their home directories.&lt;/li&gt;
&lt;li&gt;Log every action for auditing purposes.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  🛠️ The Scripts
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. The User List (&lt;code&gt;users.txt&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;Instead of hardcoding names, we use a simple text file. Just add one username per line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dev1
dev2
ronald
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. User Creation (&lt;code&gt;create_users.sh&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;This script handles the heavy lifting of onboarding. It checks if a user exists, creates them if they don't, sets a temporary password, and expires it immediately to ensure security.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="nv"&gt;USER_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"users.txt"&lt;/span&gt;
&lt;span class="nv"&gt;PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"DevOps@1234!"&lt;/span&gt;
&lt;span class="nv"&gt;LOG_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"user_creation.log"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"User Creation Started: &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;

&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;read &lt;/span&gt;USERNAME
&lt;span class="k"&gt;do
    if &lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$USERNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &amp;amp;&amp;gt;/dev/null
    &lt;span class="k"&gt;then
        &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"User &lt;/span&gt;&lt;span class="nv"&gt;$USERNAME&lt;/span&gt;&lt;span class="s2"&gt; already exists"&lt;/span&gt; | &lt;span class="nb"&gt;tee&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;
    &lt;span class="k"&gt;else 
        &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="nv"&gt;$USERNAME&lt;/span&gt;
        &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$USERNAME&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;$PASSWORD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;sudo &lt;/span&gt;chpasswd
        &lt;span class="nb"&gt;sudo &lt;/span&gt;passwd &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;$USERNAME&lt;/span&gt;
        &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"User &lt;/span&gt;&lt;span class="nv"&gt;$USERNAME&lt;/span&gt;&lt;span class="s2"&gt; created succesfully"&lt;/span&gt; | &lt;span class="nb"&gt;tee&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;
    &lt;span class="k"&gt;fi
done&lt;/span&gt; &amp;lt; &lt;span class="nv"&gt;$USER_FILE&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"User Creation Completed: &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;useradd -m&lt;/code&gt;&lt;/strong&gt;: Creates the home directory automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;chpasswd&lt;/code&gt;&lt;/strong&gt;: Efficiently sets passwords from a string.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;passwd -e&lt;/code&gt;&lt;/strong&gt;: Forces the user to change their password at the first login—a crucial security step!&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. User Deletion (&lt;code&gt;del_user.sh&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;When it's time to offboard, this script makes it a one-command job.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="nv"&gt;USER_LIST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"users.txt"&lt;/span&gt;
&lt;span class="nv"&gt;LOG_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"user_deletion.log"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"User Deletion Started: &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;

&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;read &lt;/span&gt;USERNAME
&lt;span class="k"&gt;do 
    if &lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$USERNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &amp;amp;&amp;gt;/dev/null
    &lt;span class="k"&gt;then
        &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;userdel &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$USERNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
        &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"User &lt;/span&gt;&lt;span class="nv"&gt;$USERNAME&lt;/span&gt;&lt;span class="s2"&gt; Deleted Successfully"&lt;/span&gt; | &lt;span class="nb"&gt;tee&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;
    &lt;span class="k"&gt;else
        &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"User &lt;/span&gt;&lt;span class="nv"&gt;$USERNAME&lt;/span&gt;&lt;span class="s2"&gt; does not exist"&lt;/span&gt; | &lt;span class="nb"&gt;tee&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;
    &lt;span class="k"&gt;fi
done&lt;/span&gt; &amp;lt; &lt;span class="nv"&gt;$USER_LIST&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"User Deletion Completed: &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;userdel -r&lt;/code&gt;&lt;/strong&gt;: Removes the user &lt;em&gt;and&lt;/em&gt; their home directory, keeping the system clean.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Handling&lt;/strong&gt;: Checks if the user exists before trying to delete them.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📈 Logging for Auditing
&lt;/h2&gt;

&lt;p&gt;Both scripts generate log files (&lt;code&gt;user_creation.log&lt;/code&gt; and &lt;code&gt;user_deletion.log&lt;/code&gt;). This is essential for tracking who was created and when, which is a standard requirement in production environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 How to Use It
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Clone the repository.&lt;/li&gt;
&lt;li&gt;Populate &lt;code&gt;users.txt&lt;/code&gt; with your desired usernames.&lt;/li&gt;
&lt;li&gt;Make the scripts executable: &lt;code&gt;chmod +x *.sh&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;./create_users.sh&lt;/code&gt; to onboard or &lt;code&gt;./del_user.sh&lt;/code&gt; to offboard.&lt;/li&gt;
&lt;/ol&gt;
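&lt;p&gt;After onboarding, a quick sanity check confirms the accounts actually exist. This sketch uses &lt;code&gt;getent&lt;/code&gt;, which queries the system user database (it is not part of the original scripts; the second username is hypothetical):&lt;/p&gt;

```shell
#!/bin/bash
# Report whether an account exists, using getent as an alternative to id.
check_user() {
    if getent passwd "$1" > /dev/null; then
        echo "OK: $1"
    else
        echo "MISSING: $1"
    fi
}

check_user root          # present on any Linux system
check_user ghost_user    # hypothetical name, expected to be missing
```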

&lt;h2&gt;
  
  
  🔒 Security Note
&lt;/h2&gt;

&lt;p&gt;For this demonstration, the password is hardcoded. In a real-world production scenario, you should consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using an environment variable for the default password.&lt;/li&gt;
&lt;li&gt;Using a secret management tool.&lt;/li&gt;
&lt;li&gt;Prompting for a password during script execution.&lt;/li&gt;
&lt;/ul&gt;
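&lt;p&gt;For example, the hardcoded password could be replaced by an environment variable with an interactive fallback. A minimal sketch (&lt;code&gt;DEFAULT_PASSWORD&lt;/code&gt; is a hypothetical variable name, not part of the original scripts):&lt;/p&gt;

```shell
#!/bin/bash
# Resolve the default password from the environment; prompt only if unset.
get_password() {
    if [ -n "$DEFAULT_PASSWORD" ]; then
        printf '%s' "$DEFAULT_PASSWORD"
    else
        # -s hides the typed characters
        read -r -s -p "Default password for new users: " pw
        printf '%s' "$pw"
    fi
}

DEFAULT_PASSWORD="Demo@123"      # demo value; in practice, inject via CI secrets
PASSWORD=$(get_password)
echo "Password resolved (${#PASSWORD} characters)"
```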

&lt;h2&gt;
  
  
  🏁 Conclusion
&lt;/h2&gt;

&lt;p&gt;Bash scripting is a superpower for any Linux user. With just a few lines of code, we turned a tedious manual process into a reliable, logged, and automated workflow.&lt;/p&gt;

&lt;p&gt;How do you handle user management in your environment? Let me know in the comments!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Check out the full project on my &lt;a href="https://github.com/alanvarghese-dev/Bash_Scripting/tree/main/user_management_automation" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>bash</category>
      <category>automation</category>
      <category>linux</category>
    </item>
    <item>
      <title>Automate Your Server Maintenance: A Simple Bash Script for Log Cleanup</title>
      <dc:creator>Alan Varghese</dc:creator>
      <pubDate>Sat, 28 Feb 2026 04:34:12 +0000</pubDate>
      <link>https://forem.com/alanvarghese-dev/automate-your-server-maintenance-a-simple-bash-script-for-log-cleanup-lm4</link>
      <guid>https://forem.com/alanvarghese-dev/automate-your-server-maintenance-a-simple-bash-script-for-log-cleanup-lm4</guid>
      <description>&lt;p&gt;Tired of logs eating up your disk space? Learn how to build a simple, automated log cleanup script using Bash.&lt;/p&gt;

&lt;p&gt;Every developer or sysadmin has been there: you log into a server to investigate an issue, only to find that the disk is 100% full. More often than not, the culprit is a mountain of old log files that haven't been touched in months.&lt;/p&gt;

&lt;p&gt;While tools like &lt;code&gt;logrotate&lt;/code&gt; are the industry standard, sometimes you need a lightweight, custom solution that you can understand and deploy in seconds. &lt;/p&gt;

&lt;p&gt;In this post, I'll walk you through a simple Bash script I built to automate log cleanup and keep your storage breathing easy.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Applications generate logs—lots of them. Whether it's access logs, error logs, or debug traces, these files grow silently. Without a retention policy, they will eventually consume all available disk space, potentially crashing your applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: &lt;code&gt;auto_clean_log.sh&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;I created a script that identifies &lt;code&gt;.log&lt;/code&gt; files older than a specific number of days, deletes them, and generates a report of the actions taken.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Code
&lt;/h3&gt;

&lt;p&gt;Here is the core of the script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="c"&gt;# Configuration&lt;/span&gt;
&lt;span class="nv"&gt;LOG_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/path/to/your/logs"&lt;/span&gt;
&lt;span class="nv"&gt;RETENTION_DAYS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;7
&lt;span class="nv"&gt;REPORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"cleanup_report.log"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"------ Log Cleanup Report &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; ---------"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$REPORT&lt;/span&gt;

&lt;span class="c"&gt;# Find files older than X days&lt;/span&gt;
&lt;span class="nv"&gt;FILES&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;find &lt;span class="nv"&gt;$LOG_DIR&lt;/span&gt; &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"*.log"&lt;/span&gt; &lt;span class="nt"&gt;-type&lt;/span&gt; f &lt;span class="nt"&gt;-mtime&lt;/span&gt; +&lt;span class="nv"&gt;$RETENTION_DAYS&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="nv"&gt;COUNT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$FILES&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"No old log files found."&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$REPORT&lt;/span&gt;
&lt;span class="k"&gt;else
    for &lt;/span&gt;file &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nv"&gt;$FILES&lt;/span&gt;
    &lt;span class="k"&gt;do 
        &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Deleting &lt;/span&gt;&lt;span class="nv"&gt;$file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$REPORT&lt;/span&gt;
        &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nv"&gt;$file&lt;/span&gt;
        &lt;span class="nv"&gt;COUNT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt;COUNT+1&lt;span class="k"&gt;))&lt;/span&gt;
    &lt;span class="k"&gt;done
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$COUNT&lt;/span&gt;&lt;span class="s2"&gt; files deleted."&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$REPORT&lt;/span&gt;
&lt;span class="k"&gt;fi
&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"---------------------------------------------------"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$REPORT&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;find $LOG_DIR -name "*.log" -type f -mtime +$RETENTION_DAYS&lt;/code&gt;&lt;/strong&gt;: This is the heart of the script.

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;-name "*.log"&lt;/code&gt;: Targets only log files.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;-type f&lt;/code&gt;: Ensures we only look at files, not directories.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;-mtime +$RETENTION_DAYS&lt;/code&gt;: Filters for files last modified more than &lt;code&gt;RETENTION_DAYS&lt;/code&gt; days ago.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;The Loop&lt;/strong&gt;: It iterates through the found files, deletes them using &lt;code&gt;rm -f&lt;/code&gt;, and tracks the count.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Reporting&lt;/strong&gt;: Every action is appended to &lt;code&gt;cleanup_report.log&lt;/code&gt;, giving you a clear audit trail of what happened and when.&lt;/li&gt;
&lt;/ol&gt;
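&lt;p&gt;One caveat: capturing the &lt;code&gt;find&lt;/code&gt; output in a variable and looping over it word-splits on whitespace, so a file named &lt;code&gt;old app.log&lt;/code&gt; would break the loop. A space-safe variant (a sketch under the same configuration, using GNU &lt;code&gt;touch -d&lt;/code&gt; for the demo files) lets &lt;code&gt;find&lt;/code&gt; delete matches itself:&lt;/p&gt;

```shell
#!/bin/bash
# Space-safe cleanup: find handles matching and deletion in one pass.
LOG_DIR=$(mktemp -d)             # demo directory; point this at your real logs
RETENTION_DAYS=7

touch -d "10 days ago" "$LOG_DIR/old app.log"   # stale file, name has a space
touch "$LOG_DIR/fresh.log"                      # recent file, should survive

find "$LOG_DIR" -name "*.log" -type f -mtime +"$RETENTION_DAYS" -delete

ls "$LOG_DIR"                    # only fresh.log remains
```

&lt;p&gt;&lt;em&gt;Adding &lt;code&gt;-print&lt;/code&gt; before &lt;code&gt;-delete&lt;/code&gt; preserves the report behaviour: each deleted path is printed and can be appended to the report file.&lt;/em&gt;&lt;/p&gt;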

&lt;h2&gt;
  
  
  Setup &amp;amp; Automation
&lt;/h2&gt;

&lt;p&gt;To make this truly "set it and forget it," follow these steps:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Make it Executable
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x auto_clean_log.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Automate with Cron
&lt;/h3&gt;

&lt;p&gt;You shouldn't have to run this manually. Add it to your &lt;code&gt;crontab&lt;/code&gt; to run every night at midnight:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;crontab &lt;span class="nt"&gt;-e&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add this line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;0 0 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /path/to/your/script/auto_clean_log.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Prevents disk I/O issues related to full partitions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost&lt;/strong&gt;: Reduces storage costs on cloud providers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security/Compliance&lt;/strong&gt;: Helps adhere to data retention policies by ensuring old logs aren't kept indefinitely.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Automation doesn't always require complex tools or heavy frameworks. Sometimes, a 20-line Bash script is exactly what you need to solve a recurring headache.&lt;/p&gt;

&lt;p&gt;Feel free to check out the full project on my &lt;a href="https://github.com/alanvarghese-dev/Bash_Scripting/tree/main/Automatic_log_Cleanup" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;How do you handle log management in your environment? Let me know in the comments!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>bash</category>
      <category>automation</category>
      <category>systemadmin</category>
    </item>
    <item>
      <title>The Evolution of a Bash Tool: Comparing 3 Levels of Log Analysers</title>
      <dc:creator>Alan Varghese</dc:creator>
      <pubDate>Fri, 20 Feb 2026 17:44:11 +0000</pubDate>
      <link>https://forem.com/alanvarghese-dev/the-evolution-of-a-bash-tool-comparing-3-levels-of-log-analysers-4l7p</link>
      <guid>https://forem.com/alanvarghese-dev/the-evolution-of-a-bash-tool-comparing-3-levels-of-log-analysers-4l7p</guid>
      <description>&lt;p&gt;Log analysis is a bread and butter task for any DevOps engineer or SysAdmin. While there are massive enterprise tools for this, sometimes a quick Bash script is all you need.&lt;/p&gt;

&lt;p&gt;In this post, I’ll be comparing three versions of a Bash based Log Analyser I've been working on, showing the transition from a "beginner" script to a "professional grade" CLI tool.&lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 The Three Contenders
&lt;/h2&gt;

&lt;p&gt;We have three distinct projects in the repository:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;log_analyser_simple&lt;/code&gt;&lt;/strong&gt;: The "Keep It Simple, Stupid" (KISS) approach.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;log_analyser_scrpt&lt;/code&gt;&lt;/strong&gt;: The "Professional" upgrade with flags and colors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;log_analyser_adv&lt;/code&gt;&lt;/strong&gt;: The "Advanced" version with robust validation and modularity.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  1. &lt;code&gt;log_analyser_simple&lt;/code&gt;: The Bare Essentials
&lt;/h2&gt;

&lt;p&gt;This version is perfect for anyone just starting with Bash. It focuses purely on the logic of parsing a file without the "bells and whistles" of a CLI.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Logic:&lt;/strong&gt; Uses a simple &lt;code&gt;for&lt;/code&gt; loop over hardcoded levels (&lt;code&gt;ERROR&lt;/code&gt;, &lt;code&gt;WARNING&lt;/code&gt;, &lt;code&gt;INFO&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Argument Handling:&lt;/strong&gt; Uses positional parameters (&lt;code&gt;$1&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Key Feature:&lt;/strong&gt; Identifies top unique messages using a classic &lt;code&gt;grep | awk | sort | uniq | head&lt;/code&gt; pipeline.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Learning the basics of text processing in Unix.&lt;/p&gt;
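&lt;p&gt;That pipeline, in its classic form (a sketch; the &lt;code&gt;awk&lt;/code&gt; field handling assumes lines shaped like &lt;code&gt;LEVEL message...&lt;/code&gt; and will need adjusting for your log format):&lt;/p&gt;

```shell
#!/bin/bash
# Top-N unique error messages via the classic Unix pipeline.
logfile=$(mktemp)
printf '%s\n' \
    "ERROR disk full" \
    "ERROR disk full" \
    "ERROR timeout" \
    "INFO started" > "$logfile"

# grep filters the level, awk drops it, sort/uniq -c counts duplicates,
# sort -rn ranks by frequency, head keeps the top entries.
grep "^ERROR" "$logfile" | awk '{$1=""; print}' | sort | uniq -c | sort -rn | head -5
```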




&lt;h2&gt;
  
  
  2. &lt;code&gt;log_analyser_scrpt&lt;/code&gt;: The "Professional" Setup
&lt;/h2&gt;

&lt;p&gt;This is where the script starts feeling like a real tool. It moves away from positional arguments and introduces a much better user experience.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;CLI Experience:&lt;/strong&gt; Uses &lt;code&gt;getopts&lt;/code&gt; to handle flags like &lt;code&gt;-f&lt;/code&gt; (file), &lt;code&gt;-s&lt;/code&gt; (search), and &lt;code&gt;-o&lt;/code&gt; (output).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Visuals:&lt;/strong&gt; Adds ANSI color coding (&lt;code&gt;RED&lt;/code&gt;, &lt;code&gt;GREEN&lt;/code&gt;, &lt;code&gt;BLUE&lt;/code&gt;) to make the terminal output readable at a glance.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Analytics:&lt;/strong&gt; Introduces &lt;strong&gt;User Activity Tracking&lt;/strong&gt;, extracting usernames from logs to see who the "noisiest" users are.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Reporting:&lt;/strong&gt; Uses &lt;code&gt;tee&lt;/code&gt; to show results on screen while simultaneously saving them to a file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Daily use in a development environment where you need quick, readable reports.&lt;/p&gt;
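&lt;p&gt;The &lt;code&gt;getopts&lt;/code&gt; pattern behind those flags looks roughly like this (a sketch of the idiom, not the script's actual code; the demo arguments are set inline with &lt;code&gt;set --&lt;/code&gt;):&lt;/p&gt;

```shell
#!/bin/bash
# getopts skeleton for -f (file), -s (search), -o (output).
set -- -f app.log -s Database -o report.txt   # demo arguments

file="" search="" output=""
while getopts "f:s:o:" opt; do
    case "$opt" in
        f) file="$OPTARG" ;;
        s) search="$OPTARG" ;;
        o) output="$OPTARG" ;;
        *) echo "Usage: $0 -f file [-s keyword] [-o report]"; exit 1 ;;
    esac
done

echo "file=$file search=$search output=$output"
```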




&lt;h2&gt;
  
  
  3. &lt;code&gt;log_analyser_adv&lt;/code&gt;: The "Production Ready" Tool
&lt;/h2&gt;

&lt;p&gt;The advanced version takes the professional version and hardens it. It’s built for robustness and edge case handling.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Enhanced Scope:&lt;/strong&gt; Adds &lt;code&gt;DEBUG&lt;/code&gt; and &lt;code&gt;FATAL&lt;/code&gt; log levels.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Validation:&lt;/strong&gt; Includes strict validation for inputs. For example, it checks if the provided log level via the &lt;code&gt;-l&lt;/code&gt; flag is actually valid before running.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Modularity:&lt;/strong&gt; Uses functions to organize logic, making the code much easier to maintain.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Helpful:&lt;/strong&gt; Includes a proper help message (&lt;code&gt;-h&lt;/code&gt;)—a must-have for any shared tool.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Sharing with a team or using in automated cron jobs where validation is critical.&lt;/p&gt;
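&lt;p&gt;The level-validation idea can be sketched as a small lookup function (hypothetical code illustrating the check, not the script itself):&lt;/p&gt;

```shell
#!/bin/bash
# Reject an unknown -l value before doing any work.
VALID_LEVELS="INFO WARNING ERROR DEBUG FATAL"

is_valid_level() {
    local lvl
    for lvl in $VALID_LEVELS; do
        if [ "$1" = "$lvl" ]; then
            return 0
        fi
    done
    return 1
}

if is_valid_level "TRACE"; then
    echo "TRACE accepted"
else
    echo "Invalid level: TRACE (choose from: $VALID_LEVELS)"
fi
```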




&lt;h2&gt;
  
  
  📊 Feature Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Simple&lt;/th&gt;
&lt;th&gt;Professional&lt;/th&gt;
&lt;th&gt;Advanced&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Argument Parsing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Positional&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;getopts&lt;/code&gt; (Flags)&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;getopts&lt;/code&gt; + Validation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Colors&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Search Function&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Report Export&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Log Levels&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;User Tracking&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Help Menu&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  💡 Which one should you use?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use &lt;code&gt;simple&lt;/code&gt;&lt;/strong&gt; if you are learning Bash and want to understand how &lt;code&gt;grep&lt;/code&gt; and &lt;code&gt;awk&lt;/code&gt; pipelines work.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use &lt;code&gt;professional&lt;/code&gt;&lt;/strong&gt; if you want a tool that "just works" and looks good in your terminal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use &lt;code&gt;advanced&lt;/code&gt;&lt;/strong&gt; if you need a reliable, validated tool that can handle various log levels and provide a clean help interface.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The journey from a simple loop to a modular, flag driven CLI tool is a great way to master Bash scripting. It shows that even small scripts can be evolved into powerful utilities by focusing on user experience, error handling, and modularity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's your favorite Bash trick for log parsing? Let me know in the comments!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>bash</category>
      <category>devops</category>
      <category>automation</category>
      <category>scripting</category>
    </item>
    <item>
      <title>Building a Robust Log Analyzer with Bash: From Messy Logs to Actionable Insights</title>
      <dc:creator>Alan Varghese</dc:creator>
      <pubDate>Fri, 20 Feb 2026 16:55:59 +0000</pubDate>
      <link>https://forem.com/alanvarghese-dev/building-a-robust-log-analyzer-with-bash-from-messy-logs-to-actionable-insights-438a</link>
      <guid>https://forem.com/alanvarghese-dev/building-a-robust-log-analyzer-with-bash-from-messy-logs-to-actionable-insights-438a</guid>
      <description>&lt;p&gt;As developers and DevOps engineers, we often find ourselves staring at massive log files, trying to pinpoint that one elusive error or understand user behavior. While there are enterprise-grade tools like ELK or Datadog, sometimes you just need a lightweight, fast, and portable solution right in your terminal.&lt;/p&gt;

&lt;p&gt;That's why I built the &lt;strong&gt;Professional Bash Log Analyzer&lt;/strong&gt;. In this post, I'll walk you through how it works and how you can use it to make sense of your logs in seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 The Problem
&lt;/h2&gt;

&lt;p&gt;Traditional &lt;code&gt;grep&lt;/code&gt; and &lt;code&gt;awk&lt;/code&gt; commands are powerful, but chaining them together every time you want a summary can be tedious. I wanted a tool that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Provides a quick summary of log levels.&lt;/li&gt;
&lt;li&gt;Identifies the most frequent errors.&lt;/li&gt;
&lt;li&gt;Tracks the most active users.&lt;/li&gt;
&lt;li&gt;Filters by specific levels or keywords.&lt;/li&gt;
&lt;li&gt;Works out of the box on any Linux/macOS system.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  ✨ Key Features
&lt;/h2&gt;

&lt;p&gt;My script, &lt;code&gt;log_analyser.sh&lt;/code&gt;, comes packed with features designed for real-world use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Colorized CLI Output:&lt;/strong&gt; Highlighting errors in red and info in green makes reports instantly readable.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Log Level Filtering:&lt;/strong&gt; Support for &lt;code&gt;INFO&lt;/code&gt;, &lt;code&gt;WARNING&lt;/code&gt;, &lt;code&gt;ERROR&lt;/code&gt;, &lt;code&gt;DEBUG&lt;/code&gt;, and &lt;code&gt;FATAL&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Keyword Search:&lt;/strong&gt; Quickly find specific entries (e.g., "Database" or "Timeout").&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Automated Summaries:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  Total entry count.&lt;/li&gt;
&lt;li&gt;  Breakdown by log level.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Top 5 most frequent error messages.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Top 5 most active users.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Report Export:&lt;/strong&gt; Easily save your analysis to a text file for sharing or auditing.&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Professional CLI Experience:&lt;/strong&gt; Built using &lt;code&gt;getopts&lt;/code&gt; for robust argument parsing.&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  🛠 How It Works
&lt;/h2&gt;

&lt;p&gt;The script follows a clean, modular structure. Here's a look at how it handles the core analysis logic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;analyze&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BLUE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;--- Analysis Report for: &lt;/span&gt;&lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;&lt;span class="s2"&gt; ---&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NC&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Generated on: &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    &lt;span class="c"&gt;# ... Count log levels ...&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RED&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;ERROR:   &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"ERROR"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;wc&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NC&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    &lt;span class="c"&gt;# Extract Top 5 Error Messages&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"
&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RED&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;--- Top 5 Error Messages ---&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NC&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"ERROR"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{ $1=$2=$3=""; print $0 }'&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; | &lt;span class="nb"&gt;uniq&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; &lt;span class="nt"&gt;-nr&lt;/span&gt; | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 5

    &lt;span class="c"&gt;# Identify Top 5 Active Users&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"
&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GREEN&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;--- Top 5 Active Users ---&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NC&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"User"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{ print $5 }'&lt;/span&gt; | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"'"&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; | &lt;span class="nb"&gt;uniq&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; &lt;span class="nt"&gt;-nr&lt;/span&gt; | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 5
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Smart Parsing with &lt;code&gt;getopts&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;I used &lt;code&gt;getopts&lt;/code&gt; to ensure the tool feels like a standard Linux utility. You can mix and match flags effortlessly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./log_analyser.sh &lt;span class="nt"&gt;-f&lt;/span&gt; sample.log &lt;span class="nt"&gt;-l&lt;/span&gt; ERROR &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="s2"&gt;"Database"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; report.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
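&lt;p&gt;The full parsing loop isn't shown in this post, so here is a rough sketch of what a &lt;code&gt;getopts&lt;/code&gt; loop for the &lt;code&gt;-f&lt;/code&gt;/&lt;code&gt;-l&lt;/code&gt;/&lt;code&gt;-s&lt;/code&gt;/&lt;code&gt;-o&lt;/code&gt; flags might look like (variable names are illustrative; &lt;code&gt;set --&lt;/code&gt; fakes a command line so the snippet runs standalone):&lt;/p&gt;

```shell
# Sketch of a getopts loop for the -f/-l/-s/-o flags shown above.
# "set --" simulates the user's command line for demonstration only.
set -- -f app.log -l ERROR

usage() { echo "Usage: $0 -f <logfile> [-l LEVEL] [-s keyword] [-o report]" >&2; exit 1; }

while getopts "f:l:s:o:" opt; do
    case $opt in
        f) LOG_FILE=$OPTARG ;;
        l) LEVEL=$OPTARG ;;
        s) SEARCH_KEY=$OPTARG ;;
        o) OUTPUT_FILE=$OPTARG ;;
        *) usage ;;
    esac
done

# A log file argument is mandatory; fail early if it is missing.
[ -n "$LOG_FILE" ] || usage
echo "Analyzing $LOG_FILE (level=${LEVEL:-ALL})"   # prints: Analyzing app.log (level=ERROR)
```

&lt;p&gt;Because every flag in the option string is followed by a colon, &lt;code&gt;getopts&lt;/code&gt; enforces that each one takes an argument, which is what makes the tool feel like a standard Linux utility.&lt;/p&gt;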



&lt;h2&gt;
  
  
  📖 Usage Examples
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Basic Analysis
&lt;/h3&gt;

&lt;p&gt;Get a quick overview of your log file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./log_analyser.sh &lt;span class="nt"&gt;-f&lt;/span&gt; server.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Filter for Critical Issues
&lt;/h3&gt;

&lt;p&gt;Focus only on ERROR logs and search for specific failure points:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./log_analyser.sh &lt;span class="nt"&gt;-f&lt;/span&gt; server.log &lt;span class="nt"&gt;-l&lt;/span&gt; ERROR &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="s2"&gt;"Connection"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Generate a Permanent Report
&lt;/h3&gt;

&lt;p&gt;Save the output to a file while still seeing it in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./log_analyser.sh &lt;span class="nt"&gt;-f&lt;/span&gt; server.log &lt;span class="nt"&gt;-o&lt;/span&gt; daily_report.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
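&lt;p&gt;One common way to get "file and terminal at the same time" is &lt;code&gt;tee&lt;/code&gt; (a sketch of the idea, not necessarily how &lt;code&gt;log_analyser.sh&lt;/code&gt; implements it internally):&lt;/p&gt;

```shell
# Sketch: send the report to stdout and a file simultaneously.
# report_body stands in for the real analysis output.
report_body="Total log entries: 3"
OUTPUT_FILE=$(mktemp)

printf '%s\n' "$report_body" | tee "$OUTPUT_FILE"
```

&lt;p&gt;&lt;code&gt;tee&lt;/code&gt; copies its stdin to stdout and to every file argument, so the report stays visible while being archived.&lt;/p&gt;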



&lt;h2&gt;
  
  
  🧠 Lessons Learned
&lt;/h2&gt;

&lt;p&gt;Developing this tool reinforced a few core Bash principles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Modularity:&lt;/strong&gt; Using functions like &lt;code&gt;analyze()&lt;/code&gt; and &lt;code&gt;usage()&lt;/code&gt; makes the script much easier to maintain.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Validation:&lt;/strong&gt; Always validate user input. Checking if the file exists and if the log level is valid prevents cryptic shell errors later.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;UX Matters:&lt;/strong&gt; Adding ANSI color codes might seem small, but it significantly improves the user experience when scanning through data.&lt;/li&gt;
&lt;/ul&gt;
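&lt;p&gt;The validation point can be made concrete with a small guard function (a sketch; the function name and level list are assumptions):&lt;/p&gt;

```shell
VALID_LEVELS="INFO WARNING ERROR DEBUG FATAL"

# Return 0 if the file exists and the level is recognized;
# print a readable error and return 1 otherwise.
validate() {
    [ -f "$1" ] || { echo "Error: file '$1' not found" >&2; return 1; }
    case " $VALID_LEVELS " in
        *" $2 "*) return 0 ;;
        *) echo "Error: unknown log level '$2'" >&2; return 1 ;;
    esac
}
```

&lt;p&gt;Failing fast with a named error beats letting &lt;code&gt;grep&lt;/code&gt; print a cryptic message three pipes deep.&lt;/p&gt;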

&lt;h2&gt;
  
  
  🎁 Wrap Up
&lt;/h2&gt;

&lt;p&gt;This Log Analyzer is open-source and ready for you to tweak! Whether you're debugging a microservice or monitoring a legacy server, I hope this tool saves you some "grep-ping" time.&lt;/p&gt;

&lt;p&gt;Check out the code and let me know what features you'd add next!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Follow me for more DevOps and automation tips!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>bash</category>
      <category>automation</category>
      <category>scripting</category>
    </item>
    <item>
      <title>Stop Reading Logs Manually: Build a Professional Log Analyzer in Bash</title>
      <dc:creator>Alan Varghese</dc:creator>
      <pubDate>Fri, 20 Feb 2026 16:53:20 +0000</pubDate>
      <link>https://forem.com/alanvarghese-dev/stop-reading-logs-manually-build-a-professional-log-analyzer-in-bash-5ee3</link>
      <guid>https://forem.com/alanvarghese-dev/stop-reading-logs-manually-build-a-professional-log-analyzer-in-bash-5ee3</guid>
<description>&lt;p&gt;We've all been there: staring at a massive log file, trying to figure out why a service is failing or which user is causing the most errors. Manually searching through thousands of lines using &lt;code&gt;less&lt;/code&gt; or &lt;code&gt;grep&lt;/code&gt; is tedious and error-prone.&lt;/p&gt;

&lt;p&gt;In this post, I'll show you how I built a &lt;strong&gt;Professional Log Analyzer&lt;/strong&gt; using Bash. It's lightweight, color-coded, and gives you instant insights into your application's health.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 The Problem
&lt;/h2&gt;

&lt;p&gt;Modern applications generate a lot of data. When things go wrong, you need answers fast:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How many errors happened in the last hour?&lt;/li&gt;
&lt;li&gt;Which error is the most frequent?&lt;/li&gt;
&lt;li&gt;Which users are most active (or causing the most trouble)?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🛠️ The Solution: &lt;code&gt;log_analyzer.sh&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;I developed a script that transforms messy log data into a structured, readable report. Here are the core features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automated Summaries:&lt;/strong&gt; Instantly counts &lt;code&gt;INFO&lt;/code&gt;, &lt;code&gt;WARNING&lt;/code&gt;, and &lt;code&gt;ERROR&lt;/code&gt; levels.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Ranking:&lt;/strong&gt; Shows the Top 5 most frequent error messages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User Activity Tracking:&lt;/strong&gt; Identifies the Top 5 most active users.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Keyword Search:&lt;/strong&gt; Quickly filter logs for specific issues (e.g., "Database" or "Timeout").&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Professional Output:&lt;/strong&gt; Uses ANSI color codes for readability and supports saving reports to a file.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  💻 How It Works
&lt;/h2&gt;

&lt;p&gt;The script uses standard Unix utilities (&lt;code&gt;awk&lt;/code&gt;, &lt;code&gt;grep&lt;/code&gt;, &lt;code&gt;sort&lt;/code&gt;, &lt;code&gt;uniq&lt;/code&gt;) and &lt;code&gt;getopts&lt;/code&gt; for a professional CLI experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Parsing Arguments with &lt;code&gt;getopts&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;I used &lt;code&gt;getopts&lt;/code&gt; to handle command-line flags, making the script feel like a real tool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;getopts&lt;/span&gt; &lt;span class="s2"&gt;"f:s:o:"&lt;/span&gt; opt&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    case&lt;/span&gt; &lt;span class="nv"&gt;$opt&lt;/span&gt; &lt;span class="k"&gt;in
        &lt;/span&gt;f&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;LOG_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$OPTARG&lt;/span&gt; &lt;span class="p"&gt;;;&lt;/span&gt;
        s&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;SEARCH_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$OPTARG&lt;/span&gt; &lt;span class="p"&gt;;;&lt;/span&gt;
        o&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;OUTPUT_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$OPTARG&lt;/span&gt; &lt;span class="p"&gt;;;&lt;/span&gt;
        &lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; usage &lt;span class="p"&gt;;;&lt;/span&gt;
    &lt;span class="k"&gt;esac&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
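&lt;p&gt;The &lt;code&gt;usage&lt;/code&gt; function invoked by the &lt;code&gt;*)&lt;/code&gt; branch isn't shown above; a minimal version might look like this (the exact help text is an assumption):&lt;/p&gt;

```shell
# Print a help message and exit non-zero so callers (and cron jobs)
# can detect the bad invocation.
usage() {
    cat <<'EOF' >&2
Usage: log_analyzer.sh -f <logfile> [-s <keyword>] [-o <output_file>]
  -f  Log file to analyze (required)
  -s  Filter entries by keyword
  -o  Save the report to a file
EOF
    exit 1
}
```

&lt;p&gt;Writing the help text to stderr keeps it out of any report you might be piping from stdout.&lt;/p&gt;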



&lt;h3&gt;
  
  
  The Analysis Logic
&lt;/h3&gt;

&lt;p&gt;The heart of the script lies in combining piped commands. For example, to find the most active users:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"User"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{ print $5 }'&lt;/span&gt; | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"'"&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; | &lt;span class="nb"&gt;uniq&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; &lt;span class="nt"&gt;-nr&lt;/span&gt; | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This single line searches for user entries, extracts the username, cleans it up, counts occurrences, sorts them, and grabs the top 5.&lt;/p&gt;
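&lt;p&gt;To see that one-liner in action, here it is run against three hand-written sample lines (the log format, with the quoted username in field 5, is an assumption for the demo):&lt;/p&gt;

```shell
# Build a tiny throwaway log in the assumed format:
#   date time LEVEL User 'name' message...
LOG_FILE=$(mktemp)
cat > "$LOG_FILE" <<'EOF'
2026-02-20 10:00:01 INFO User 'alice' logged in
2026-02-20 10:00:02 INFO User 'bob' logged in
2026-02-20 10:00:03 ERROR User 'alice' failed upload
EOF

# The same pipeline as above, captured so it can be inspected.
top_users=$(grep -i "User" "$LOG_FILE" | awk '{ print $5 }' | tr -d "'" \
    | sort | uniq -c | sort -nr | head -n 5)
echo "$top_users"    # alice counted twice, bob once
rm -f "$LOG_FILE"
```

&lt;p&gt;The &lt;code&gt;sort | uniq -c | sort -nr&lt;/code&gt; combination is the classic Unix frequency counter: group identical lines, count them, then rank by count.&lt;/p&gt;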

&lt;h2&gt;
  
  
  📊 Sample Output
&lt;/h2&gt;

&lt;p&gt;When you run the script, you get a clean, colorized report:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--- Analysis Report for: sample.log ---
Generated on: Fri Feb 20 14:30:00 UTC 2026
Total log entries: 1250

--- Log Level Counts ---
INFO:    850
WARNING: 300
ERROR:   100

--- Top 5 Error Messages ---
  45 Connection timeout to database
  20 Disk space low
  15 Invalid API key
  10 Unauthorized access attempt
   5 Cache sync failed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🧠 What I Learned
&lt;/h2&gt;

&lt;p&gt;Building this tool reinforced several key concepts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The Power of Pipes:&lt;/strong&gt; Unix pipes are incredibly efficient for processing text data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CLI UX Matters:&lt;/strong&gt; Adding colors and clear flag-based arguments makes a script much more usable for other developers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regex is your Friend:&lt;/strong&gt; Using &lt;code&gt;grep&lt;/code&gt; and &lt;code&gt;awk&lt;/code&gt; effectively can replace complex Python or Node.js scripts for simple log processing.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  📂 Try it Yourself!
&lt;/h2&gt;

&lt;p&gt;If you want to automate your own log analysis, check out the project structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;log_analyzer.sh&lt;/code&gt;: The main engine.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sample.log&lt;/code&gt;: For testing your regex and logic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Question for you:&lt;/strong&gt; How do you currently handle log analysis in your workflow? Do you use a full ELK stack, or do you have some "secret sauce" Bash scripts of your own?&lt;/p&gt;

&lt;p&gt;Let's discuss in the comments! 👇&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you found this helpful, feel free to give it a ❤️ and follow for more DevOps and scripting tips!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>automation</category>
      <category>bash</category>
      <category>scripting</category>
    </item>
    <item>
      <title>Automate Your Log Analysis with This Simple Bash Script</title>
      <dc:creator>Alan Varghese</dc:creator>
      <pubDate>Fri, 20 Feb 2026 16:50:22 +0000</pubDate>
      <link>https://forem.com/alanvarghese-dev/automate-your-log-analysis-with-this-simple-bash-script-5857</link>
      <guid>https://forem.com/alanvarghese-dev/automate-your-log-analysis-with-this-simple-bash-script-5857</guid>
      <description>&lt;p&gt;If you've ever spent too much time staring at a wall of text in a log file, trying to figure out why your application is crashing, this post is for you.&lt;/p&gt;

&lt;p&gt;While there are powerful tools like ELK or Datadog for enterprise-scale logging, sometimes you just need something quick, local, and no-nonsense to parse a log file on your machine or a remote server.&lt;/p&gt;

&lt;p&gt;Today, I'm sharing a simple &lt;strong&gt;Log Analyser&lt;/strong&gt; script I built in Bash that categorizes logs and surfaces the most frequent issues automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Scanning logs manually is tedious and error-prone. You might &lt;code&gt;grep "ERROR" file.log&lt;/code&gt; and then realize you have 500 lines of the same database connection error, hiding a single "File not found" error that is the &lt;em&gt;actual&lt;/em&gt; root cause.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: &lt;code&gt;log_analyser.sh&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;I created a lightweight script that does three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Counts&lt;/strong&gt; total occurrences of INFO, WARNING, and ERROR levels.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Aggregates&lt;/strong&gt; unique messages so you can see which specific error is occurring most often.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Displays&lt;/strong&gt; a clean summary.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  The Code
&lt;/h3&gt;

&lt;p&gt;Here is the heart of the script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="c"&gt;# ... basic validation logic ...&lt;/span&gt;

&lt;span class="nv"&gt;LOG_LEVELS&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;&lt;span class="s2"&gt;"ERROR"&lt;/span&gt; &lt;span class="s2"&gt;"WARNING"&lt;/span&gt; &lt;span class="s2"&gt;"INFO"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;for &lt;/span&gt;LEVEL &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;LOG_LEVELS&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;&lt;span class="nv"&gt;COUNT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-ic&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LEVEL&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"
[&lt;/span&gt;&lt;span class="nv"&gt;$LEVEL&lt;/span&gt;&lt;span class="s2"&gt;] Total occurrences: &lt;/span&gt;&lt;span class="nv"&gt;$COUNT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$COUNT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-gt&lt;/span&gt; 0 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Top unique &lt;/span&gt;&lt;span class="nv"&gt;$LEVEL&lt;/span&gt;&lt;span class="s2"&gt; messages:"&lt;/span&gt;
    &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LEVEL&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{$1=$2=$3=""; print $0}'&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s/^[    ]*//'&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; | &lt;span class="nb"&gt;uniq&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; &lt;span class="nt"&gt;-rn&lt;/span&gt; | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 5
  &lt;span class="k"&gt;fi
done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  How It Works: The "Power Pipeline"
&lt;/h3&gt;

&lt;p&gt;The most interesting part of this script is the command pipeline used to extract unique messages:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;grep -i "$LEVEL" "$LOG_FILE" | awk '{$1=$2=$3=""; print $0}' | sed 's/^[  ]*//' | sort | uniq -c | sort -rn | head -n 5&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let's break it down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;grep -i "$LEVEL"&lt;/code&gt;: Finds the log level (case-insensitive).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;awk '{$1=$2=$3=""; print $0}'&lt;/code&gt;: This is a neat trick! It clears the first three fields (usually Date, Time, and Level) so we only look at the actual message content.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sed 's/^[   ]*//'&lt;/code&gt;: Trims the leading whitespace left behind by &lt;code&gt;awk&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sort | uniq -c&lt;/code&gt;: Sorts the messages and then counts how many times each unique message appears.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sort -rn&lt;/code&gt;: Sorts the results numerically in reverse order (highest count first).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;head -n 5&lt;/code&gt;: Only shows us the top 5 most frequent messages.&lt;/li&gt;
&lt;/ul&gt;
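&lt;p&gt;The field-clearing trick is easiest to see on a single sample line (the date/time/level prefix is assumed; &lt;code&gt;[[:space:]]&lt;/code&gt; is used here in place of the literal space-and-tab bracket from the script):&lt;/p&gt;

```shell
line="2026-02-20 10:00:05 ERROR Database connection failed."

# Blank out date, time, and level, then trim the spaces awk leaves behind.
msg=$(echo "$line" | awk '{$1=$2=$3=""; print $0}' | sed 's/^[[:space:]]*//')
echo "$msg"    # prints: Database connection failed.
```

&lt;p&gt;Assigning to &lt;code&gt;$1&lt;/code&gt;&amp;ndash;&lt;code&gt;$3&lt;/code&gt; forces awk to rebuild &lt;code&gt;$0&lt;/code&gt; with empty fields, which is why the leading whitespace (and the &lt;code&gt;sed&lt;/code&gt; cleanup) appears at all.&lt;/p&gt;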

&lt;h2&gt;
  
  
  How to Use It
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Clone the script&lt;/strong&gt; (or copy it from above).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Make it executable&lt;/strong&gt;: &lt;code&gt;chmod +x log_analyser.sh&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run it&lt;/strong&gt; against any log file: &lt;code&gt;./log_analyser.sh sample.log&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Example Output
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--- Log Analysis Summary for: sample.log ---

[ERROR] Total occurrences: 4
Top unique ERROR messages:
   3 Database connection failed.
   1 File not found: /var/www/html/index.php

[WARNING] Total occurrences: 2
Top unique WARNING messages:
   1 Memory usage high.
   1 Disk usage at 85%.

...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Bash is incredibly powerful for these kinds of "glue" tasks. By combining a few standard Unix tools, we've created a tool that saves minutes of manual work every time we debug a service.&lt;/p&gt;

&lt;p&gt;What are your favorite "one liner" Bash tricks for log analysis? Let me know in the comments!&lt;/p&gt;




&lt;p&gt;This project was a great exercise in learning Bash best practices and command-line data processing. You can find the full project in my &lt;a href="https://github.com/alanvarghese-dev/Bash_Scripting" rel="noopener noreferrer"&gt;Bash_Scripting repository on GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>bash</category>
      <category>scripting</category>
      <category>automation</category>
      <category>devops</category>
    </item>
    <item>
      <title>"Bash Backup Battle: Minimalist vs. Feature-Rich Scripts"</title>
      <dc:creator>Alan Varghese</dc:creator>
      <pubDate>Wed, 18 Feb 2026 10:22:23 +0000</pubDate>
      <link>https://forem.com/alanvarghese-dev/bash-backup-battle-minimalist-vs-feature-rich-scripts-450p</link>
      <guid>https://forem.com/alanvarghese-dev/bash-backup-battle-minimalist-vs-feature-rich-scripts-450p</guid>
      <description>&lt;p&gt;In the world of automation, there's rarely a "one size fits all" solution. Sometimes you need a quick script to throw into a cron job, and other times you need a robust, configuration-driven tool that can handle remote transfers and detailed logging.&lt;/p&gt;

&lt;p&gt;In this post, I’m comparing two Bash backup projects I've been working on. Both get the job done, but they take very different paths to get there.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Contenders
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. The "DevOps-Ready" Script (&lt;code&gt;file_backup_script&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;This project is built for complexity and reliability. It treats configuration as code by separating the logic from the settings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Configuration File (&lt;code&gt;backup.conf&lt;/code&gt;)&lt;/strong&gt;: No need to touch the code to change paths or retention days.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dry Run Mode (&lt;code&gt;-d&lt;/code&gt;)&lt;/strong&gt;: A safety-first approach that tells you exactly what &lt;em&gt;would&lt;/em&gt; happen without moving a single byte.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Remote Transfer&lt;/strong&gt;: Built-in support for &lt;code&gt;scp&lt;/code&gt; to push backups to a remote server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advanced Logging&lt;/strong&gt;: Every action, warning, and error is timestamped and recorded in a dedicated log file.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. The "Minimalist" Script (&lt;code&gt;File_backup_scripts&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;This project focuses on speed and simplicity. It’s the kind of script you can call on the fly from the terminal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Argument-Driven&lt;/strong&gt;: Pass your source and destination directly: &lt;code&gt;./backup.sh /src /dest&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exclusion Patterns&lt;/strong&gt;: Quickly skip &lt;code&gt;.git&lt;/code&gt; folders or log files using glob patterns.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Cleanup&lt;/strong&gt;: A hardcoded 30-day retention policy ensures your disk doesn't fill up.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low Overhead&lt;/strong&gt;: No config files to manage; just the script and your directories.&lt;/li&gt;
&lt;/ul&gt;
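&lt;p&gt;A 30-day retention policy like this typically reduces to a single &lt;code&gt;find&lt;/code&gt; invocation. Here is a self-contained sketch (the &lt;code&gt;.tar.gz&lt;/code&gt; naming and directory layout are assumptions, not the script's actual internals):&lt;/p&gt;

```shell
# Illustrative setup: a scratch directory standing in for the real backup dir.
BACKUP_DIR=$(mktemp -d)
RETENTION_DAYS=30

touch -t 202001010000 "$BACKUP_DIR/old.tar.gz"   # stale backup from 2020
touch "$BACKUP_DIR/fresh.tar.gz"                 # made just now

# -mtime +30 matches files modified more than 30 days ago; -delete removes them.
find "$BACKUP_DIR" -name "*.tar.gz" -type f -mtime +"$RETENTION_DAYS" -delete
ls "$BACKUP_DIR"    # prints: fresh.tar.gz
```

&lt;p&gt;Restricting the match to &lt;code&gt;-name "*.tar.gz" -type f&lt;/code&gt; is the important safety detail: it keeps &lt;code&gt;-delete&lt;/code&gt; from touching anything that isn't a backup archive.&lt;/p&gt;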




&lt;h2&gt;
  
  
  🔍 Feature Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;DevOps-Ready&lt;/th&gt;
&lt;th&gt;Minimalist&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Input Method&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Configuration File&lt;/td&gt;
&lt;td&gt;CLI Arguments&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Remote Support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes (&lt;code&gt;scp&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;No (Local only)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Testing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Dry Run Flag&lt;/td&gt;
&lt;td&gt;Manual&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Multiple Sources&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;td&gt;Single Source&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Logging&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Detailed File Logs&lt;/td&gt;
&lt;td&gt;Standard Output&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Complexity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Which One Should You Use?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Use the DevOps-Ready Script if...
&lt;/h3&gt;

&lt;p&gt;You are managing a server or a production environment. The &lt;strong&gt;Dry Run&lt;/strong&gt; feature alone makes it much safer to deploy, and the &lt;strong&gt;Remote Transfer&lt;/strong&gt; capability ensures your data is safe even if the local disk fails. It’s perfect for those who want to "set it and forget it."&lt;/p&gt;

&lt;h3&gt;
  
  
  Use the Minimalist Script if...
&lt;/h3&gt;

&lt;p&gt;You need to grab a quick backup of a project folder before making major changes. It’s lightweight, requires zero setup, and handles the exclusion of messy folders like &lt;code&gt;node_modules&lt;/code&gt; or &lt;code&gt;.git&lt;/code&gt; with ease.&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 Lessons Learned
&lt;/h2&gt;

&lt;p&gt;Writing both of these taught me that &lt;strong&gt;context is king&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;When I was building the DevOps version, I focused heavily on &lt;strong&gt;error handling&lt;/strong&gt; and &lt;strong&gt;idempotency&lt;/strong&gt;. I wanted to make sure that if a remote transfer failed, the script wouldn't just crash—it would log the error and move on.&lt;/p&gt;
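&lt;p&gt;That "log and move on" behavior can be sketched as follows (function and variable names are illustrative, not the script's actual internals):&lt;/p&gt;

```shell
LOG_FILE_PATH="backup.log"   # illustrative log destination

# Timestamped log line, appended rather than overwriting.
log() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') [$1] $2" >> "$LOG_FILE_PATH"
}

# scp exits non-zero on failure; record the outcome instead of aborting the run.
remote_transfer() {
    if scp "$1" "$2" 2>>"$LOG_FILE_PATH"; then
        log INFO "Transferred $1 to $2"
    else
        log ERROR "Remote transfer of $1 failed; keeping local copy"
    fi
}
```

&lt;p&gt;Wrapping &lt;code&gt;scp&lt;/code&gt; in an &lt;code&gt;if&lt;/code&gt; is what keeps a flaky network from killing the rest of the backup cycle.&lt;/p&gt;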

&lt;p&gt;With the minimalist version, the focus was on &lt;strong&gt;ergonomics&lt;/strong&gt;. I wanted the shortest possible command to yield a reliable result.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Both scripts are available in my repository! Whether you need a robust automation tool or a quick-and-dirty backup utility, there's a Bash solution here for you.&lt;/p&gt;

&lt;p&gt;Check out the code and let me know: &lt;strong&gt;Do you prefer config files or CLI arguments for your scripts?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/alanvarghese-dev/Bash_Scripting" rel="noopener noreferrer"&gt;https://github.com/alanvarghese-dev/Bash_Scripting&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/alanvarghese-dev" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/alanvarghese-dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>bash</category>
      <category>automation</category>
      <category>scripting</category>
    </item>
    <item>
<title>Automate Your Backups Like a Pro: A Robust Bash Script for DevOps Enthusiasts</title>
      <dc:creator>Alan Varghese</dc:creator>
      <pubDate>Wed, 18 Feb 2026 10:17:33 +0000</pubDate>
      <link>https://forem.com/alanvarghese-dev/automate-your-backups-like-a-pro-a-robust-bash-script-for-devops-enthusiasts-57jm</link>
      <guid>https://forem.com/alanvarghese-dev/automate-your-backups-like-a-pro-a-robust-bash-script-for-devops-enthusiasts-57jm</guid>
      <description>&lt;p&gt;In the world of DevOps, there's a golden rule: &lt;strong&gt;If it’s not backed up, it doesn’t exist.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;While there are many enterprise-grade backup solutions available, sometimes you need something lightweight, highly customizable, and easy to integrate into your existing workflows. That's where a well-crafted Bash script comes in.&lt;/p&gt;

&lt;p&gt;In this post, I'll walk you through a &lt;strong&gt;Robust File Backup Script&lt;/strong&gt; I built that handles compression, remote transfers, retention policies, and detailed logging—all while following core DevOps principles.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Features
&lt;/h2&gt;

&lt;p&gt;Our backup script isn't just a simple &lt;code&gt;cp&lt;/code&gt; command. It's designed to be production-ready with features like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Smart Compression&lt;/strong&gt;: Uses &lt;code&gt;tar&lt;/code&gt; and &lt;code&gt;gzip&lt;/code&gt; to minimize storage space.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Configuration Decoupling&lt;/strong&gt;: All settings live in a separate &lt;code&gt;backup.conf&lt;/code&gt; file (Infrastructure as Code &lt;em&gt;lite&lt;/em&gt;).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Dry Run Mode&lt;/strong&gt;: A &lt;code&gt;-d&lt;/code&gt; flag to see exactly what &lt;em&gt;would&lt;/em&gt; happen without actually doing it.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Automatic Retention&lt;/strong&gt;: Keeps your disk clean by deleting local backups older than &lt;code&gt;N&lt;/code&gt; days.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Secure Remote Transfer&lt;/strong&gt;: Optionally sends your archives to a remote server via &lt;code&gt;scp&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Comprehensive Logging&lt;/strong&gt;: Every action, warning, and error is timestamped and logged for auditing.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  🛠 The Technical Breakdown
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Separation of Concerns
&lt;/h3&gt;

&lt;p&gt;We keep our logic (&lt;code&gt;backup.sh&lt;/code&gt;) separate from our settings (&lt;code&gt;backup.conf&lt;/code&gt;). This makes the script portable across different environments (Dev, Stage, Prod) without modification.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;backup.conf&lt;/code&gt; example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;SOURCE_PATHS&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;
    &lt;span class="s2"&gt;"/var/www/html"&lt;/span&gt;
    &lt;span class="s2"&gt;"/etc/nginx/conf.d"&lt;/span&gt;
&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;BACKUP_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"./backups"&lt;/span&gt;
&lt;span class="nv"&gt;RETENTION_DAYS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;7
&lt;span class="nv"&gt;ENABLE_REMOTE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"true"&lt;/span&gt;
&lt;span class="nv"&gt;REMOTE_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"backup-server.local"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
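&lt;p&gt;On the script side, loading and validating that config might look roughly like the following sketch (the variable names match the example above; the specific checks are my assumptions, not necessarily the repo's exact code):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Sketch: load backup.conf and fail fast if required settings are missing.
set -euo pipefail

CONFIG_FILE="${CONFIG_FILE:-./backup.conf}"

load_config() {
    if [[ ! -f "$CONFIG_FILE" ]]; then
        echo "ERROR: config file not found: $CONFIG_FILE" >&2
        return 1
    fi
    # shellcheck source=/dev/null
    source "$CONFIG_FILE"

    # Fail fast on missing required settings.
    : "${BACKUP_DIR:?BACKUP_DIR must be set in $CONFIG_FILE}"
    : "${RETENTION_DAYS:?RETENTION_DAYS must be set in $CONFIG_FILE}"
    if [[ ${#SOURCE_PATHS[@]} -eq 0 ]]; then
        echo "ERROR: SOURCE_PATHS is empty in $CONFIG_FILE" >&2
        return 1
    fi
}
```

&lt;p&gt;Because &lt;code&gt;source&lt;/code&gt; executes the config as shell code, the same mechanism that makes arrays like &lt;code&gt;SOURCE_PATHS&lt;/code&gt; work also means the config file should be treated as trusted input.&lt;/p&gt;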



&lt;h3&gt;
  
  
  2. Flexible Argument Parsing
&lt;/h3&gt;

&lt;p&gt;Using &lt;code&gt;getopts&lt;/code&gt;, the script feels like a professional CLI tool. You can specify custom config files or trigger a dry run easily.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;getopts&lt;/span&gt; &lt;span class="s2"&gt;":c:dh"&lt;/span&gt; opt&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    case&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;opt&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="k"&gt;in
        &lt;/span&gt;c &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;CONFIG_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$OPTARG&lt;/span&gt; &lt;span class="p"&gt;;;&lt;/span&gt;
        d &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;DRY_RUN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="p"&gt;;;&lt;/span&gt;
        h &lt;span class="p"&gt;)&lt;/span&gt; usage &lt;span class="p"&gt;;;&lt;/span&gt;
    &lt;span class="k"&gt;esac&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
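&lt;p&gt;The &lt;code&gt;usage&lt;/code&gt; helper referenced in the snippet isn't shown; a typical version might look like this (the exact wording is illustrative):&lt;/p&gt;

```shell
# Illustrative usage() helper for the getopts loop above.
usage() {
    cat <<'EOF'
Usage: backup.sh [-c config_file] [-d] [-h]
  -c FILE   Use an alternate config file (default: ./backup.conf)
  -d        Dry run: show what would happen without doing it
  -h        Show this help and exit
EOF
    exit 0
}
```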



&lt;h3&gt;
  
  
  3. The Power of &lt;code&gt;find&lt;/code&gt; for Retention
&lt;/h3&gt;

&lt;p&gt;Managing disk space is crucial. We use &lt;code&gt;find&lt;/code&gt; with the &lt;code&gt;-mtime&lt;/code&gt; flag to identify and remove old archives automatically.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;find &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BACKUP_DIR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-type&lt;/span&gt; f &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"backup_*.tar.gz"&lt;/span&gt; &lt;span class="nt"&gt;-mtime&lt;/span&gt; +&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$RETENTION_DAYS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-exec&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="o"&gt;{}&lt;/span&gt; &lt;span class="se"&gt;\;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Secure Remote Offloading
&lt;/h3&gt;

&lt;p&gt;A backup on the same disk isn't a true backup. Our script supports secure transfer to a remote host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;scp &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BACKUP_DIR&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$BACKUP_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REMOTE_USER&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="nv"&gt;$REMOTE_HOST&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;$REMOTE_PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  📖 Lessons Learned &amp;amp; DevOps Principles
&lt;/h2&gt;

&lt;p&gt;Building this project reinforced several key concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Automation over Manual Work&lt;/strong&gt;: Human error is one of the leading causes of data loss. Automating the backup removes the "I forgot" factor.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Idempotency &amp;amp; Resilience&lt;/strong&gt;: The script checks if directories exist and if source paths are valid before starting.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Visibility&lt;/strong&gt;: "In DevOps, if it wasn't logged, it didn't happen." Detailed logs are essential for debugging scheduled cron jobs.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Safety First&lt;/strong&gt;: The &lt;strong&gt;Dry Run&lt;/strong&gt; mode is a lifesaver when testing new configurations on a production server.&lt;/li&gt;
&lt;/ul&gt;
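&lt;p&gt;One common way to implement a Dry Run safety net (a sketch of the pattern, not necessarily how the repo does it) is to route every destructive command through a small wrapper:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Sketch of a dry-run wrapper: route destructive commands through run_cmd.
set -euo pipefail

DRY_RUN="${DRY_RUN:-false}"

run_cmd() {
    if [[ "$DRY_RUN" == "true" ]]; then
        echo "[DRY RUN] $*"          # show the command, don't execute it
    else
        "$@"                         # execute the command as given
    fi
}

# Example call site (only deletes when DRY_RUN is not "true"):
# run_cmd rm -f "$BACKUP_DIR/old_backup.tar.gz"
```

&lt;p&gt;With every &lt;code&gt;rm&lt;/code&gt;, &lt;code&gt;tar&lt;/code&gt;, and &lt;code&gt;scp&lt;/code&gt; funneled through one wrapper, &lt;code&gt;-d&lt;/code&gt; gives you a faithful preview of the run instead of an approximation.&lt;/p&gt;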




&lt;h2&gt;
  
  
  How to Use It
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Clone the Repo&lt;/strong&gt;: &lt;a href="https://github.com/alanvarghese-dev/Bash_Scripting" rel="noopener noreferrer"&gt;alanvarghese-dev/Bash_Scripting&lt;/a&gt;&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Configure&lt;/strong&gt;: Edit &lt;code&gt;backup.conf&lt;/code&gt; with your paths.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Make it Executable&lt;/strong&gt;: &lt;code&gt;chmod +x backup.sh&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Test it&lt;/strong&gt;: &lt;code&gt;./backup.sh -d&lt;/code&gt; (Dry run)&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Run it&lt;/strong&gt;: &lt;code&gt;./backup.sh&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Schedule it&lt;/strong&gt;: Add it to your &lt;code&gt;crontab&lt;/code&gt; to run nightly!
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;0 2 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /path/to/backup.sh &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /path/to/logs/cron.log 2&amp;gt;&amp;amp;1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Future Enhancements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Adding &lt;strong&gt;AWS S3&lt;/strong&gt; support using the AWS CLI.&lt;/li&gt;
&lt;li&gt;Implementing &lt;strong&gt;Slack/Email notifications&lt;/strong&gt; on failure.&lt;/li&gt;
&lt;li&gt;Adding &lt;strong&gt;Checksum verification&lt;/strong&gt; to ensure data integrity after transfer.&lt;/li&gt;
&lt;/ul&gt;
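&lt;p&gt;The checksum idea could be sketched as follows, assuming &lt;code&gt;sha256sum&lt;/code&gt; is available on both ends (the function name and remote flow are illustrative, since this feature isn't built yet):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Sketch: verify an archive survived transfer intact by comparing SHA-256
# digests computed locally and over SSH (illustrative, assumed flow).
set -euo pipefail

verify_checksum() {
    local archive="$1"
    local local_sum remote_sum
    local_sum=$(sha256sum "$archive" | awk '{print $1}')
    remote_sum=$(ssh "$REMOTE_USER@$REMOTE_HOST" \
        "sha256sum '$REMOTE_PATH/$(basename "$archive")'" | awk '{print $1}')
    if [[ "$local_sum" == "$remote_sum" ]]; then
        echo "Checksum OK for $archive"
    else
        echo "Checksum MISMATCH for $archive" >&2
        return 1
    fi
}
```

&lt;p&gt;A mismatch would then be a hard failure worth alerting on, since it means the remote copy can't be trusted for restores.&lt;/p&gt;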

&lt;p&gt;What does your backup strategy look like? Do you prefer simple scripts or complex tools? Let’s discuss in the comments! 👇&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/alanvarghese-dev/Bash_Scripting" rel="noopener noreferrer"&gt;https://github.com/alanvarghese-dev/Bash_Scripting&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/alanvarghese-dev" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/alanvarghese-dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>bash</category>
      <category>scripting</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
