<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Muhammed Safvan</title>
    <description>The latest articles on Forem by Muhammed Safvan (@safvantsy).</description>
    <link>https://forem.com/safvantsy</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F849330%2Fd9d26efd-efdc-4e82-9fe5-04a39bccea38.jpg</url>
      <title>Forem: Muhammed Safvan</title>
      <link>https://forem.com/safvantsy</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/safvantsy"/>
    <language>en</language>
    <item>
      <title>How I Used Notion MCP to Screen 5,000+ Stocks and Write AI Research Reports</title>
      <dc:creator>Muhammed Safvan</dc:creator>
      <pubDate>Thu, 19 Mar 2026 03:11:35 +0000</pubDate>
      <link>https://forem.com/safvantsy/how-i-used-notion-mcp-to-screen-5000-stocks-and-write-ai-research-reports-3o6c</link>
      <guid>https://forem.com/safvantsy/how-i-used-notion-mcp-to-screen-5000-stocks-and-write-ai-research-reports-3o6c</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/notion-2026-03-04"&gt;Notion MCP Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;StockPulse&lt;/strong&gt; is an AI-powered Indian Stock Intelligence platform built entirely on Notion. &lt;/p&gt;

&lt;p&gt;It solves the problem of scattered financial data by acting as a centralized, human-in-the-loop research hub. First, a Python data pipeline fetches daily price and delivery data from the NSE and BSE (Indian stock exchanges). It then runs 5,000+ stocks through a rigorous, battle-tested 12-condition fundamental screener. &lt;/p&gt;

&lt;p&gt;The magic happens when the data enters Notion. By using the Model Context Protocol (MCP), StockPulse allows AI assistants (like Claude) to seamlessly read the screened data, identify anomalies, analyze fundamentals, and write comprehensive research reports directly back into Notion databases. &lt;/p&gt;

&lt;h2&gt;
  
  
  Video Demo
&lt;/h2&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/nylSRN-xNYc"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Notion page: &lt;a href="https://www.notion.so/StockPulse-Home-Page-3221879420d180c785d1eb25e8956ce4" rel="noopener noreferrer"&gt;https://www.notion.so/StockPulse-Home-Page-3221879420d180c785d1eb25e8956ce4&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Show us the code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/Safvan-tsy" rel="noopener noreferrer"&gt;
        Safvan-tsy
      &lt;/a&gt; / &lt;a href="https://github.com/Safvan-tsy/stockpulse" rel="noopener noreferrer"&gt;
        stockpulse
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Using Notion MCP to Screen 5,000+ Stocks and Write AI Research Reports
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;StockPulse India 📈&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;AI-Powered Indian Stock Intelligence on Notion&lt;/strong&gt; — Built for the &lt;a href="https://dev.to/challenges/notion-2026-03-04" rel="nofollow"&gt;Notion MCP Challenge&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;StockPulse takes daily price and delivery data from NSE &amp;amp; BSE, screens 5000+ stocks through &lt;strong&gt;12 battle-tested fundamental conditions&lt;/strong&gt;, and uses a &lt;strong&gt;dual-MCP architecture&lt;/strong&gt; — the official &lt;strong&gt;Notion MCP&lt;/strong&gt; for workspace I/O plus a custom &lt;strong&gt;StockPulse MCP&lt;/strong&gt; for domain computation — to generate research reports, detect anomalies, and maintain a smart watchlist — all centralized in Notion.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;What It Does&lt;/h2&gt;
&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Data Pipeline&lt;/strong&gt; — Downloads BhavCopy + delivery data from NSE/BSE, or reads from pre-built Excel workbooks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;12-Condition Screener&lt;/strong&gt; — Filters stocks for: profitability (PE, EPS), growth (sales, profit YoY), governance (promoter pledging), financial health (debt/equity, current ratio, ROCE)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notion as Single Source of Truth&lt;/strong&gt; — 5 linked databases: Stocks Master, Daily Prices, Screener Results, Watchlist, AI Reports&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dual-MCP AI Intelligence&lt;/strong&gt; — The official Notion MCP (&lt;code&gt;https://mcp.notion.com/mcp&lt;/code&gt;) handles all Notion reads/writes…&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/Safvan-tsy/stockpulse" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;
 
&lt;h2&gt;
  
  
  How I Used Notion MCP
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://mcp.notion.com/mcp" rel="noopener noreferrer"&gt;Notion MCP&lt;/a&gt; is the &lt;strong&gt;core I/O layer&lt;/strong&gt; of this project. The AI agent uses it for every interaction with the Notion workspace:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reading data via Notion MCP:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;notion-search&lt;/code&gt; and &lt;code&gt;notion-fetch&lt;/code&gt; to find and retrieve stock pages from the Stocks Master database&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;query-a-database-view&lt;/code&gt; to fetch screened stocks, price history, and watchlist entries with filters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Writing results via Notion MCP:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;create-a-page&lt;/code&gt; to publish AI-generated research reports into the AI Reports database&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;create-a-page&lt;/code&gt; to add stocks to the Watchlist database with notes and status&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;update-a-page&lt;/code&gt; to set AI Ratings (Strong Buy / Buy / Hold / Avoid) on stock pages&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The custom StockPulse MCP server complements Notion MCP&lt;/strong&gt; as a stateless computation engine — it contains zero Notion SDK calls. It receives stock data (fetched by the AI via Notion MCP) as JSON input and returns analysis results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;screen_stock&lt;/code&gt;: Applies the 12 screening conditions and computes a weighted quality score (0–100)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;detect_anomalies&lt;/code&gt;: Identifies stocks with Piotroski ≥7, promoter holding changes, and high delivery %&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;compare_sector&lt;/code&gt;: Ranks a stock against sector peers on ROCE, PE, debt, and other metrics&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;generate_report_content&lt;/code&gt;: Formats analysis into structured markdown for the AI to save via Notion MCP&lt;/li&gt;
&lt;/ul&gt;
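The tools above are plain functions over JSON. As a rough JavaScript sketch of what a `screen_stock`-style weighted score could look like (the real server is Python and its 12 conditions are not listed here, so every condition name, threshold, and weight below is an illustrative assumption):

```javascript
// Hypothetical stateless screener in the spirit of screen_stock: it takes
// stock fundamentals as plain JSON and returns per-condition pass/fail
// results plus a weighted quality score normalized to 0-100.
const CONDITIONS = [
  { name: "pe-below-30",        weight: 10, check: (s) => s.pe > 0 && s.pe < 30 },
  { name: "positive-eps",       weight: 10, check: (s) => s.eps > 0 },
  { name: "sales-growth-yoy",   weight: 10, check: (s) => s.salesGrowthYoY > 0.10 },
  { name: "low-debt-to-equity", weight: 10, check: (s) => s.debtToEquity < 1.0 },
  { name: "no-promoter-pledge", weight: 10, check: (s) => s.promoterPledgePct === 0 },
];

function screenStock(stock) {
  const results = CONDITIONS.map((c) => ({ name: c.name, passed: c.check(stock) }));
  const maxScore = CONDITIONS.reduce((sum, c) => sum + c.weight, 0);
  const rawScore = CONDITIONS.reduce(
    (sum, c, i) => sum + (results[i].passed ? c.weight : 0), 0);
  return {
    symbol: stock.symbol,
    conditions: results,
    score: Math.round((rawScore / maxScore) * 100), // normalized to 0-100
  };
}
```

Because the function is pure JSON-in/JSON-out, it stays stateless and needs no Notion SDK calls, which is exactly the division of labor described above.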

&lt;p&gt;&lt;strong&gt;The workflow loop:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AI fetches data from Notion databases → &lt;strong&gt;Notion MCP&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;AI passes data to screening/analysis → &lt;strong&gt;StockPulse MCP&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;AI writes reports, ratings, watchlist entries back → &lt;strong&gt;Notion MCP&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Human reviews in Notion UI, adds notes → AI reads them next cycle → &lt;strong&gt;Notion MCP&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This creates a true human-in-the-loop system where the Python pipeline crunches the hard numbers, Notion MCP provides seamless workspace access, StockPulse MCP adds domain intelligence, and Notion organizes it all beautifully for the investor to review and act on.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>notionchallenge</category>
      <category>mcp</category>
      <category>ai</category>
    </item>
    <item>
      <title>What If Your A11y Linter Could Actually Fix the Bugs It Found?</title>
      <dc:creator>Muhammed Safvan</dc:creator>
      <pubDate>Sun, 15 Feb 2026 15:54:08 +0000</pubDate>
      <link>https://forem.com/safvantsy/what-if-your-a11y-linter-could-actually-fix-the-bugs-it-found-4coe</link>
      <guid>https://forem.com/safvantsy/what-if-your-a11y-linter-could-actually-fix-the-bugs-it-found-4coe</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/github-2026-01-21"&gt;GitHub Copilot CLI Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;What if your linter could actually &lt;em&gt;fix&lt;/em&gt; the problems it found?&lt;/p&gt;

&lt;p&gt;That's &lt;strong&gt;a11y-pilot&lt;/strong&gt; - a CLI that scans your frontend code for accessibility violations, then spawns GitHub Copilot CLI to fix each one. Not "here's a suggestion": it literally invokes &lt;code&gt;copilot --prompt&lt;/code&gt; with a crafted fix instruction and lets the AI refactor your code in place.&lt;/p&gt;

&lt;p&gt;I pointed it at a file with 12 accessibility issues. Ran &lt;code&gt;a11y-pilot fix&lt;/code&gt;. Every single issue was resolved — &lt;code&gt;&amp;lt;div onClick&amp;gt;&lt;/code&gt; became &lt;code&gt;&amp;lt;button&amp;gt;&lt;/code&gt;, empty &lt;code&gt;&amp;lt;img&amp;gt;&lt;/code&gt; got meaningful alt text inferred from the filename, unlabeled inputs got proper &lt;code&gt;aria-label&lt;/code&gt;. Re-scanning the file: &lt;strong&gt;0 issues&lt;/strong&gt;. No manual edits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/rbTMGW2PRt0"&gt;
  &lt;/iframe&gt;


&lt;br&gt;
&lt;strong&gt;npm:&lt;/strong&gt; &lt;a href="https://www.npmjs.com/package/a11y-pilot" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/a11y-pilot&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/Safvan-tsy/a11y-pilot" rel="noopener noreferrer"&gt;https://github.com/Safvan-tsy/a11y-pilot&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  How it works
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Scan → Detect → Prompt Engineer → Copilot CLI Fixes → Done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scan&lt;/strong&gt; - Point it at any directory or file. It walks your project and parses JSX, TSX, HTML, Vue, Svelte, and Astro files using Babel AST (for JSX/TSX) and htmlparser2 (for HTML-like templates).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Detect&lt;/strong&gt; - 15 accessibility rules run against every parsed element, checking for WCAG violations across 5 categories.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Auto-fix&lt;/strong&gt; - For each issue, a11y-pilot builds a precise, context-rich prompt and spawns &lt;code&gt;copilot --prompt &amp;lt;text&amp;gt; --allow-all-tools&lt;/code&gt;. Copilot CLI reads the file, understands the surrounding code, and applies the minimum diff needed. No blind find-and-replace, just actual AI-driven refactoring.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiucbbtiornqucc4im4at.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiucbbtiornqucc4im4at.png" alt="final status image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  The Copilot CLI bridge — the core of the project
&lt;/h3&gt;

&lt;p&gt;This is not just a scanner that &lt;em&gt;suggests&lt;/em&gt; fixes. The entire point is that Copilot CLI is the execution engine. When you run &lt;code&gt;a11y-pilot fix ./src&lt;/code&gt;, here's what happens under the hood:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│   a11y-pilot    │────▶│  Issue detected   │────▶│  Build prompt   │
│   scanner       │     │  (rule engine)    │     │  (context-rich) │
└─────────────────┘     └──────────────────┘     └────────┬────────┘
                                                          │
                                                          ▼
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│   File fixed!   │◀────│  Copilot applies  │◀────│  copilot CLI    │
│   ✔ Report      │     │  the fix          │     │  invoked        │
└─────────────────┘     └──────────────────┘     └─────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each rule carries a &lt;code&gt;copilotPrompt&lt;/code&gt; field — a carefully crafted instruction that tells Copilot CLI exactly what's wrong and how to fix it. For example, the &lt;code&gt;no-div-button&lt;/code&gt; rule generates prompts like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"In file src/Hero.tsx at line 20, fix this accessibility issue: &lt;code&gt;&amp;lt;div&amp;gt;&lt;/code&gt; has a click handler but is missing role and tabIndex. Replace this &lt;code&gt;&amp;lt;div&amp;gt;&lt;/code&gt; element with a native &lt;code&gt;&amp;lt;button&amp;gt;&lt;/code&gt; element. Move the onClick handler to the button. Remove any cursor: pointer styling (buttons have it by default). Only modify the minimum code necessary."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Copilot CLI then reads the full file context, understands the surrounding JSX structure, and makes intelligent fixes — not just string replacements, but real refactoring. It adds meaningful alt text based on image filenames, converts &lt;code&gt;&amp;lt;div onClick&amp;gt;&lt;/code&gt; to proper &lt;code&gt;&amp;lt;button&amp;gt;&lt;/code&gt; elements, wraps navigation links in &lt;code&gt;&amp;lt;nav&amp;gt;&lt;/code&gt; with appropriate &lt;code&gt;aria-label&lt;/code&gt;, and more.&lt;/p&gt;
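A rule in this style can be pictured as a small object pairing a detector with a prompt builder. A hypothetical sketch (the element shape and all field names besides `check` and `copilotPrompt` are assumptions, not a11y-pilot's real rule schema):

```javascript
// One rule = a check() over a parsed element plus a copilotPrompt builder
// that turns the finding into a fix instruction for the Copilot CLI.
const noDivButton = {
  id: "no-div-button",
  severity: "error",
  // element: { tag, attrs, file, line } as produced by the parser
  check: (el) =>
    el.tag === "div" &&
    "onClick" in el.attrs &&
    !("role" in el.attrs && "tabIndex" in el.attrs),
  copilotPrompt: (el) =>
    `In file ${el.file} at line ${el.line}, replace this <div> with a click ` +
    `handler by a native <button> element. Move the onClick handler to the ` +
    `button and only modify the minimum code necessary.`,
};
```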

&lt;h3&gt;
  
  
  Category Dashboard
&lt;/h3&gt;

&lt;p&gt;After scanning, a11y-pilot renders a visual breakdown dashboard right in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  ✖ Found 46 issues (35 errors, 11 warnings) in 4 files (5 scanned)

  📊 Issue Breakdown
  ──────────────────────────────────────────────────
   ♿ Accessibility       ████████░░░░░░░░░░░░   19 (41%)  19E
   🏗️ Semantic HTML      █████░░░░░░░░░░░░░░░   11 (24%)  4E 7W
   ⌨️ Keyboard            ███░░░░░░░░░░░░░░░░░    7 (15%)  5E 2W
   🏷️ ARIA               ██░░░░░░░░░░░░░░░░░░    5 (11%)  5E
   👆 Interaction         ██░░░░░░░░░░░░░░░░░░    4 (9%)   2E 2W
  ──────────────────────────────────────────────────
  14 rules triggered: img-alt, form-label, no-div-button, ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives you an instant snapshot of where your accessibility debt lives — is it ARIA misuse? Keyboard traps? Missing semantics? You know exactly where to focus.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Experience with GitHub Copilot CLI
&lt;/h2&gt;

&lt;p&gt;GitHub Copilot CLI was central to this project in two distinct ways:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Copilot CLI as the product's engine
&lt;/h3&gt;

&lt;p&gt;The marquee feature of a11y-pilot is &lt;code&gt;--auto-fix&lt;/code&gt;. When triggered, it spawns &lt;code&gt;copilot --prompt &amp;lt;text&amp;gt; --allow-all-tools&lt;/code&gt; for each detected issue. I discovered the &lt;code&gt;--prompt&lt;/code&gt; flag enables non-interactive mode, and &lt;code&gt;--allow-all-tools&lt;/code&gt; auto-approves tool use — this combination is what makes programmatic Copilot CLI invocation possible.&lt;/p&gt;

&lt;p&gt;The quality of fixes was surprisingly excellent. Given a file with &lt;code&gt;&amp;lt;div onClick={() =&amp;gt; alert('clicked')}&amp;gt;&lt;/code&gt;, Copilot CLI didn't just slap a &lt;code&gt;role="button"&lt;/code&gt; on it — it actually converted the entire element to a &lt;code&gt;&amp;lt;button&amp;gt;&lt;/code&gt;, moved the handler, and preserved the className. For empty &lt;code&gt;&amp;lt;a href="/profile"&amp;gt;&amp;lt;/a&amp;gt;&lt;/code&gt;, it added &lt;code&gt;aria-label="Profile"&lt;/code&gt; — inferring the label from the URL. These aren't template fixes; they're context-aware refactoring.&lt;/p&gt;

&lt;p&gt;The key insight was &lt;strong&gt;prompt engineering per rule&lt;/strong&gt;. Each of the 15 rules carries a &lt;code&gt;copilotPrompt&lt;/code&gt; field — a precisely crafted instruction that gives Copilot CLI enough context to understand the problem and the expected fix pattern, without being so prescriptive that it loses the ability to adapt to surrounding code. This prompt design is what turns a11y-pilot from "accessibility linter" into "accessibility linter + AI-powered fixer."&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Copilot CLI as a development tool
&lt;/h3&gt;

&lt;p&gt;Beyond the product itself, Copilot CLI was my primary development companion throughout the build:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Parser logic&lt;/strong&gt; — Writing Babel AST traversal for JSX elements with normalized attribute handling required getting a lot of edge cases right (spread props, expression containers, member expressions). Copilot CLI helped iterate on the visitor pattern and handle the &lt;code&gt;@babel/traverse&lt;/code&gt; ESM default export quirk (&lt;code&gt;_traverse.default || _traverse&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rule implementation&lt;/strong&gt; — For each of the 15 rules, I used Copilot CLI to reference WCAG criteria and ensure the &lt;code&gt;check()&lt;/code&gt; functions handle edge cases correctly — like not flagging &lt;code&gt;&amp;lt;a&amp;gt;&lt;/code&gt; tags without &lt;code&gt;href&lt;/code&gt; in the &lt;code&gt;aria-hidden-focus&lt;/code&gt; rule, or skipping &lt;code&gt;input[type="hidden"]&lt;/code&gt; in form-label checks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Debugging&lt;/strong&gt; — When the copilot-bridge initially used &lt;code&gt;shell: true&lt;/code&gt; and hit the Node.js DEP0190 deprecation warning, Copilot CLI helped me switch to direct binary resolution with &lt;code&gt;execFileSync('which', ['copilot'])&lt;/code&gt; and proper &lt;code&gt;spawn()&lt;/code&gt; without shell.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
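The `@babel/traverse` quirk from the first bullet generalizes into a tiny interop helper, sketched here for illustration:

```javascript
// Some packages (e.g. @babel/traverse consumed from CommonJS) expose their
// main function either directly or under `.default`, depending on how they
// were compiled. Normalizing both shapes avoids "traverse is not a function".
function interopDefault(mod) {
  return mod && mod.default ? mod.default : mod;
}

// Usage with the quirk described above:
// const traverse = interopDefault(require("@babel/traverse"));
```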

&lt;h3&gt;
  
  
  What impressed me
&lt;/h3&gt;

&lt;p&gt;The thing that stood out most was Copilot CLI's ability to handle &lt;strong&gt;file-level context&lt;/strong&gt;. When I pointed it at a file with 12 accessibility issues and said "fix the &lt;code&gt;&amp;lt;div onClick&amp;gt;&lt;/code&gt; on line 20," it didn't break the other 11 problematic lines. It made surgical, minimal edits. That's the property that made the auto-fix feature viable — I could confidently fix issues one-by-one or in batches without worrying about cascading breakage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project links
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;npm:&lt;/strong&gt; &lt;code&gt;npm install -g a11y-pilot&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/Safvan-tsy/a11y-pilot" rel="noopener noreferrer"&gt;https://github.com/Safvan-tsy/a11y-pilot&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Try it now:&lt;/strong&gt; &lt;code&gt;npx a11y-pilot scan ./src&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devchallenge</category>
      <category>githubchallenge</category>
      <category>cli</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>🚀 Why Everyone Uses localhost:3000 - The History of Dev Ports (3000, 8000, 8080, 5173)</title>
      <dc:creator>Muhammed Safvan</dc:creator>
      <pubDate>Mon, 20 Oct 2025 10:25:36 +0000</pubDate>
      <link>https://forem.com/safvantsy/why-everyone-uses-localhost3000-the-history-of-dev-ports-3000-8000-8080-5173-llg</link>
      <guid>https://forem.com/safvantsy/why-everyone-uses-localhost3000-the-history-of-dev-ports-3000-8000-8080-5173-llg</guid>
      <description>&lt;h4&gt;
  
  
  TL;DR:
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;Ever wondered why your dev servers always run on localhost:3000 or localhost:5173?&lt;br&gt;
These ports have fascinating histories that trace back through decades of developer habits, from Java and Python to Node.js and Vite. Let’s unpack the stories behind them.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  💡 What Is a Port, Anyway?
&lt;/h3&gt;

&lt;p&gt;Think of your computer as an office building: every port is a numbered door that leads to a specific “room” (or service).&lt;/p&gt;

&lt;p&gt;When you visit &lt;code&gt;localhost:3000&lt;/code&gt;, you’re basically knocking on door #3000 and asking,&lt;/p&gt;

&lt;p&gt;“Hey, can you show me my app?”&lt;/p&gt;

&lt;p&gt;There are 65,536 possible doors (ports 0–65535). Here’s how they’re grouped:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Range&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0–1023&lt;/td&gt;
&lt;td&gt;System / Reserved&lt;/td&gt;
&lt;td&gt;HTTP(80),HTTPS(443),SSH(22)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1024–49151&lt;/td&gt;
&lt;td&gt;User / Registered&lt;/td&gt;
&lt;td&gt;3000, 8000, 8080&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;49152–65535&lt;/td&gt;
&lt;td&gt;Dynamic / Private&lt;/td&gt;
&lt;td&gt;Temporary OS connections&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;So yes, port &lt;code&gt;3000&lt;/code&gt; is just one of tens of thousands of valid options.&lt;/p&gt;




&lt;h3&gt;
  
  
  ⚙️ Port 3000 - The Node.js Default
&lt;/h3&gt;

&lt;p&gt;When Node.js and Express.js exploded in the early 2010s, the official docs used this snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.listen(3000, () =&amp;gt; console.log('Server running on port 3000'));

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That one line shaped an entire generation of developers.&lt;br&gt;
Tutorials, bootcamps, and boilerplates copied it word-for-word.&lt;/p&gt;

&lt;p&gt;Then React came along… and reused it.&lt;br&gt;
Next.js came along… and reused it again.&lt;/p&gt;

&lt;p&gt;💬 Fun fact: There’s no special reason for port &lt;code&gt;3000&lt;/code&gt; - it was just arbitrary and unclaimed.&lt;br&gt;
But familiarity is powerful. Now, 3000 is the unofficial “Hello World” port of web dev.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Update (Oct 23, 2025): A few readers pointed out that the use of port 3000 actually originated from Ruby on Rails, and frameworks like Express.js later adopted the same convention. Thanks for the comments, great catch!&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h3&gt;
  
  
  🐍 Port 8000 - The Python Classic
&lt;/h3&gt;

&lt;p&gt;Long before Node.js, Python devs were already spinning up local servers with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 -m http.server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And what port did that use by default?&lt;br&gt;
👉 8000&lt;/p&gt;

&lt;p&gt;No deep reasoning, just a round, safe number above 1024 that didn’t require root privileges.&lt;br&gt;
Frameworks like Django adopted it too:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Starting development server at http://127.0.0.1:8000/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So for Python developers, &lt;code&gt;8000&lt;/code&gt; became the go-to number for “I’m just testing something locally.”&lt;/p&gt;




&lt;h3&gt;
  
  
  ☕ Port 8080 - Java’s Legendary Port
&lt;/h3&gt;

&lt;p&gt;Back in the 1990s, running a web server on port 80 (the official HTTP port) required root access.&lt;br&gt;
So Java developers working on Apache Tomcat and Jetty picked something clever:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;80 =&amp;gt; 8080 (double eighty)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It looked similar, worked without admin rights, and became the perfect HTTP alternative.&lt;/p&gt;

&lt;p&gt;To this day, Java servers (like Spring Boot) still default to &lt;code&gt;8080&lt;/code&gt;. It’s now a symbol of “serious backend work.”&lt;/p&gt;


&lt;h3&gt;
  
  
  ⚡ Port 5173 - The Vite Generation
&lt;/h3&gt;

&lt;p&gt;Fast forward to the 2020s.&lt;br&gt;
Evan You (creator of Vue.js) introduced Vite, a blazing-fast build tool for frontend frameworks.&lt;/p&gt;

&lt;p&gt;The team needed a default port, and instead of picking a boring number, they added this Easter egg:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;5 =&amp;gt; “V” (Roman numeral) and 1 =&amp;gt; “I”&lt;br&gt;
7 =&amp;gt; “T” and 3 =&amp;gt; “E” (by shape)&lt;br&gt;
5173 =&amp;gt; “VITE”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Run &lt;code&gt;npm run dev&lt;/code&gt; in a Vite project, and you’ll see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;VITE v5.0 ready in 220 ms
Local: http://localhost:5173/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s geeky, clever, and memorable, and that’s why you’ll see 5173 everywhere now.&lt;/p&gt;




&lt;h3&gt;
  
  
  🧠 Are You Using Localhost “Wrong”?
&lt;/h3&gt;

&lt;p&gt;Not wrong, but maybe limiting yourself.&lt;br&gt;
Many devs stick to 3000 and panic when they get:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: Port 3000 already in use
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In reality, you can safely use any unused port from 1024 up to 49151.&lt;br&gt;
Try something fun:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm run dev -- --port=42069&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;or in Vite:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;vite --port=13337&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You’ll avoid conflicts and earn bonus nerd points.&lt;/p&gt;


&lt;h4&gt;
  
  
  🕰️ A Fun Bit of Dev History
&lt;/h4&gt;

&lt;p&gt;Each port tells a story:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;8080 - Java’s clever HTTP workaround&lt;br&gt;
8000 - Python’s practical simplicity&lt;br&gt;
3000 - Node’s accidental tradition&lt;br&gt;
5173 - Vite’s self-referential Easter egg&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;From the 1990s to today, these numbers have quietly shaped how millions of developers work every day.&lt;/p&gt;
&lt;h3&gt;
  
  
  ✨ The Takeaway
&lt;/h3&gt;

&lt;p&gt;Next time you spin up a dev server and see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Local: http://localhost:3000/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remember, you’re tapping into a piece of developer history that spans decades of innovation.&lt;/p&gt;

&lt;p&gt;So the next time port 3000 is busy, don’t just kill the process.&lt;br&gt;
Pick another number. Maybe even make it your signature port. 😉&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can safely use any port between 1024 and 49151 - 3000 isn’t the only game in town!&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>node</category>
      <category>python</category>
    </item>
    <item>
      <title>Node.js Performance Optimization : Cluster Module</title>
      <dc:creator>Muhammed Safvan</dc:creator>
      <pubDate>Sun, 11 May 2025 09:28:01 +0000</pubDate>
      <link>https://forem.com/safvantsy/nodejs-performance-optimization-cluster-module-1dap</link>
      <guid>https://forem.com/safvantsy/nodejs-performance-optimization-cluster-module-1dap</guid>
      <description>&lt;p&gt;In 2025, we're still seeing a curious phenomenon in production: powerful multi-core servers running Node.js applications on just a single CPU core. This isn't because Node.js can't utilize multiple cores it's because many developers have forgotten (or never learned) about Node's built-in clustering capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Single-Core Problem
&lt;/h2&gt;

&lt;p&gt;Node.js uses a single-threaded event loop model by default. This is excellent for handling asynchronous operations efficiently, but it means your application can only utilize one CPU core, no matter how many cores your server has. In an era of 16, 32, or even 64-core machines, this represents a significant waste of computing resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Overlooked Solution: Node's Cluster Module
&lt;/h2&gt;

&lt;p&gt;While developers often reach for external solutions like Docker, Kubernetes, or PM2, many overlook that Node.js ships with a native clustering solution: the &lt;code&gt;cluster&lt;/code&gt; module.&lt;/p&gt;

&lt;p&gt;This powerful module allows you to create worker processes that share the same server port, effectively distributing the workload across all available CPU cores. The primary (master) process is responsible for spawning workers and managing their lifecycle, while the workers handle the actual request processing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation in Under 20 Lines of Code
&lt;/h2&gt;

&lt;p&gt;Here's how you can implement clustering in your Node.js application with minimal code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cluster&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;cluster&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;numCPUs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;os&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;cpus&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isPrimary&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Primary &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pid&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; is running`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Fork workers, one per CPU&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;numCPUs&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fork&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;exit&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;signal&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Worker &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pid&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; died`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="c1"&gt;// Replace the dead worker&lt;/span&gt;
    &lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fork&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Workers share the TCP connection&lt;/span&gt;
  &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createServer&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;writeHead&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;end&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Hello from Node.js cluster&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Worker &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pid&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; started`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simple implementation:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Detects the number of CPU cores available&lt;/li&gt;
&lt;li&gt;Spawns one worker process per core&lt;/li&gt;
&lt;li&gt;Automatically restarts workers if they die&lt;/li&gt;
&lt;li&gt;Allows all workers to share port 8000&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Cluster vs. Worker Threads vs. External Tools
&lt;/h2&gt;

&lt;p&gt;It's important to understand when to use each performance optimization strategy:&lt;/p&gt;

&lt;h3&gt;
  
  
  Cluster Module
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Best for&lt;/strong&gt;: I/O-bound workloads (most web applications)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How it works&lt;/strong&gt;: Spawns multiple processes that share server ports&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advantages&lt;/strong&gt;: Zero dependencies, simple implementation, full isolation between workers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disadvantages&lt;/strong&gt;: Higher memory usage than worker threads&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Worker Threads
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Best for&lt;/strong&gt;: CPU-intensive tasks (calculations, image processing)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How it works&lt;/strong&gt;: Creates threads within the same process&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advantages&lt;/strong&gt;: Lighter weight than full processes, shared memory&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disadvantages&lt;/strong&gt;: Not ideal for I/O-bound applications&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  PM2
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What it is&lt;/strong&gt;: A process manager that wraps Node's cluster module with additional features&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advantages&lt;/strong&gt;: Adds monitoring, logs, and easier management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disadvantages&lt;/strong&gt;: External dependency, additional complexity&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Docker &amp;amp; Kubernetes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What they are&lt;/strong&gt;: Container and orchestration platforms&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advantages&lt;/strong&gt;: Full infrastructure management, auto-scaling, self-healing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disadvantages&lt;/strong&gt;: Significant complexity, overhead, learning curve&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Performance Impact
&lt;/h2&gt;

&lt;p&gt;Let's look at some benchmark data comparing a single-process Node.js server versus a clustered implementation on a 16-core machine:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Configuration&lt;/th&gt;
&lt;th&gt;Requests/sec&lt;/th&gt;
&lt;th&gt;Latency (avg)&lt;/th&gt;
&lt;th&gt;CPU Usage&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Single Process&lt;/td&gt;
&lt;td&gt;8,500&lt;/td&gt;
&lt;td&gt;120ms&lt;/td&gt;
&lt;td&gt;100% (1 core)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Clustered (16 workers)&lt;/td&gt;
&lt;td&gt;112,000&lt;/td&gt;
&lt;td&gt;35ms&lt;/td&gt;
&lt;td&gt;95% (all cores)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These numbers represent a typical I/O-bound REST API with database connections and moderate business logic.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Cloud Context
&lt;/h2&gt;

&lt;p&gt;While some argue that Kubernetes or serverless architectures eliminate the need for application-level clustering, consider these points:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Cost efficiency&lt;/strong&gt;: Running efficiently on fewer, larger instances often costs less than many small containers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduced complexity&lt;/strong&gt;: Native clustering requires no orchestration tooling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hybrid approach&lt;/strong&gt;: You can use both clustering within containers AND container orchestration for different scaling needs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Remember that in cloud environments, efficient resource utilization directly impacts your bottom line.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation Tips
&lt;/h2&gt;

&lt;p&gt;When implementing clustering in production:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Connection pooling&lt;/strong&gt;: Ensure database connections are properly pooled across workers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sticky sessions&lt;/strong&gt;: If using session data, implement sticky sessions or session stores&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Graceful shutdown&lt;/strong&gt;: Handle SIGTERM signals properly to avoid dropped connections&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring&lt;/strong&gt;: Track worker health and restart failed workers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory management&lt;/strong&gt;: Watch for memory leaks that could affect all workers&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Node's cluster module represents one of the most straightforward ways to dramatically improve application performance and resource utilization. Before reaching for complex orchestration tools, consider whether this simple, built-in solution might meet your needs.&lt;/p&gt;

&lt;p&gt;The next time you deploy a Node.js application, ask yourself: "Am I leaving performance on the table by running single-threaded?" Utilizing all available CPU cores isn't just about raw performance; it's about responsible engineering and resource efficiency.&lt;/p&gt;

&lt;p&gt;True expertise means understanding your runtime environment deeply before adding layers of abstraction. Sometimes, the most elegant solution is already built into the platform you're using.&lt;/p&gt;

&lt;p&gt;Are you still running single-threaded Node applications? It might be time to reconsider your approach.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>devops</category>
      <category>node</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Accidentally Committed Secrets? A Simple Git Fix Is Not Enough!</title>
      <dc:creator>Muhammed Safvan</dc:creator>
      <pubDate>Sun, 23 Mar 2025 02:36:51 +0000</pubDate>
      <link>https://forem.com/safvantsy/accidentally-committed-secrets-a-simple-git-fix-is-not-enough-3gem</link>
      <guid>https://forem.com/safvantsy/accidentally-committed-secrets-a-simple-git-fix-is-not-enough-3gem</guid>
      <description>&lt;p&gt;It happens to the best of us. A moment of distraction, and suddenly we've committed a .env file, an API key, or another secrets file to our Git repository. If you think simply removing the file and committing again will fix the issue, think again. The file will still exist in your Git commit history, posing a security risk. &lt;/p&gt;

&lt;p&gt;In this article, we’ll explore how to effectively remove secrets from Git history and mitigate potential security risks.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;When we commit a file to a Git repository and then remove it in a subsequent commit, the file isn't actually deleted from the repository's history. Git is designed to maintain a complete historical record, which means anyone with access to our repository can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Browse through previous commits&lt;/li&gt;
&lt;li&gt;View the contents of those commits, including our secrets&lt;/li&gt;
&lt;li&gt;Extract sensitive information with simple Git commands
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# For example, someone could do this to see our secrets:
git checkout &amp;lt;commit-hash-with-secrets&amp;gt;
cat path/to/secrets/file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scenario 1: Haven't Pushed to Remote Yet&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the commit containing secrets exists only in our local repository, this is the easiest scenario to fix:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;git reset&lt;/code&gt; to undo the commit&lt;/li&gt;
&lt;li&gt;Remove the sensitive data&lt;/li&gt;
&lt;li&gt;Commit again with the sanitized files
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Undo the last commit, but keep the changes in our working directory 
git reset --soft HEAD~1 

# Alternatively, if we want to discard the changes completely 
git reset --hard HEAD~1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Scenario 2: Already Pushed to Remote&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To truly remove sensitive information, we need to rewrite our Git history to completely eliminate the file from all commits. Here's how to do it effectively:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Identify the Secret Files&lt;/strong&gt;&lt;br&gt;
First, clearly identify which files contain sensitive information that needs to be removed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Remove the Files from Git History&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1) Using git filter-branch (note: recent Git versions warn that filter-branch has many pitfalls and recommend git-filter-repo instead)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git filter-branch --force --index-filter "git rm --cached --ignore-unmatch PATH_TO_SECRET_FILE" --prune-empty --tag-name-filter cat ----all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;PATH_TO_SECRET_FILE&lt;/code&gt; with the path to the file containing secrets.&lt;/p&gt;

&lt;p&gt;For replacing specific strings (like API keys) while keeping the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git filter-branch --tree-filter "find . -type f -name '*.config' -exec sed -i 's/YOUR_API_KEY/PLACEHOLDER_KEY/g' {} \;" HEAD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2) Using filter-repo&lt;/p&gt;

&lt;p&gt;git-filter-repo isn't built into Git itself, so install it first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# For Python users
pip install git-filter-repo

# For macOS
brew install git-filter-repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git filter-repo --path path/to/secrets/file --invert-paths
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;For more information about git-filter-repo, refer to &lt;a href="https://andrewlock.net/rewriting-git-history-simply-with-git-filter-repo/" rel="noopener noreferrer"&gt;this guide&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Force-Push the Changes&lt;/strong&gt;&lt;br&gt;
After cleaning history, we need to force push the changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push origin --force --all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;CAUTION: Force pushing rewrites history irreversibly ⚠️ Anyone who has already cloned or forked the repository will keep the old commits until they rebase or re-clone, and &lt;code&gt;--force-with-lease&lt;/code&gt; is a safer choice than &lt;code&gt;--force&lt;/code&gt; if collaborators may have pushed in the meantime.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Additional Recommendations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Revoke any exposed API keys or tokens&lt;/li&gt;
&lt;li&gt;Add sensitive files to &lt;code&gt;.gitignore&lt;/code&gt;: Ensure secret files are never tracked.&lt;/li&gt;
&lt;li&gt;Use Environment Variables: Store secrets in environment variables instead of committing them.&lt;/li&gt;
&lt;li&gt;Use Git Hooks: Automate pre-commit hooks to prevent secret files from being added.&lt;/li&gt;
&lt;li&gt;Use &lt;a href="https://docs.github.com/en/code-security/secret-scanning/enabling-secret-scanning-features" rel="noopener noreferrer"&gt;GitHub’s Secret Scanning&lt;/a&gt; If you're using Github or use Secret push protection for &lt;a href="https://docs.gitlab.com/user/application_security/secret_detection/secret_push_protection/" rel="noopener noreferrer"&gt;Gitlab&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
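&lt;p&gt;As a sketch of the Git-hooks idea (the blocked file patterns here are just examples), a pre-commit hook can refuse to commit anything that looks like a secrets file:&lt;/p&gt;

```shell
#!/bin/sh
# Hypothetical pre-commit hook sketch: reject commits that stage files
# matching common secret-file names. Patterns are illustrative examples.
blocked='\.env$|\.pem$|credentials\.json$'
staged=$(git diff --cached --name-only 2>/dev/null)

if echo "$staged" | grep -Eq "$blocked"; then
  echo "ERROR: refusing to commit files that look like secrets:"
  echo "$staged" | grep -E "$blocked"
  exit 1
fi
```

&lt;p&gt;Save it as &lt;code&gt;.git/hooks/pre-commit&lt;/code&gt; and mark it executable with &lt;code&gt;chmod +x&lt;/code&gt;. Tools like gitleaks or detect-secrets go further by scanning file contents rather than just names.&lt;/p&gt;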

&lt;blockquote&gt;
&lt;p&gt;Note: Remember that if your secret was exposed for any period of time, even briefly, you should consider it compromised and rotate it immediately.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>programming</category>
      <category>git</category>
      <category>tutorial</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
