<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Bishwas Bhandari</title>
    <description>The latest articles on Forem by Bishwas Bhandari (@developerbishwas).</description>
    <link>https://forem.com/developerbishwas</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F753186%2Fe54673c9-3e00-455b-8876-afb637824497.jpg</url>
      <title>Forem: Bishwas Bhandari</title>
      <link>https://forem.com/developerbishwas</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/developerbishwas"/>
    <language>en</language>
    <item>
      <title>I built a LinkedIn MCP server for Claude in a weekend. Here's the Python + Playwright pattern.</title>
      <dc:creator>Bishwas Bhandari</dc:creator>
      <pubDate>Mon, 04 May 2026 03:43:23 +0000</pubDate>
      <link>https://forem.com/developerbishwas/i-built-a-linkedin-mcp-server-for-claude-in-a-weekend-heres-the-python-playwright-pattern-59b5</link>
      <guid>https://forem.com/developerbishwas/i-built-a-linkedin-mcp-server-for-claude-in-a-weekend-heres-the-python-playwright-pattern-59b5</guid>
      <description>&lt;p&gt;&lt;em&gt;The LinkedIn Marketing API is enterprise-gated. Phantombuster wants $69/mo. The cookie-paste shortcut on Stack Overflow breaks within four requests. SuperMCP solves this with Playwright and your real Chrome session. Here's the architecture plus the prompt patterns I use weekly.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;I run a forum at &lt;a href="https://webmatrices.com" rel="noopener noreferrer"&gt;webmatrices.com&lt;/a&gt; where indie founders post about their SaaS bottlenecks. A lot of the same complaints surface on LinkedIn, in the comments under bigger founder posts. I wanted Claude to cross-reference both. The cleanest path turned out to be a LinkedIn MCP server that reuses your Chrome login session via Playwright. No API key, no third-party cloud, nothing leaves your laptop.&lt;/p&gt;

&lt;p&gt;Code's at &lt;a href="https://github.com/Bishwas-py/supermcp" rel="noopener noreferrer"&gt;github.com/Bishwas-py/supermcp&lt;/a&gt;. MIT-licensed. Install is &lt;code&gt;pip install supermcp &amp;amp;&amp;amp; supermcp setup&lt;/code&gt;. The LinkedIn-specific docs are &lt;a href="https://webmatrices.com/supermcp/linkedin-mcp" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you just want the working setup, skip to the architecture section. If you want to know why the cookie-paste shortcut fails, keep reading.&lt;/p&gt;

&lt;h2&gt;
  
  
  The naïve approach (and why it breaks)
&lt;/h2&gt;

&lt;p&gt;Before SuperMCP existed, I did the dumb thing one weekend. I opened LinkedIn in Chrome, opened devtools, copied my &lt;code&gt;li_at&lt;/code&gt; cookie out of &lt;code&gt;Application &amp;gt; Cookies&lt;/code&gt;, and pasted it into a Python script using &lt;code&gt;httpx&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;cookies&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;li_at&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AQED...redacted...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;httpx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://www.linkedin.com/voyager/api/me&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cookies&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;cookies&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That worked. I got a clean JSON response with my own profile data. I felt clever for about ninety seconds.&lt;/p&gt;

&lt;p&gt;Then I tried the search endpoint. &lt;code&gt;voyager/api/graphql/...&lt;/code&gt; returned HTTP 999, LinkedIn's non-standard request-denied status. Tried again. 999 again. Within four requests, LinkedIn sent a 2FA email to my real address: &lt;em&gt;"We noticed an unusual sign-in attempt."&lt;/em&gt; No sign-in had happened. The script was using my session cookie. The challenge fired anyway.&lt;/p&gt;

&lt;p&gt;The reason: LinkedIn's anti-abuse system isn't just looking at cookies. It's looking at TLS fingerprints, JA3 signatures, the order of HTTP/2 headers, the &lt;code&gt;User-Agent&lt;/code&gt;/Accept-Language combo, and a few dozen other things that make a real Chrome request look like a real Chrome request. &lt;code&gt;httpx&lt;/code&gt; doesn't lie about being &lt;code&gt;httpx&lt;/code&gt;, even with a stolen cookie. Once the fingerprint was wrong, the cookie became evidence rather than authentication.&lt;/p&gt;

&lt;p&gt;So copying cookies into a script is the &lt;em&gt;correct&lt;/em&gt; mental model and the &lt;em&gt;wrong&lt;/em&gt; implementation. The fix is: don't copy the cookie. Make a real browser do the request. Which is what a headless Chromium with persistent storage state actually is.&lt;/p&gt;

&lt;h2&gt;
  
  
  The architecture that worked
&lt;/h2&gt;

&lt;p&gt;Three layers, nothing fancy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Claude / Cursor  ←stdio→  MCP server (Python + FastMCP)
                            │
                            └── Playwright (headless Chromium
                                  with persistent storage state)
                                  │
                                  └── linkedin.com (your real session)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The trick is &lt;strong&gt;persistent storage state&lt;/strong&gt;. Playwright lets you point a Chromium instance at a directory containing cookies + localStorage + IndexedDB from a previous session. That session can be one Playwright created earlier (run it once, log in by hand, save state to JSON), or in some configs you can clone a directory off your real Chrome profile. Either way, the resulting browser is &lt;em&gt;real&lt;/em&gt;. The TLS fingerprint is real. The JA3 is real. The header order is real. LinkedIn sees a logged-in user opening their search page in Chrome, which is what's actually happening.&lt;/p&gt;
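
&lt;p&gt;A minimal sketch of that run-once-then-reuse flow with plain Playwright (the file name and function names are illustrative, not SuperMCP's actual code; the Playwright imports are kept inside the functions so the sketch reads standalone):&lt;/p&gt;

```python
# Sketch, assuming `pip install playwright` and `playwright install chromium`.
import asyncio
from pathlib import Path

STATE_FILE = Path("linkedin_state.json")  # hypothetical location

async def login_and_save_state() -> None:
    """One-time interactive login; persists cookies + localStorage to JSON."""
    from playwright.async_api import async_playwright
    async with async_playwright() as pw:
        browser = await pw.chromium.launch(headless=False)  # visible, so you can log in
        context = await browser.new_context()
        page = await context.new_page()
        await page.goto("https://www.linkedin.com/login")
        input("Log in in the browser window, then press Enter here... ")
        await context.storage_state(path=str(STATE_FILE))  # dump cookies + localStorage
        await browser.close()

async def open_logged_in_context():
    """Every later run: headless Chromium that is already logged in."""
    from playwright.async_api import async_playwright
    pw = await async_playwright().start()
    browser = await pw.chromium.launch(headless=True)
    return await browser.new_context(storage_state=str(STATE_FILE))

# Run login_and_save_state() once by hand; production code only ever calls
# open_logged_in_context().
```
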

&lt;p&gt;After that, FastMCP wraps the tools in about 20 lines of glue per tool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastmcp&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FastMCP&lt;/span&gt;

&lt;span class="n"&gt;mcp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;FastMCP&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;supermcp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@mcp.tool&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;linkedin_search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Search LinkedIn posts. Returns markdown with author, reactions, URL.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new_page&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;goto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://www.linkedin.com/search/results/content/?keywords=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c1"&gt;# parse results from the rendered DOM
&lt;/span&gt;    &lt;span class="bp"&gt;...&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;format_as_markdown&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few things I learned the hard way once I had it working:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Return markdown, not JSON.&lt;/strong&gt; Models read markdown faster (about 30% fewer tokens, in my testing on Claude 4.6) and they chain follow-up calls more reliably when IDs and URLs are surfaced as plain text rather than buried in nested objects. I have a small format helper that always emits stable IDs in &lt;code&gt;**id:** abc123&lt;/code&gt; form so Claude can call &lt;code&gt;linkedin_post_comments(post_id="abc123")&lt;/code&gt; after a search.&lt;/p&gt;
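
&lt;p&gt;A toy version of that format helper, with assumed field names (this is not SuperMCP's actual schema):&lt;/p&gt;

```python
# Emit one markdown block per post, with a stable **id:** line so Claude can
# chain linkedin_post_comments(post_id=...) after a search without parsing JSON.
def format_as_markdown(results: list[dict]) -> str:
    blocks = []
    for post in results:
        blocks.append(
            f"### {post['author']} ({post['reactions']} reactions)\n"
            f"**id:** {post['id']}\n"
            f"**url:** {post['url']}\n\n"
            f"{post['text']}"
        )
    return "\n\n---\n\n".join(blocks)
```

&lt;p&gt;The stable &lt;code&gt;**id:**&lt;/code&gt; line is the whole point: it hands the model a token it can copy verbatim into the next tool call.&lt;/p&gt;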

&lt;p&gt;&lt;strong&gt;Be honest in tool descriptions.&lt;/strong&gt; Claude routes tool calls based on the docstring you give the MCP server. If your description is vague, Claude will use it for the wrong intent. &lt;code&gt;linkedin_search&lt;/code&gt; is &lt;em&gt;post search by keyword&lt;/em&gt;, not people search, so the docstring says exactly that. People search needs a different tool, which I don't ship because LinkedIn flags it harder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cap the per-day request budget at the MCP layer.&lt;/strong&gt; Not at the LinkedIn layer. The reason is that Claude will sometimes loop, burning through 40 search calls trying to refine a query that was wrong from the start. Catching that at the MCP server (one shared counter, not per-tool) is much cheaper than letting LinkedIn catch it.&lt;/p&gt;
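
&lt;p&gt;The shared counter really is this small. A sketch with illustrative names, checked before any tool touches the browser:&lt;/p&gt;

```python
# One budget for the whole MCP server, not per tool, so a looping Claude
# burns the same counter no matter which tool it calls.
import datetime

class DailyBudget:
    def __init__(self, limit: int = 100) -> None:
        self.limit = limit
        self.count = 0
        self.day = datetime.date.today()

    def spend(self) -> None:
        today = datetime.date.today()
        if today != self.day:       # midnight rollover resets the counter
            self.day, self.count = today, 0
        if self.count >= self.limit:
            raise RuntimeError("daily request budget exhausted; backing off")
        self.count += 1

budget = DailyBudget()  # one shared instance, imported by every tool
```
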

&lt;h2&gt;
  
  
  The three tools, and the prompts that actually work
&lt;/h2&gt;

&lt;p&gt;The whole LinkedIn surface is three tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;linkedin_search&lt;/code&gt; (the workhorse)
&lt;/h3&gt;

&lt;p&gt;This is what I use 90% of the time. Keyword search across all public LinkedIn posts. Pain-point mining looks like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Search LinkedIn for posts where solo founders complain about Stripe billing edge cases. Pull the top 25, group by complaint pattern, surface the top three with example URLs.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Claude calls &lt;code&gt;linkedin_search&lt;/code&gt;, gets back markdown, synthesizes. The synthesis step is where the value compounds. Claude is much better at clustering complaints than I am at reading 25 posts in a row.&lt;/p&gt;

&lt;p&gt;Two prompt patterns that work consistently for me:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;"Group by pattern, then surface 3."&lt;/strong&gt; Claude is bad at "summarize this," good at "cluster then rank." Asking for clustering first gives you a much more useful synthesis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;"Quote the most concrete one."&lt;/strong&gt; Adding &lt;em&gt;"include one literal quote from the most concrete-sounding post"&lt;/em&gt; at the end of the prompt forces Claude to actually pick a post rather than confabulate a synthesis. Quotes are a forcing function for groundedness.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;linkedin_feed&lt;/code&gt; (for taking the temperature)
&lt;/h3&gt;

&lt;p&gt;Pulls your algorithmic home feed. Good for "what is my actual network talking about today?", a broader and less targeted read than a keyword search. I don't use it for research; I use it for context-setting before I post.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;linkedin_post_comments&lt;/code&gt; (the underrated one)
&lt;/h3&gt;

&lt;p&gt;Reads the comment thread on a specific post URL. People underrate this because they think the post is the signal. The post is usually marketing. The signal is in the comments, where the post's audience says what &lt;em&gt;actually&lt;/em&gt; shipped vs. what was promised. If a founder posts &lt;em&gt;"we hit $1M ARR with 3 people,"&lt;/em&gt; the comments will sometimes have a former employee or a competitor adding important context. That's the data you want.&lt;/p&gt;

&lt;h2&gt;
  
  
  How SuperMCP keeps accounts under the radar
&lt;/h2&gt;

&lt;p&gt;This is the part I cared most about figuring out before shipping. After the cookie-paste shortcut taught me how LinkedIn's anti-abuse stack works, I designed SuperMCP around six practices that have kept my main account clean over the last 6 months of daily use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Real browser, not headless flag.&lt;/strong&gt; Playwright with &lt;code&gt;headless=False&lt;/code&gt; for the first session, then &lt;code&gt;headless=True&lt;/code&gt; in production. Some bot detection looks specifically at the &lt;code&gt;headless&lt;/code&gt; Chrome boolean.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One request at a time.&lt;/strong&gt; No parallelism. If Claude wants 25 results, I serve them one fetch at a time, with a 1.5–3s jitter between actions. LinkedIn cares more about pace than volume.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reuse the session.&lt;/strong&gt; Don't launch a new browser context per request. Reusing one warm session looks human; spawning fresh contexts looks scripted.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No people search.&lt;/strong&gt; Post search and feed reading are common user actions. People search at scale isn't, and LinkedIn flags it. I just don't ship the tool.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Daily budget cap, low default.&lt;/strong&gt; The free tier is 100 requests/day. That's roughly three times what a human clicks through in a single research session, yet still well below the sustained volume LinkedIn treats as automation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bail fast on challenges.&lt;/strong&gt; If the page returns a captcha, a 2FA prompt, or even a soft "is this you?" check, SuperMCP stops the run, logs it, and backs off for 24 hours. Almost every flag escalation comes from tools that retry through challenges instead of standing down.&lt;/li&gt;
&lt;/ul&gt;
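
&lt;p&gt;The "one request at a time" and jitter rules above reduce to a few lines. A hedged sketch, not SuperMCP's actual implementation:&lt;/p&gt;

```python
# Serialize every browser action behind one lock, with 1.5-3 s of random
# jitter before each. LinkedIn cares more about pace than volume.
import asyncio
import random

_lock = None  # created lazily, inside the running event loop

def jitter() -> float:
    """Human-ish pause between browser actions."""
    return random.uniform(1.5, 3.0)

async def paced(action):
    """Run one action at a time: no parallelism, jittered pacing."""
    global _lock
    if _lock is None:
        _lock = asyncio.Lock()
    async with _lock:
        await asyncio.sleep(jitter())
        return await action()
```
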

&lt;p&gt;For context on why this matters: LinkedIn migrated their post composer from ProseMirror to Quill earlier this year and broke practically &lt;a href="https://dev.to/achiya-automation/linkedin-quietly-migrated-from-prosemirror-to-quill-and-broke-every-browser-automation-tool-that-4927"&gt;every browser-automation tool that touched the editor&lt;/a&gt;. They're actively updating selectors and fingerprints. A maintained tool will weather these changes. A frozen one won't.&lt;/p&gt;

&lt;h2&gt;
  
  
  How this stacks up against the alternatives
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Approach&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;th&gt;Setup&lt;/th&gt;
&lt;th&gt;Where it runs&lt;/th&gt;
&lt;th&gt;Weakness&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SuperMCP (LinkedIn MCP)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Free / $9 one-time&lt;/td&gt;
&lt;td&gt;&lt;code&gt;pip install&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Your laptop&lt;/td&gt;
&lt;td&gt;DOM changes break things; needs maintenance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LinkedIn Marketing API&lt;/td&gt;
&lt;td&gt;Free if approved&lt;/td&gt;
&lt;td&gt;Months of partner approval&lt;/td&gt;
&lt;td&gt;Your server&lt;/td&gt;
&lt;td&gt;Indie founders don't get approved&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phantombuster&lt;/td&gt;
&lt;td&gt;$69/mo&lt;/td&gt;
&lt;td&gt;Cookie hand-off to their cloud&lt;/td&gt;
&lt;td&gt;Their cloud&lt;/td&gt;
&lt;td&gt;Third party operates on your account&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Apify / Bright Data&lt;/td&gt;
&lt;td&gt;Pay per result&lt;/td&gt;
&lt;td&gt;Actor + budget&lt;/td&gt;
&lt;td&gt;Their cloud&lt;/td&gt;
&lt;td&gt;Costs unpredictable at scale&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DIY Selenium + cookies&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;td&gt;Days of selector wrangling&lt;/td&gt;
&lt;td&gt;Your laptop&lt;/td&gt;
&lt;td&gt;The naïve version of this is the cookie-paste shortcut. Breaks fast.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The honest case for SuperMCP isn't that it's strictly better than all of these. The Marketing API is the right answer if you can get approved. Phantombuster is the right answer if you don't want to think about it. SuperMCP is the right answer if you (a) can't get the API, (b) don't want to send your cookies to a third-party cloud, and (c) want it talking directly to Claude or Cursor instead of a separate dashboard.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;supermcp
supermcp setup           &lt;span class="c"&gt;# gets your API key, auto-installs Chromium&lt;/span&gt;
claude mcp add supermcp &lt;span class="nt"&gt;--&lt;/span&gt; supermcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cursor users, drop this in &lt;code&gt;settings.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"supermcp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"supermcp"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, ask Claude things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Find LinkedIn posts where someone is complaining about Stripe Checkout. Group by complaint, surface 3 with quotes.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Pull the top comments on this LinkedIn post: [URL]. Where do commenters disagree with the OP?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Cross-reference my LinkedIn feed against Reddit's r/SaaS. What's in both?&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The third one needs the Reddit MCP too, which is the same package.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I bundled this with Reddit, Twitter, and the rest
&lt;/h2&gt;

&lt;p&gt;Halfway through the LinkedIn build, I realized the same Playwright-with-persistent-state trick worked for Reddit (where the API is now paid and rate-limited), Twitter/X (where the cheapest tier is $200/mo), Medium, Dev.to, BlackHatWorld, and Google Trends/News. By the end of the second weekend I had 26 tools across 7 sources, all using the same auth pattern. The bundle is called SuperMCP. One install, all sources. Repo: &lt;a href="https://github.com/Bishwas-py/supermcp" rel="noopener noreferrer"&gt;github.com/Bishwas-py/supermcp&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you only need LinkedIn, you only call the LinkedIn tools. The other platforms don't activate unless you use them.&lt;/p&gt;




&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Is this safe for my LinkedIn account?&lt;/strong&gt;&lt;br&gt;
Yes, with the defaults SuperMCP ships. The free-tier rate cap (100 requests/day) sits well under any threshold LinkedIn flags as automation, and the Playwright setup uses your real browser fingerprint instead of a stripped-down &lt;code&gt;httpx&lt;/code&gt; request, so the traffic looks like a normal logged-in user. I've run this on my main account daily for 6 months without a flag. The one caveat: a brand-new LinkedIn account doing aggressive research will draw attention faster than an established one, so stick to the defaults until you have a feel for it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is this against LinkedIn's TOS?&lt;/strong&gt;&lt;br&gt;
LinkedIn's User Agreement prohibits using "automated software" on the service. The same is true for X, Reddit, and Medium. The practical situation is that &lt;em&gt;every&lt;/em&gt; browser-automation tool (Phantombuster, Apify, this) operates in that gray zone, as do plenty of personal-productivity Chrome extensions. SuperMCP runs locally at human-scale rates with your own session. I'm not your lawyer. The repo has a TOS notice; use accordingly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does the Reddit / Twitter / Medium MCP need an API key?&lt;/strong&gt;&lt;br&gt;
No. Same Chrome-session trick. Reddit's API is now paid; Twitter's cheapest tier is $200/mo; Medium has no read API. All three work via your existing logged-in session.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Which AI tools support this?&lt;/strong&gt;&lt;br&gt;
Any MCP-compatible client: Claude Desktop, Claude Code, Cursor, Windsurf, Cline, GitHub Copilot Agent, Continue. SuperMCP is a standard stdio MCP server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where do I get an API key?&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;supermcp setup&lt;/code&gt; after &lt;code&gt;pip install supermcp&lt;/code&gt;. Free tier (100 requests/day) is automatic. Unlimited tier is $9 one-time at &lt;a href="https://webmatrices.com/supermcp" rel="noopener noreferrer"&gt;webmatrices.com/supermcp&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you build something on top of this, I'd love to see it. Drop an issue or a PR at &lt;a href="https://github.com/Bishwas-py/supermcp" rel="noopener noreferrer"&gt;github.com/Bishwas-py/supermcp&lt;/a&gt;. The LinkedIn-specific docs are at &lt;a href="https://webmatrices.com/supermcp/linkedin-mcp" rel="noopener noreferrer"&gt;webmatrices.com/supermcp/linkedin-mcp&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>automation</category>
      <category>claude</category>
      <category>mcp</category>
      <category>python</category>
    </item>
    <item>
      <title>I built a free backlink suggester for indie founders. Here is what 4 weeks of audits taught me.</title>
      <dc:creator>Bishwas Bhandari</dc:creator>
      <pubDate>Wed, 29 Apr 2026 06:27:13 +0000</pubDate>
      <link>https://forem.com/developerbishwas/i-built-a-free-backlink-suggester-for-indie-founders-here-is-what-4-weeks-of-audits-taught-me-33n</link>
      <guid>https://forem.com/developerbishwas/i-built-a-free-backlink-suggester-for-indie-founders-here-is-what-4-weeks-of-audits-taught-me-33n</guid>
      <description>&lt;p&gt;About a month ago I added a feature to Webmatrices called the Backlink Suggester. It is a guided form. A founder picks a category (guest posts, resource pages, competitor backlinks), describes their challenge, drops their site URL, and posts to the community. I review every one personally, and the community chimes in too. Free for the public version. $3 for a private audit if they want it kept off the public feed.&lt;/p&gt;

&lt;p&gt;The reason I built it was specific. Every week I was getting the same DM on Webmatrices. "Hey can you take a look at my site and tell me where I should be looking for backlinks." After the third or fourth time, I realized that question deserved a structured tool, not a one-off DM thread.&lt;/p&gt;

&lt;p&gt;Four weeks in, I have written audits for a Japanese name generator competing against Shopify's tool, a personal finance blog on Blogger trying to figure out why guest pitches kept failing, an F-1 visa interview prep tool with seven blog articles trying to find study-abroad backlinks, and a multi-niche utility platform with 90+ tools trying to figure out which cluster to lead with for backlink outreach. Different niches, different sets of advice, all starting from the actual specifics of the site instead of a generic checklist.&lt;/p&gt;

&lt;p&gt;This post is a writeup of what I learned, what surprised me, and the architecture of how it is built.&lt;/p&gt;

&lt;h2&gt;
  
  
  how it actually works
&lt;/h2&gt;

&lt;p&gt;The tool is a 5-step form. Pick a category (guest posts, resource pages, competitor backlinks). Pick the specific challenge inside that category. Tell me your situation. Pick what kind of feedback you want. Drop your URL. The free path goes public on the Webmatrices feed, where the community and I reply to it. The $3 path goes private: just the user and me.&lt;/p&gt;

&lt;p&gt;The dual tier exists because audits take real time. I write a 1000+ word reply per case. Scaling that without a small commercial backstop would not work, and I did not want the public version to disappear behind a paywall. The $3 number is intentional. Low enough that anyone who actually wants targeted advice can afford it. High enough to filter out drive-by submissions that would burn audit time.&lt;/p&gt;

&lt;h2&gt;
  
  
  the four patterns I keep seeing
&lt;/h2&gt;

&lt;p&gt;After about 20 audits, I have a working list of what founders consistently get wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The "scattered niche" trap.&lt;/strong&gt; Multi-tool platforms cannot pitch as "a tools website" because editors do not know what category to put them in. The fix is pitching each cluster separately. SEO tools to digital marketing blogs. Islamic tools to Muslim lifestyle sites. Car calculators to automotive blogs. Same site, different angles. Most people do not realize this until I point it out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The "pitch a stranger" assumption.&lt;/strong&gt; Cold outreach response rates have collapsed to about 4% on average across the industry. People keep trying harder at the thing that used to work. The actual move is community presence over broadcast outreach. Founders showing up on Reddit, Indie Hackers, and dev forums consistently for six months get backlinks naturally because the community starts citing them when relevant questions come up. The ones still cold-pitching get nothing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The "directory submissions are tier 3" fossil.&lt;/strong&gt; Five years ago directories were considered low-quality filler. Today they are the easiest authority play for new sites because they are the one channel that does not depend on a human gatekeeper saying yes. One audit subject submitted to about 200 startup directories in his first month, hit DA 19 in 60 days, and that authority foundation was the reason his content started ranking nine months later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The "competitor backlinks" rabbit hole.&lt;/strong&gt; People come in asking for full competitor backlink analysis but what they actually need is to identify the 3-5 sites their competitors get links from that are realistic for them to also approach. The framing matters more than the data dump. The Ahrefs free tier is enough for the question being asked.&lt;/p&gt;

&lt;h2&gt;
  
  
  what surprised me
&lt;/h2&gt;

&lt;p&gt;I expected the audits to be similar across niches. They are not. Every niche has its own unwritten rules. The strategy that worked for a finance blog (HARO replacements + resource page outreach) is the wrong move for a Japanese name generator (anime and worldbuilding community placements), which is in turn the wrong move for an F-1 visa prep tool (university .edu outreach + immigration consultant directories).&lt;/p&gt;

&lt;p&gt;I also expected the $3 tier to barely matter. It does. People who pay $3 are dramatically more likely to actually act on the advice. I think the $3 functions as a commitment device, not a price.&lt;/p&gt;

&lt;p&gt;The thing I did not expect: I am writing better audits because I review them publicly. Every public audit gets read by other community members and the next person submitting reads my last few audits before posting. That has changed how I write. The audits are more structured, more specific, and have less filler than they did when I was just sending one-off DMs.&lt;/p&gt;

&lt;h2&gt;
  
  
  what is on the roadmap
&lt;/h2&gt;

&lt;p&gt;Three things I want to add. None of them are urgent, all are obvious.&lt;/p&gt;

&lt;p&gt;A "search past audits" view, so people can browse audits in their niche before submitting their own. Right now the audits are scattered across the forum's &lt;code&gt;digi-work&lt;/code&gt; tag, which is mixed with other content. A dedicated index would help.&lt;/p&gt;

&lt;p&gt;A simple analytics view for premium users showing what the audit predicted vs what they actually did vs what their backlink count looked like 60 days later. Closing the feedback loop.&lt;/p&gt;

&lt;p&gt;A "second opinion" feature where another experienced community member can volunteer to review the same audit and offer a different angle. Right now I am the only one writing audits for premium subscribers. Adding peers would scale better.&lt;/p&gt;

&lt;h2&gt;
  
  
  the meta-point
&lt;/h2&gt;

&lt;p&gt;Indie devs and small SaaS founders have a backlink-help problem and the existing market is mostly $129/mo SaaS tools that show data but not strategy. The gap is human-to-human advice on specific situations. A community plus an admin who actually writes audits fills that gap at $3 instead of $129.&lt;/p&gt;

&lt;p&gt;I am not claiming this is going to scale to millions of users. It is more like a help desk that happens to be public. But the specific niche, "small founders who need targeted backlink advice they cannot get from a tool dashboard," is real and underserved.&lt;/p&gt;

&lt;p&gt;If you want to try it, the tool is at &lt;a href="https://webmatrices.com/backlink-suggester" rel="noopener noreferrer"&gt;webmatrices.com/backlink-suggester&lt;/a&gt;. Free version works without signup. Drop your context in. I will write you back, usually within a day.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>seo</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>I Stopped Fighting the Reddit API, So I Built SuperMCP to Read It for Me</title>
      <dc:creator>Bishwas Bhandari</dc:creator>
      <pubDate>Sat, 25 Apr 2026 13:15:34 +0000</pubDate>
      <link>https://forem.com/developerbishwas/i-stopped-fighting-the-reddit-api-so-i-built-supermcp-to-read-it-for-me-fe1</link>
      <guid>https://forem.com/developerbishwas/i-stopped-fighting-the-reddit-api-so-i-built-supermcp-to-read-it-for-me-fe1</guid>
      <description>&lt;p&gt;Last year I was trying to add Reddit research to my Claude Code workflow. Seemed straightforward. Then I spent an afternoon registering an OAuth app, managing token refresh, and watching my script hit rate limits after 10 requests. Tried PRAW next, same wall. Tried raw scraping, it broke within a week when Reddit changed their markup.&lt;/p&gt;

&lt;p&gt;I stepped back and thought: I'm already logged into Reddit in Chrome. Why can't my AI tools just use that session?&lt;/p&gt;

&lt;p&gt;That's how &lt;a href="https://pypi.org/project/supermcp/" rel="noopener noreferrer"&gt;SuperMCP&lt;/a&gt; started.&lt;/p&gt;

&lt;h2&gt;
  
  
  What It Actually Does
&lt;/h2&gt;

&lt;p&gt;SuperMCP is an MCP server (Model Context Protocol, Anthropic's open standard for connecting AI tools to data sources). It gives Claude, Cursor, Windsurf, or any MCP-compatible tool access to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reddit&lt;/strong&gt;: search posts, read full threads with comments, browse subreddits, check user activity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Twitter/X&lt;/strong&gt;: search tweets, get reply threads, pull user timelines&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Google Trends&lt;/strong&gt;: real-time trending topics by region&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Google News&lt;/strong&gt;: search articles, top headlines, topic-filtered news&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;13 tools total, all running on your machine as a local process.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trick: Your Chrome Login Session
&lt;/h2&gt;

&lt;p&gt;Here's what makes this different from every Reddit scraper tutorial on dev.to.&lt;/p&gt;

&lt;p&gt;SuperMCP reads cookies from your Chrome browser's local database, the same way password managers do. It spins up a headless Chromium instance that browses Reddit and Twitter as &lt;em&gt;you&lt;/em&gt;. You don't need API keys for Reddit or Twitter, don't need to register an OAuth app, don't need to deal with token refresh.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You (logged into Reddit in Chrome)
        ↓
Chrome's cookie database (on your disk)
        ↓
SuperMCP reads cookies locally
        ↓
Headless Chromium browses as you
        ↓
Results returned to Claude/Cursor via MCP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
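&lt;p&gt;A minimal sketch of that pattern in Python with Playwright. The helper names are mine, not SuperMCP's internals, and the cookie rows are assumed to come from whatever reads Chrome's database:&lt;/p&gt;

```python
# Illustrative sketch of the session-reuse pattern, not SuperMCP's
# actual source. Cookie extraction from Chrome's database is
# abstracted away as plain (name, value, domain, path) tuples.

def to_playwright_cookies(raw_cookies):
    """Convert (name, value, domain, path) tuples -- e.g. rows read
    from Chrome's cookie store -- into Playwright's cookie format."""
    return [
        {"name": name, "value": value, "domain": domain, "path": path}
        for name, value, domain, path in raw_cookies
    ]

def fetch_as_user(url, raw_cookies):
    """Open a page in headless Chromium with the user's own session."""
    # Imported lazily so the pure helper above is usable without a
    # browser install.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as pw:
        browser = pw.chromium.launch(headless=True)
        context = browser.new_context()
        context.add_cookies(to_playwright_cookies(raw_cookies))
        page = context.new_page()
        page.goto(url)
        html = page.content()
        browser.close()
    return html
```

&lt;p&gt;The point is the split: cookie extraction is a local file read, and everything after that is an ordinary authenticated browser session.&lt;/p&gt;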



&lt;p&gt;Google Trends and News use public RSS feeds directly. No login, no browser, instant results.&lt;/p&gt;
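&lt;p&gt;The RSS path is even simpler. A sketch of a no-login fetch, assuming Google News's public RSS search endpoint (function names are mine):&lt;/p&gt;

```python
# Minimal sketch of the no-login RSS path. The feed URL is Google
# News's public RSS search endpoint; function names are mine.
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

def news_search_url(query, lang="en-US", country="US"):
    """Build the public RSS search URL for a query."""
    params = urlencode({"q": query, "hl": lang, "gl": country})
    return "https://news.google.com/rss/search?" + params

def fetch_headlines(query, limit=5):
    """Fetch and parse the feed, returning up to `limit` titles."""
    with urlopen(news_search_url(query)) as resp:
        root = ET.fromstring(resp.read())
    return [item.findtext("title") for item in root.iter("item")][:limit]
```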

&lt;p&gt;&lt;strong&gt;Your data never leaves your machine.&lt;/strong&gt; The only external call is API key validation to webmatrices.com.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup: 3 Commands
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;supermcp
supermcp setup          &lt;span class="c"&gt;# paste your API key, auto-installs Chromium&lt;/span&gt;
claude mcp add supermcp &lt;span class="nt"&gt;--&lt;/span&gt; supermcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. If you use &lt;code&gt;uvx&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uvx &lt;span class="nt"&gt;--from&lt;/span&gt; supermcp supermcp setup
claude mcp add supermcp &lt;span class="nt"&gt;--&lt;/span&gt; uvx &lt;span class="nt"&gt;--from&lt;/span&gt; supermcp supermcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Cursor, add to &lt;code&gt;.cursor/mcp.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"supermcp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"supermcp"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get your free API key at &lt;a href="https://webmatrices.com/supermcp" rel="noopener noreferrer"&gt;webmatrices.com/supermcp&lt;/a&gt;. 100 requests/day on the free tier, unlimited for a one-time $9.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Looks Like in Practice
&lt;/h2&gt;

&lt;p&gt;Once installed, you just talk to Claude normally. Here are prompts I use:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Market research before building a feature:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Search Reddit for posts about 'invoice automation for freelancers'. What are people actually complaining about?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Claude calls &lt;code&gt;reddit_search&lt;/code&gt;, pulls real threads with real comments, and summarizes the pain points. No copy-pasting URLs, no switching tabs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitoring competitor sentiment:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Search Twitter for mentions of [competitor] from the last week. What's the general sentiment?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Trend validation:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"What's trending on Google Trends in the US right now? Anything related to developer tools?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Deep-dive on a thread:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Get this Reddit post with all comments: [url]. Summarize the key takeaways and any tools people are recommending."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Content research:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"What's hot on r/SideProject this week? Any common themes?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The 13 Tools
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;What It Does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;reddit_search&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Search all of Reddit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;reddit_get_post&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Full post + comments&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;reddit_get_subreddit_posts&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Browse any subreddit (hot/new/top)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;reddit_search_subreddit&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Search within a subreddit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;reddit_get_user_activity&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;A user's recent posts &amp;amp; comments&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;twitter_search&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Search tweets&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;twitter_get_tweet&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tweet + full reply thread&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;twitter_get_user_tweets&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Recent tweets from any account&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;trends_get_trending&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Real-time trending by region&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;news_search&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Search Google News&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;news_top&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Top headlines&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;news_by_topic&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;News by category&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;trends_interest_by_region&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Regional interest for any term&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;(and we're constantly adding new tools: LinkedIn, Medium, Dev.to, and other social platforms are next)&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Not Just Use the APIs Directly?
&lt;/h2&gt;

&lt;p&gt;I tried. Here's what happened:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reddit API&lt;/strong&gt;: Free tier is heavily rate-limited. The paid Data API charges per request, and it adds up fast when your AI agent makes dozens of calls per conversation. You also need to register an app, manage OAuth tokens, handle refresh flows. I got it working once, and then spent more time maintaining the auth than actually using the data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Twitter/X API&lt;/strong&gt;: The free tier only lets you &lt;em&gt;post&lt;/em&gt; tweets. Reading requires the Basic tier at $100/month. The Pro tier is $5,000/month. For a developer tool that just needs to search tweets? Absurd.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Trends&lt;/strong&gt;: No official API exists. The popular &lt;code&gt;pytrends&lt;/code&gt; library reverse-engineers Google's internal endpoints and breaks regularly.&lt;/p&gt;

&lt;p&gt;SuperMCP sidesteps all of this. You're logged into Reddit and Twitter in Chrome already. SuperMCP just uses that session.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Do I need Chrome open?&lt;/strong&gt;&lt;br&gt;
No. SuperMCP reads cookies from Chrome's database on disk. Chrome doesn't need to be running.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does it work on macOS?&lt;/strong&gt;&lt;br&gt;
Yes. On first run, macOS will ask for your login keychain password to read Chrome's cookies. This is the standard macOS security prompt. Click "Always Allow" so it doesn't ask again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I use this with tools other than Claude?&lt;/strong&gt;&lt;br&gt;
Any MCP client works. Claude Desktop, Claude Code, Cursor, Windsurf, Cline, and anything else that supports the MCP protocol.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What about Firefox or other browsers?&lt;/strong&gt;&lt;br&gt;
Chrome only for now. Firefox stores cookies differently and would need a separate integration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is this scraping?&lt;/strong&gt;&lt;br&gt;
SuperMCP browses Reddit and Twitter as you, using your authenticated session in a headless browser. It's equivalent to you opening a tab and reading the page yourself.&lt;/p&gt;




&lt;p&gt;PyPI: &lt;a href="https://pypi.org/project/supermcp" rel="noopener noreferrer"&gt;pypi.org/project/supermcp&lt;/a&gt; · Python 3.10+ · macOS / Windows / Linux&lt;/p&gt;

&lt;p&gt;If you're building with MCP servers, I'd be curious what data sources you wish you had access to. Drop a comment.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>showdev</category>
      <category>mcp</category>
    </item>
    <item>
      <title>Gemma4 vs Claude Code: I Tried the Switch. Here's What Broke First.</title>
      <dc:creator>Bishwas Bhandari</dc:creator>
      <pubDate>Mon, 20 Apr 2026 01:35:33 +0000</pubDate>
      <link>https://forem.com/developerbishwas/gemma4-vs-claude-code-i-tried-the-switch-heres-what-broke-first-3p07</link>
      <guid>https://forem.com/developerbishwas/gemma4-vs-claude-code-i-tried-the-switch-heres-what-broke-first-3p07</guid>
      <description>&lt;p&gt;Every few months someone drops a new open model and the local AI community collectively loses their minds. "This is the one. This kills the subscriptions." It happened with Llama 3. With Qwen. With Gemma 3.&lt;/p&gt;

&lt;p&gt;None of them actually did it.&lt;/p&gt;

&lt;p&gt;So when Gemma 4 landed and the numbers looked genuinely scary, I decided to stop theorizing and just try it — wired into my actual dev workflow, building actual software. Not chat. Not benchmarks. Shipping code.&lt;/p&gt;

&lt;p&gt;Here's what happened.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Made This Test Worth Running
&lt;/h2&gt;

&lt;p&gt;I'll be honest: I almost didn't bother. I'd been burned too many times by models that looked great in a playground and fell apart the second I asked them to do real work.&lt;/p&gt;

&lt;p&gt;But Gemma 4's numbers are different. &lt;strong&gt;31B parameters, #3 on Chatbot Arena, Apache 2.0 license, runs on a single GPU.&lt;/strong&gt; Someone got it running on a 6GB phone. A developer built an Android automation agent with it before the week was out. The 26B MoE variant hits 80–110 tokens per second on an RTX 3090.&lt;/p&gt;

&lt;p&gt;And here's the number that actually convinced me to test it: the τ²-bench agentic tool-use score. Gemma 3 scored &lt;strong&gt;6.6%&lt;/strong&gt; on that benchmark — meaning it failed 93 out of 100 tool calls. Basically useless as an agent. Gemma 4 31B scores &lt;strong&gt;86.4%&lt;/strong&gt; on the same test. That's not a marginal improvement. That's an entirely different model category.&lt;/p&gt;

&lt;p&gt;So I tested it. And I kept notes.&lt;/p&gt;




&lt;h2&gt;
  
  
  The First 4 Hours Are Genuinely Impressive
&lt;/h2&gt;

&lt;p&gt;Single-file edits? Fast and accurate. Writing fresh functions from scratch? Solid. It understood what I was asking, gave clean code, and didn't hallucinate imports.&lt;/p&gt;

&lt;p&gt;Speed alone made me want this to work. No API latency. No waiting. Just results.&lt;/p&gt;

&lt;p&gt;But then I gave it a real task.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where It Broke
&lt;/h2&gt;

&lt;p&gt;I asked it to refactor a module that touched 4 files. Nothing exotic — just a cleanup that involved renaming a function and updating its callers.&lt;/p&gt;

&lt;p&gt;Here's what happened:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;File 1&lt;/strong&gt;: Edited perfectly. I was impressed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;File 2&lt;/strong&gt;: Hallucinated the path. Generated changes to a file that didn't exist at that location.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;File 3&lt;/strong&gt;: Wrote code that called the function it had just deleted in File 1.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Classic context collapse. The model understood the task. It just couldn't hold the thread across multiple files under load.&lt;/p&gt;

&lt;p&gt;I went back to Claude Code. Same refactor. One shot. Done.&lt;/p&gt;

&lt;p&gt;That's the gap right now — and it's not about intelligence.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Tool Calling Problem Is Worse Than the Benchmarks Suggest
&lt;/h2&gt;

&lt;p&gt;Here's the thing about that 86.4% τ²-bench score: it's for the 31B Dense model. Most people running Gemma 4 on consumer hardware are using the &lt;strong&gt;26B MoE variant&lt;/strong&gt; — because it's faster and fits in less VRAM.&lt;/p&gt;

&lt;p&gt;That model scores &lt;strong&gt;68%&lt;/strong&gt; on the same agentic benchmark. Qwen's comparable variant scores 81%.&lt;/p&gt;

&lt;p&gt;In a 15-step workflow, a 68% tool-call success rate isn't a stat. It's a guarantee of failure somewhere in the middle.&lt;/p&gt;
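&lt;p&gt;The arithmetic behind that claim, assuming each tool call fails independently (a simplification, but it shows the shape of the problem):&lt;/p&gt;

```python
# Chance an agent survives a multi-step workflow, assuming each
# tool call succeeds independently (a simplifying assumption).
def workflow_success_prob(per_call_rate, steps):
    return per_call_rate ** steps

# 68% per-call success over 15 steps: roughly a 0.3% chance of
# getting through with zero failed calls.
print(f"{workflow_success_prob(0.68, 15):.4f}")
```

&lt;p&gt;By the same math, even the 31B's 86.4% survives a 15-step run with zero failed calls only about 11% of the time, which is why recovery behavior matters more than the headline score.&lt;/p&gt;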

&lt;p&gt;I ran a test that made this painfully obvious. I asked Gemma 4 to scaffold a SvelteKit project — search the web for the latest setup command, then build it. Simple, explicit instructions. It said it couldn't access the web. I pointed it at a GitHub link. It said it couldn't open URLs. I pointed it at an MCP tool that was &lt;strong&gt;literally already connected and listed in its available tools.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It asked me for clarification instead of using the tool.&lt;/p&gt;

&lt;p&gt;The same prompt, sent to a different model, returned a web search, a shell command, and a running project. No hand-holding needed.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(The side-by-side screenshots from the developer community testing this are wild — you can see both responses in our forum thread linked below. One says "I can't access that." The other just builds the thing.)&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  There's Hidden Performance That Hasn't Shipped Yet
&lt;/h2&gt;

&lt;p&gt;This is where it gets interesting.&lt;/p&gt;

&lt;p&gt;Some researchers digging through Gemma 4's model weights found &lt;strong&gt;undocumented multi-token prediction heads&lt;/strong&gt; baked into the architecture. Speculative decoding, essentially — just not officially enabled. Google confirmed the finding but said it's "not yet officially supported."&lt;/p&gt;

&lt;p&gt;So there's raw performance sitting in the model that nobody can use yet.&lt;/p&gt;

&lt;p&gt;If they release proper MTP support alongside tool-calling fixes in a point release, the numbers improve significantly. And that's before mentioning the Ollama bugs that are tripping people up on Apple Silicon right now — a streaming bug that routes tool-call responses to the wrong field, and a Flash Attention freeze on prompts over 500 tokens.&lt;/p&gt;

&lt;p&gt;These are fixable. The question is when.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Claude Code Actually Does That's Hard to Replace
&lt;/h2&gt;

&lt;p&gt;People frame this as a capability question. It's not. It's a reliability question.&lt;/p&gt;

&lt;p&gt;Claude Code isn't the best model on any single benchmark. What it does is boring things consistently — reads your files, understands what's already in your codebase, calls tools in the right sequence, and doesn't lose the thread halfway through a 50-step workflow.&lt;/p&gt;

&lt;p&gt;One developer I spoke with said it best: working with Gemma 4 on an existing codebase felt like pairing with a smart contractor who refuses to read the existing code. Technically correct suggestions, all of them wrong for &lt;em&gt;this&lt;/em&gt; project.&lt;/p&gt;

&lt;p&gt;That's not a parameter count problem. It's a training problem. And it's hard to fix.&lt;/p&gt;




&lt;h2&gt;
  
  
  So What Actually Happens to the Subscription Economy?
&lt;/h2&gt;

&lt;p&gt;Here's my actual prediction, not the polite one:&lt;/p&gt;

&lt;p&gt;Gemma 4 eats ChatGPT's casual usage hard. If you're using a subscription for summarizing stuff, answering questions, writing emails — Gemma 4 running locally is free and nearly as good. That category is gone.&lt;/p&gt;

&lt;p&gt;But agentic coding is a different product. Maintaining context across 50 files, calling 15 tools in sequence, not hallucinating paths at 2am when you're exhausted — that's not what benchmarks test, and it's not what Gemma 4 is reliably doing yet.&lt;/p&gt;

&lt;p&gt;The smart move right now is probably hybrid: &lt;strong&gt;Gemma 4 for the 80% of tasks that are fast and straightforward. Claude Code for the 20% that need depth and reliability.&lt;/strong&gt; Especially for enterprises with data residency requirements — Apache 2.0 means you can finally run a capable model entirely on your own infrastructure.&lt;/p&gt;

&lt;p&gt;The "one model" framing is wrong. It's becoming a stack.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Conversation Developers Are Actually Having
&lt;/h2&gt;

&lt;p&gt;I wrote a longer version of this take in our community forum and the responses were more honest than anything I've seen in a formal comparison. Real developers. Real workflows. Real failure modes.&lt;/p&gt;

&lt;p&gt;One person tried switching for a weekend and lasted 4 hours. Another found the MTP heads in the weights. Another showed the side-by-side screenshots of Gemma 4 refusing to use available tools while a different model just executed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That thread is here → &lt;a href="https://webmatrices.com/post/will-gemma-4-actually-replace-claude-code-or-are-we-lying-to-ourselves-again" rel="noopener noreferrer"&gt;webmatrices.com/post/will-gemma-4-actually-replace-claude-code&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Worth reading if you're thinking about making this switch — or if you already tried and want to compare notes.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Gemma 3 couldn't have this conversation. Gemma 4 makes it real. Gemma 5 might actually answer it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Have you wired Gemma 4 into a real dev workflow and tried to ship something? Not chat — actual coding work. Drop your experience in the comments or the forum thread. The honest data is in the field reports, not the leaderboard.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>webdev</category>
    </item>
    <item>
      <title>YouTube Mass Unsubscribe: My Brother Vibe Coded It in Class 11, I Refined It for Production</title>
      <dc:creator>Bishwas Bhandari</dc:creator>
      <pubDate>Mon, 23 Feb 2026 05:28:31 +0000</pubDate>
      <link>https://forem.com/developerbishwas/youtube-mass-unsubscribe-my-brother-vibe-coded-it-in-class-11-i-refined-it-for-production-2g22</link>
      <guid>https://forem.com/developerbishwas/youtube-mass-unsubscribe-my-brother-vibe-coded-it-in-class-11-i-refined-it-for-production-2g22</guid>
      <description>&lt;p&gt;My brother is 17 and in Class 11. He doesn't know what a REST endpoint is. He's never opened DevTools on purpose. Last month he built a Chrome extension that unsubscribes you from YouTube channels in bulk.&lt;/p&gt;

&lt;p&gt;And it worked. It actually worked. He wanted to nuke his entire subscription list. 300+ channels accumulated over years of binge-watching gaming walkthroughs, tech reviews, and random stuff the algorithm tricked him into subscribing to at 2am. His feed had become completely unusable. Every time he opened YouTube, it was wall-to-wall content from channels he didn't even remember subscribing to.&lt;/p&gt;

&lt;p&gt;YouTube's solution? Click a channel. Click unsubscribe. Confirm. Go back. Scroll. Repeat. For 300 channels, that's close to a thousand clicks and a couple of hours of mind-numbing repetition. He wasn't going to do that. So he decided to build something instead.&lt;/p&gt;

&lt;p&gt;He opened Claude Code, described what he wanted in plain English, and started prompting his way through a Chrome extension from scratch. No tutorial. No roadmap. No "learn Chrome Extensions in 30 days" course. Just a kid with a problem and an AI editor willing to help him figure it out.&lt;/p&gt;

&lt;p&gt;That's vibe coding. And I think it represents something genuinely new.&lt;/p&gt;

&lt;h2&gt;
  
  
  The DOM Clicking Approach
&lt;/h2&gt;

&lt;p&gt;His first version required you to visit &lt;code&gt;youtube.com/feed/channels&lt;/code&gt;, YouTube's subscription management page that lists every channel you're subscribed to. From there, the extension would scrape each channel element off the page, find the unsubscribe button, click it programmatically, confirm the modal, wait for the page to update, then move on to the next one. One channel at a time, clicking through the DOM like a very patient robot.&lt;/p&gt;

&lt;p&gt;It was slow. It was fragile. YouTube changes their DOM structure between A/B tests and his selectors would break randomly. Sometimes the confirmation modal wouldn't appear fast enough and the script would click behind it. Sometimes YouTube's lazy loading wouldn't trigger and the extension would run out of channels to process halfway down the page.&lt;/p&gt;

&lt;p&gt;But here's the thing. None of that mattered. He wasn't building a product. He was solving a problem. And the DOM clicking approach, janky as it was, successfully unsubscribed him from 200+ channels in about 40 minutes. Compare that to the 2+ hours of manual clicking he was facing.&lt;/p&gt;

&lt;p&gt;I'm genuinely proud of him for this. A 17-year-old in Class 11 with no formal programming background shipped a functional Chrome extension in an afternoon. Not a to-do app. Not a calculator. A browser extension that scrapes and interacts with one of the most complex web applications on the planet. He figured out content scripts, manifest permissions, DOM traversal, async timing. Concepts that trip up junior developers. The AI helped him through each wall as he hit it. That's the part people miss when they dismiss vibe coding.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Vibe Coding Actually Enables
&lt;/h2&gt;

&lt;p&gt;The standard criticism goes something like this: vibe coding produces bad code, people don't understand what they're building, and the result is unmaintainable garbage.&lt;/p&gt;

&lt;p&gt;All true. Also completely beside the point.&lt;/p&gt;

&lt;p&gt;My brother's DOM-clicking extension was objectively bad code. Hardcoded selectors, no error handling, sequential awaits with arbitrary timeout values, zero separation of concerns. If you ran it through a code review it would get rejected before the reviewer finished their coffee.&lt;/p&gt;

&lt;p&gt;But it shifted his relationship with software from consumer to creator. He went from "YouTube doesn't have this feature so I guess I'm stuck" to "YouTube doesn't have this feature so I'll build it myself." That mental shift is worth more than any CS fundamentals course.&lt;/p&gt;

&lt;p&gt;Vibe coding didn't teach him to program. It taught him that programming is an option. There's a massive difference, and I think it's the most important thing happening in tech education right now.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Refinement Gap
&lt;/h2&gt;

&lt;p&gt;After watching his extension click through DOM elements for 40 minutes, I spent an evening rewriting it. Not because his approach was wrong, but because I knew there was a faster path.&lt;/p&gt;

&lt;p&gt;YouTube's frontend talks to internal API endpoints for everything. Loading subscriptions, managing them, unsubscribing. The same endpoints the website hits when you manually click that unsubscribe button. Instead of simulating clicks on DOM elements, you can call these endpoints directly from the extension using the session token from the active YouTube tab.&lt;/p&gt;

&lt;p&gt;The difference is night and day. The DOM approach: scroll, find element, click, wait for modal, confirm, wait for response, scroll again. The endpoint approach: fire an authenticated POST request. No clicking, no scrolling, no modals, no waiting for DOM elements to render.&lt;/p&gt;

&lt;p&gt;Scanning 500+ subscriptions went from "scroll for five minutes while the page lazy-loads" to a single paginated API call that returns everything in seconds. Unsubscribing went from 40 minutes of automated clicking to a few minutes of sequential API requests with a configurable delay to avoid rate limits.&lt;/p&gt;
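&lt;p&gt;The extension itself is JavaScript, but the core of the endpoint approach is a small loop. A sketch in Python with the transport injected, so nothing here pretends to be YouTube's actual API:&lt;/p&gt;

```python
import time

def unsubscribe_all(channel_ids, send_unsubscribe, delay=1.0):
    """Fire one request per channel ID, sequentially, sleeping
    between calls to stay under rate limits. `send_unsubscribe` is
    the transport -- injected here so the sketch stays runnable."""
    results = {}
    for channel_id in channel_ids:
        results[channel_id] = send_unsubscribe(channel_id)
        time.sleep(delay)
    return results
```

&lt;p&gt;In the shipped extension the transport is an authenticated POST to YouTube's internal endpoints from the active tab; the sequential loop and the configurable delay are the part that carries over.&lt;/p&gt;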

&lt;p&gt;This is where the vibe coding conversation gets interesting. My brother built the proof of concept. I built the production version. Both were necessary. Without his janky DOM-clicking prototype, I wouldn't have spent an evening on this at all. The vibe-coded version validated the idea. The refined version made it reliable.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 12x Cost Problem, Revisited
&lt;/h2&gt;

&lt;p&gt;I've &lt;a href="https://webmatrices.com/post/vibe-coding-has-a-12x-cost-problem-maintainers-are-done" rel="noopener noreferrer"&gt;written before&lt;/a&gt; about how vibe coding carries hidden costs. Code that works but can't scale, technical debt that compounds, solutions that break under edge cases the AI never considered.&lt;/p&gt;

&lt;p&gt;This extension is a perfect case study. The DOM version worked but had real problems. Channels with identical names confused the selector logic. Session expiry mid-process caused silent failures. YouTube's A/B testing meant the extension might work on Monday and break on Wednesday.&lt;/p&gt;

&lt;p&gt;The endpoint version handles all of this cleanly. Channel IDs instead of display names. Auth token validation before each request. No dependency on DOM structure at all.&lt;/p&gt;

&lt;p&gt;But here's where I've softened my stance: the cost problem matters for production software. For personal tools, prototypes, and "I just need this to work once" utilities? Vibe coding is genuinely magical. A 17-year-old solved a real problem in an afternoon. The code quality was irrelevant because the code's job was to run once and save him two hours of clicking.&lt;/p&gt;

&lt;p&gt;The mistake isn't vibe coding. The mistake is not knowing when vibe coding is enough and when it needs a human with context to step in and refine.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's In The Extension Now
&lt;/h2&gt;

&lt;p&gt;The production version has the stuff you'd expect from a proper tool:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One-click scanning of all subscriptions via API&lt;/li&gt;
&lt;li&gt;A searchable list with avatars and subscriber counts&lt;/li&gt;
&lt;li&gt;Select/deselect all&lt;/li&gt;
&lt;li&gt;CSV export for backup (critical, since YouTube has no bulk re-subscribe)&lt;/li&gt;
&lt;li&gt;Configurable delay between unsubscribes&lt;/li&gt;
&lt;li&gt;Live progress tracking per channel&lt;/li&gt;
&lt;li&gt;A stop button to pause mid-process&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It runs 100% locally. No data collection, no background processes, no server calls. The extension talks to YouTube through your authenticated tab and nothing else.&lt;/p&gt;

&lt;p&gt;If your subscription list has gotten out of hand, it's free on the Chrome Web Store: &lt;a href="https://chromewebstore.google.com/detail/youtube-mass-unsubscribe/megojfaohpcepgfgjfnoepgalkelidpl" rel="noopener noreferrer"&gt;YouTube Mass Unsubscribe&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bigger Point
&lt;/h2&gt;

&lt;p&gt;My brother doesn't call himself a developer. He probably never will. But he now understands, at a gut level, that the software on his screen isn't magic. It's just code that someone wrote, and with the right tools, he can write it too.&lt;/p&gt;

&lt;p&gt;That's not a small thing. An entire generation is growing up with AI editors that translate intent into working software. Most of what they build will be rough. Some of it will be broken. Almost none of it will be production-quality.&lt;/p&gt;

&lt;p&gt;And it won't matter, my brother in christ. Because the gap between "I have an idea" and "I have a working prototype" just collapsed from months to hours. The refinement can come later. From them as they learn, or from someone else who sees the potential in their janky, beautiful, vibe-coded proof of concept.&lt;/p&gt;

&lt;p&gt;That's what happened here. A kid built something imperfect, and it became something real.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built by Binay Bhandari and refined by Bishwas Bhandari at &lt;a href="https://webmatrices.com" rel="noopener noreferrer"&gt;Webmatrices&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>javascript</category>
    </item>
    <item>
      <title>I tracked 20 Reddit posts across 14 subreddits. Here's what actually drove traffic.</title>
      <dc:creator>Bishwas Bhandari</dc:creator>
      <pubDate>Sat, 31 Jan 2026 15:34:50 +0000</pubDate>
      <link>https://forem.com/developerbishwas/i-tracked-20-reddit-posts-across-14-subreddits-heres-what-actually-drove-traffic-2a27</link>
      <guid>https://forem.com/developerbishwas/i-tracked-20-reddit-posts-across-14-subreddits-heres-what-actually-drove-traffic-2a27</guid>
      <description>&lt;p&gt;Last month I ran an experiment. Posted 20 times across 14 subreddits to see what actually works for driving traffic to technical content.&lt;/p&gt;

&lt;p&gt;One post hit 226,000 views. Another got 0 upvotes and 44 hostile comments.&lt;/p&gt;

&lt;p&gt;Same person. Same week. Here's what made the difference.&lt;/p&gt;

&lt;h2&gt;
  
  
  The numbers
&lt;/h2&gt;

&lt;p&gt;From Reddit alone, in one week:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;100,000+ views&lt;/li&gt;
&lt;li&gt;15,595+ readers&lt;/li&gt;
&lt;li&gt;1,700 hours of read time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Best post: 600 upvotes, 94.5% approval, 120 comments on r/webdev.&lt;/p&gt;

&lt;p&gt;Worst post: 0 upvotes, 35% approval, 44 comments telling me I was wrong.&lt;/p&gt;

&lt;h2&gt;
  
  
  The one thing that mattered most
&lt;/h2&gt;

&lt;p&gt;Third-person framing.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Framing&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;th&gt;Result&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Third-person&lt;/td&gt;
&lt;td&gt;"someone actually calculated..."&lt;/td&gt;
&lt;td&gt;600 upvotes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;First-person&lt;/td&gt;
&lt;td&gt;"I made a contrarian analysis site"&lt;/td&gt;
&lt;td&gt;1 upvote&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Same content quality. 600x difference in results.&lt;/p&gt;

&lt;p&gt;Reddit trusts discoveries. Reddit distrusts self-promotion.&lt;/p&gt;

&lt;p&gt;"Found this breakdown" beats "I wrote this breakdown" every single time.&lt;/p&gt;

&lt;h2&gt;
  
  
  The post that hit 226K views
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Title:&lt;/strong&gt; "someone actually calculated the time cost of reviewing AI-generated PRs. the ratio is brutal"&lt;/p&gt;

&lt;p&gt;Why it worked:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Third-person frame ("someone calculated" not "I calculated")&lt;/li&gt;
&lt;li&gt;Specific, brutal claim in the title, backed by a concrete number (the "12x" ratio) in the post&lt;/li&gt;
&lt;li&gt;Lowercase, casual tone&lt;/li&gt;
&lt;li&gt;Ended with a genuine question&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The top comment got 156 upvotes: "We need an AI that automatically closes vibe coded PRs.. let them fight"&lt;/p&gt;

&lt;p&gt;When your top comments are jokes agreeing with you, you've won.&lt;/p&gt;

&lt;h2&gt;
  
  
  The post that got destroyed
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Title:&lt;/strong&gt; "why would a company pay $1,500/year for SaaS when a dev can build it custom for $500?"&lt;/p&gt;

&lt;p&gt;Posted to r/Entrepreneur. Business owners, not developers.&lt;/p&gt;

&lt;p&gt;They didn't care about cost savings. They cared about reliability. I was speaking dev language to business people.&lt;/p&gt;

&lt;p&gt;44 comments. All hostile. 0 upvotes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Same article, different subreddits
&lt;/h2&gt;

&lt;p&gt;This surprised me most.&lt;/p&gt;

&lt;p&gt;Same "$599 Mac Mini" article:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Subreddit&lt;/th&gt;
&lt;th&gt;Upvotes&lt;/th&gt;
&lt;th&gt;Approval&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;r/ArtificialInteligence&lt;/td&gt;
&lt;td&gt;412&lt;/td&gt;
&lt;td&gt;81%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;r/vibecoding&lt;/td&gt;
&lt;td&gt;148&lt;/td&gt;
&lt;td&gt;69%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;264-point difference. 12% approval gap. Same content.&lt;/p&gt;

&lt;p&gt;r/ArtificialInteligence wanted debate. r/vibecoding was tired of hype.&lt;/p&gt;

&lt;p&gt;Subreddit culture matters more than content quality.&lt;/p&gt;

&lt;h2&gt;
  
  
  The AI-writing disaster
&lt;/h2&gt;

&lt;p&gt;r/programming caught me.&lt;/p&gt;

&lt;p&gt;Top comment, 120 upvotes: "God I hate reading all these LLM-written blog posts"&lt;/p&gt;

&lt;p&gt;What they noticed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Em dashes (—) everywhere&lt;/li&gt;
&lt;li&gt;Short, choppy sentences&lt;/li&gt;
&lt;li&gt;Repetition of the same points&lt;/li&gt;
&lt;li&gt;Too much structure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you use AI assistance, r/programming will find out. And they will make it the top comment.&lt;/p&gt;

&lt;h2&gt;
  
  
  The formula
&lt;/h2&gt;

&lt;p&gt;After 20 posts, this is what consistently worked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SPECIFIC NUMBER + THIRD-PERSON FRAME + GENUINE QUESTION = ENGAGEMENT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And this is what consistently failed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FIRST-PERSON + PROMOTIONAL TONE + WRONG AUDIENCE = DEATH
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The 97% approval trick
&lt;/h2&gt;

&lt;p&gt;One post hit 97% approval. Highest ever.&lt;/p&gt;

&lt;p&gt;What I did differently: gave free, detailed help in the comments.&lt;/p&gt;

&lt;p&gt;Someone asked for feedback. Instead of saying "check my article," I spent 500 words auditing their site. Found broken code, inconsistent info, specific issues.&lt;/p&gt;

&lt;p&gt;That comment got upvoted. Other people asked for help. I helped them too.&lt;/p&gt;

&lt;p&gt;Free value in comments builds more trust than the post itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Third-person framing&lt;/strong&gt; — "found this" beats "I made this"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Specific numbers&lt;/strong&gt; — "12x ratio" beats "takes longer"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Know your subreddit&lt;/strong&gt; — same content, wildly different results&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Genuine questions&lt;/strong&gt; — invites comments instead of judgment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Free value in comments&lt;/strong&gt; — the real trust builder&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I documented the whole thing with screenshots, the disasters, and the exact patterns that worked. If you want the full breakdown: &lt;a href="https://webmatrices.com/playbooks/the-subtle-art-of-reddit-marketing" rel="noopener noreferrer"&gt;The Subtle Art of Reddit Marketing&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or just take the formula above and test it yourself. The third-person framing alone changed everything for me.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>marketing</category>
      <category>beginners</category>
      <category>productivity</category>
    </item>
    <item>
      <title>I Analyzed 50+ SaaS Shutdowns. They All Made The Same Mistake.</title>
      <dc:creator>Bishwas Bhandari</dc:creator>
      <pubDate>Fri, 30 Jan 2026 13:24:36 +0000</pubDate>
      <link>https://forem.com/developerbishwas/i-analyzed-50-saas-shutdowns-they-all-made-the-same-mistake-eap</link>
      <guid>https://forem.com/developerbishwas/i-analyzed-50-saas-shutdowns-they-all-made-the-same-mistake-eap</guid>
      <description>&lt;p&gt;Spent the last few months reading shutdown posts. Not the advice posts. The actual failures.&lt;/p&gt;

&lt;p&gt;The pattern was always the same. And it wasn't what I expected.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers That Broke Me
&lt;/h2&gt;

&lt;p&gt;One founder got 2,600 free users in four months. Zero paying customers.&lt;/p&gt;

&lt;p&gt;Another validated for two years. Talked to companies. They all said they'd pay. Zero paying customers.&lt;/p&gt;

&lt;p&gt;20,000 users. 3,586 websites created. Shut down anyway.&lt;/p&gt;

&lt;p&gt;These founders didn't lack effort. They didn't build bad products. They validated the wrong thing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stated Preference vs Revealed Preference
&lt;/h2&gt;

&lt;p&gt;There's a concept in behavioral economics that explains everything.&lt;/p&gt;

&lt;p&gt;Stated preference is what people say they'll do. Revealed preference is what they actually do.&lt;/p&gt;

&lt;p&gt;When you ask "would you pay for this?" you're asking someone to simulate a future version of themselves making a purchasing decision. That simulation is wrong almost every time.&lt;/p&gt;

&lt;p&gt;One founder ran 300 beta tests. Video calls. Excited users. Tons of feedback. Launch day came. Nobody converted.&lt;/p&gt;

&lt;p&gt;His reflection was brutal. Interest does not equal willingness to pay.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Vanity Metrics Factory
&lt;/h2&gt;

&lt;p&gt;One founder described their free trial as a "vanity metrics factory." Free trials attract tire kickers, students doing research, and competitors poking around. The dashboard shows growth. The bank account shows nothing.&lt;/p&gt;

&lt;p&gt;This founder killed their free trial. Everyone told them it was suicide. Signups dropped 70%.&lt;/p&gt;

&lt;p&gt;Revenue went up 40%.&lt;/p&gt;

&lt;p&gt;The free trial was filtering IN the wrong people. Removing it filtered them OUT. Fewer users. More customers.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Question That Changes Everything
&lt;/h2&gt;

&lt;p&gt;After reading all these shutdown posts I noticed one question that separated winners from losers.&lt;/p&gt;

&lt;p&gt;"What are you doing about this problem right now?"&lt;/p&gt;

&lt;p&gt;Listen to the answer.&lt;/p&gt;

&lt;p&gt;"Nothing, its fine." Not a real problem.&lt;/p&gt;

&lt;p&gt;"We complain about it sometimes." Awareness without action.&lt;/p&gt;

&lt;p&gt;"We have a spreadsheet that kind of works." Workaround. Good sign.&lt;/p&gt;

&lt;p&gt;"We hired a contractor to handle it manually." Spending money. Great sign.&lt;/p&gt;

&lt;p&gt;"We use [competitor] but it sucks." Active buyer. Best sign.&lt;/p&gt;

&lt;p&gt;The goal isn't finding people who acknowledge the problem. It's finding people already spending time or money to solve it imperfectly. Those people will pay you. Everyone else is being polite.&lt;/p&gt;

&lt;h2&gt;
  
  
  The $1 Filter
&lt;/h2&gt;

&lt;p&gt;The only real validation is a transaction. Not "I would pay." Not "sounds interesting." Money.&lt;/p&gt;

&lt;p&gt;One founder added a $1 paywall to their free trial. Conversion rate went from 3% to 41%.&lt;/p&gt;

&lt;p&gt;The $1 wasn't about revenue. It was about filtering. Anyone willing to pay $1 is categorically different from someone who isn't.&lt;/p&gt;

&lt;p&gt;This feels wrong to most developers. We want to build first, charge later. But products don't speak. Customers do. And they speak loudest with their wallets.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pain Hierarchy
&lt;/h2&gt;

&lt;p&gt;Not all problems are equal. Knowing which ones people will actually pay to solve before you build is everything.&lt;/p&gt;

&lt;p&gt;Are they aware of the problem? Are they actively searching for solutions? Do they have existing workarounds? Are they already spending money on something? Is there built-in urgency? Who controls the budget?&lt;/p&gt;

&lt;p&gt;Most founders validate the first two levels and call it done. The money is in levels three through six.&lt;/p&gt;

&lt;h2&gt;
  
  
  Vitamins vs Painkillers
&lt;/h2&gt;

&lt;p&gt;Most failed SaaS products are vitamins marketed as painkillers.&lt;/p&gt;

&lt;p&gt;The test is simple. If your product disappeared tomorrow, would customers notice within 24 hours? Would they actively seek a replacement? Would they pay more to get it back?&lt;/p&gt;

&lt;p&gt;If the answer is no, you built a vitamin.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Works
&lt;/h2&gt;

&lt;p&gt;The pattern of successful founders was clear.&lt;/p&gt;

&lt;p&gt;They charged early, often before the product was ready. They filtered aggressively, losing tire kickers on purpose. They found people with workarounds, not just people with problems. They talked to budget owners, not just pain experiencers. They built distribution before product.&lt;/p&gt;

&lt;p&gt;They treated signups as vanity. Only revenue counted.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Uncomfortable Truth
&lt;/h2&gt;

&lt;p&gt;If you're reading this and feeling uncomfortable, good.&lt;/p&gt;

&lt;p&gt;The founders who posted these shutdown stories all had skills to build great products. They all worked hard. They all "validated."&lt;/p&gt;

&lt;p&gt;They all failed because they validated the wrong thing.&lt;/p&gt;




&lt;p&gt;I wrote a full playbook on this. Analyzed 50+ real shutdowns, extracted the patterns, built frameworks you can actually use.&lt;/p&gt;

&lt;p&gt;Chapter one is free. Covers the validation lie with more data and case studies.&lt;/p&gt;

&lt;p&gt;Chapters two through four cover the exact playbooks for filtering buyers, identifying real pain, and building distribution before product.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://webmatrices.com/playbooks/the-saas-validation-playbook" rel="noopener noreferrer"&gt;The SaaS Validation Playbook&lt;/a&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>founder</category>
      <category>saas</category>
      <category>startup</category>
    </item>
    <item>
      <title>Reddit to LLM.txt</title>
      <dc:creator>Bishwas Bhandari</dc:creator>
      <pubDate>Wed, 21 Jan 2026 14:00:42 +0000</pubDate>
      <link>https://forem.com/developerbishwas/reddit-to-llmtxt-21j7</link>
      <guid>https://forem.com/developerbishwas/reddit-to-llmtxt-21j7</guid>
      <description>&lt;p&gt;You found it. The perfect Reddit thread. 300+ comments of exactly the conversation you need for your AI project.&lt;/p&gt;

&lt;p&gt;You start copy-pasting. Five minutes in you realize you missed half the nested replies. You go back. You're trying to figure out which comment was responding to which. An hour later you have a document full of text with no structure and no idea who said what or when.&lt;/p&gt;

&lt;p&gt;This is the problem nobody talks about.&lt;/p&gt;

&lt;p&gt;Reddit is the best training data on the internet. Real conversations with actual nuance. But every way to extract it is broken. Manual copy-paste doesn't scale. Scrapers require a CS degree to configure. The API means writing code and dealing with rate limits. And even when you get the text out, it's just a wall of words. No hierarchy. Your model can't learn conversation structure from that.&lt;/p&gt;

&lt;p&gt;Reddit to LLM fixes this. One click and you get a clean markdown file. The full post. Every comment including the nested ones. Proper hierarchy so your LLM can actually understand who was replying to whom. All the metadata. When you feed this to GPT or Claude or your fine-tuned model, it can parse the conversation structure instead of just seeing words in sequence.&lt;/p&gt;

&lt;p&gt;Everything runs in your browser. The extension reads Reddit's public API—the same data you'd see if you added .json to any URL. Nothing goes through our servers. Your data stays yours.&lt;/p&gt;
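&lt;p&gt;If you want to see roughly what "proper hierarchy" means here, this is a minimal Python sketch (my illustration, not the extension's actual source) that walks the nested &lt;code&gt;data&lt;/code&gt;/&lt;code&gt;replies&lt;/code&gt; shape you get when you append .json to a thread URL and emits indented markdown:&lt;/p&gt;

```python
# Minimal sketch (not the extension's actual code): flatten a
# Reddit-style nested comment tree into markdown while preserving
# the reply hierarchy. The dict shape mirrors the "data"/"replies"
# nesting in Reddit's public .json output.

def comment_to_md(comment, depth=0):
    """Render one comment and its nested replies as indented markdown."""
    data = comment["data"]
    indent = "  " * depth
    lines = [f"{indent}- **u/{data['author']}**: {data['body']}"]
    replies = data.get("replies")
    if replies:  # Reddit uses an empty string when there are no replies
        for child in replies["data"]["children"]:
            lines.extend(comment_to_md(child, depth + 1))
    return lines

def thread_to_md(title, comments):
    """Render a whole thread: title heading, then the comment tree."""
    lines = [f"# {title}", ""]
    for comment in comments:
        lines.extend(comment_to_md(comment))
    return "\n".join(lines)
```

&lt;p&gt;Each nesting level becomes one more indent, so a model reading the markdown can still tell who replied to whom.&lt;/p&gt;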

&lt;p&gt;Built for AI researchers creating training datasets. Developers fine-tuning on specific communities. Anyone who's ever thought "I wish I could just download this entire thread" and then spent an hour copy-pasting.&lt;/p&gt;

&lt;p&gt;This is exclusive to webmatrices members.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://webmatrices.com/reddit-to-llm" rel="noopener noreferrer"&gt;/reddit-to-llm&lt;/a&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>llm</category>
      <category>productivity</category>
    </item>
    <item>
      <title>"taste scales. slop doesn't." — best breakdown i've seen on AI coding economics</title>
      <dc:creator>Bishwas Bhandari</dc:creator>
      <pubDate>Mon, 19 Jan 2026 07:56:37 +0000</pubDate>
      <link>https://forem.com/developerbishwas/taste-scales-slop-doesnt-best-breakdown-ive-seen-on-ai-coding-economics-p09</link>
      <guid>https://forem.com/developerbishwas/taste-scales-slop-doesnt-best-breakdown-ive-seen-on-ai-coding-economics-p09</guid>
      <description>&lt;p&gt;25-35% of new code in large organizations is now AI-assisted.&lt;br&gt;
Everyone's shipping faster.&lt;br&gt;
But someone has to review that code. Maintain it. Debug it at 3am.&lt;br&gt;
Those people are burning out.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Math
&lt;/h2&gt;

&lt;p&gt;I tracked a typical AI-generated pull request:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Contributor time: 7 minutes&lt;/li&gt;
&lt;li&gt;Maintainer time: 85 minutes&lt;/li&gt;
&lt;li&gt;Ratio: 12x&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And when you request changes? They feed your feedback to the AI and regenerate the whole thing. You're reviewing from scratch.&lt;br&gt;
One maintainer told me he's stopped reviewing PRs entirely: "I can't tell anymore which ones are real contributions and which are someone farming GitHub activity for their LinkedIn."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Security Problem
&lt;/h2&gt;

&lt;p&gt;Radware's threat intelligence team analyzed 500,000 code samples and found "synthetic vulnerabilities" — security flaws unique to AI-generated code.&lt;/p&gt;

&lt;p&gt;Key findings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI errors are disproportionately high-severity (injection, auth bypass)&lt;/li&gt;
&lt;li&gt;"Hallucinated abstractions" — AI invents fake helper functions that look professional but are broken&lt;/li&gt;
&lt;li&gt;"Slopsquatting" — attackers register hallucinated package names with malicious payloads&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What This Means for Hiring
&lt;/h2&gt;

&lt;p&gt;New interview question: "Walk me through a bug you personally debugged in this code."&lt;/p&gt;

&lt;p&gt;If they can't explain trade-offs, they didn't write it.&lt;/p&gt;

&lt;p&gt;The developers who thrive won't be the ones who generate the most code.&lt;br&gt;
They'll be the ones who can tell the difference between code that compiles and code that belongs.&lt;br&gt;
Taste scales. Slop doesn't.&lt;/p&gt;

&lt;p&gt;Full breakdown in comments 👇&lt;/p&gt;

</description>
      <category>ai</category>
      <category>vibecoding</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Developers killed Tailwind. Not AI.</title>
      <dc:creator>Bishwas Bhandari</dc:creator>
      <pubDate>Wed, 14 Jan 2026 05:28:39 +0000</pubDate>
      <link>https://forem.com/developerbishwas/developers-killed-tailwind-not-ai-9dd</link>
      <guid>https://forem.com/developerbishwas/developers-killed-tailwind-not-ai-9dd</guid>
      <description>&lt;p&gt;We just don't want to admit it.&lt;/p&gt;

&lt;p&gt;AI is the easy scapegoat.&lt;br&gt;
"LLMs bypass the docs."&lt;br&gt;
"Cursor generates Tailwind without asking."&lt;br&gt;
"Nobody visits the website anymore."&lt;br&gt;
Sure. All true.&lt;br&gt;
But when was the last time you paid for something you could get for free?&lt;br&gt;
Exactly.&lt;/p&gt;

&lt;p&gt;Tailwind has 75 million downloads a month.&lt;br&gt;
They just fired 75% of their team.&lt;br&gt;
→ npm downloads: All-time high&lt;br&gt;
→ Revenue: Down 80%&lt;br&gt;
→ Runway: 6 months left&lt;br&gt;
Most popular CSS framework in history.&lt;br&gt;
Nearly bankrupt.&lt;br&gt;
Popularity is not a business model.&lt;/p&gt;

&lt;p&gt;Shadcn gave away the same components for free.&lt;br&gt;
Junior devs copy-pasted from CodePen for years.&lt;br&gt;
The "premium templates" moat was made of paper.&lt;br&gt;
AI didn't kill the business.&lt;br&gt;
AI just automated what we were already doing for free.&lt;/p&gt;

&lt;p&gt;And here's what really gets me:&lt;br&gt;
$1M+/year in sponsorships.&lt;br&gt;
Still struggling.&lt;br&gt;
Blender — the entire 3D software industry's backbone — runs on $3M/year.&lt;br&gt;
Tailwind needed a third of that to maintain... shorter class names?&lt;br&gt;
Something doesn't add up.&lt;/p&gt;

&lt;p&gt;They were hiring engineers at $275k.&lt;br&gt;
For a CSS library.&lt;br&gt;
That's not a sustainability problem.&lt;br&gt;
That's a spending problem.&lt;/p&gt;

&lt;p&gt;Now Google and Vercel suddenly show up with sponsorships.&lt;br&gt;
Where were they 6 months ago when the runway was burning?&lt;br&gt;
They didn't show up to save open source.&lt;br&gt;
They showed up for the PR headline.&lt;/p&gt;

&lt;p&gt;The math is brutal:&lt;br&gt;
Usage ≠ Revenue.&lt;br&gt;
Downloads ≠ Dollars.&lt;br&gt;
npm installs ≠ Survival.&lt;br&gt;
You can power half the internet and still go broke.&lt;/p&gt;

&lt;p&gt;Stack Overflow is dying.&lt;br&gt;
Tailwind almost died.&lt;br&gt;
Dev blogs are ghost towns.&lt;br&gt;
Any business that needs you to visit a website to make money?&lt;br&gt;
Already dead. AI just signed the death certificate.&lt;/p&gt;

&lt;p&gt;I use Tailwind every day.&lt;br&gt;
I let AI write half of it.&lt;br&gt;
I've never paid Tailwind a single rupee.&lt;br&gt;
Neither have you.&lt;/p&gt;

&lt;p&gt;So let's stop blaming AI.&lt;br&gt;
The mirror is right there.&lt;/p&gt;

&lt;p&gt;Who really killed Tailwind — the robots, or the culture that wants everything free?&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>tailwindcss</category>
    </item>
    <item>
      <title>AI-generated code is just slop!</title>
      <dc:creator>Bishwas Bhandari</dc:creator>
      <pubDate>Tue, 11 Nov 2025 11:34:41 +0000</pubDate>
      <link>https://forem.com/developerbishwas/ai-generated-code-is-just-slop-2ph6</link>
      <guid>https://forem.com/developerbishwas/ai-generated-code-is-just-slop-2ph6</guid>
      <description>&lt;p&gt;SHlT, I am writing this...&lt;/p&gt;

&lt;p&gt;AI coding is so fking frustrating. There's literally no life in it.&lt;br&gt;
AI can't replace a human coder. Can't even replace a junior dev.&lt;br&gt;
Yeah yeah, I know - some dev sits there instructing the AI to write code and it kinda works. But here's what nobody wants to talk about:&lt;br&gt;
Who's taking responsibility when shit breaks? Who's guaranteeing that code actually works? Not the AI. You are.&lt;/p&gt;

&lt;p&gt;The AI-generated code? Pure slop. And we all know it.&lt;br&gt;
But here's the worst part - you get so used to this garbage that you don't even understand your own fking code anymore. I hate AI, but I'm accustomed to it. We all are.&lt;/p&gt;

&lt;p&gt;That's the real tragedy. We're all becoming prompt monkeys who can't even read what we just "wrote."&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>I rewrote Mermaid integration for Svelte 5 - Turns out everyone's been doing it wrong</title>
      <dc:creator>Bishwas Bhandari</dc:creator>
      <pubDate>Sun, 31 Aug 2025 16:02:21 +0000</pubDate>
      <link>https://forem.com/developerbishwas/i-rewrote-mermaid-integration-for-svelte-5-turns-out-everyones-been-doing-it-wrong-157p</link>
      <guid>https://forem.com/developerbishwas/i-rewrote-mermaid-integration-for-svelte-5-turns-out-everyones-been-doing-it-wrong-157p</guid>
      <description>&lt;p&gt;Two weeks ago I was building docs for a client project and needed some flowcharts. Simple, right? Wrong.&lt;/p&gt;

&lt;p&gt;Spent 3 hours fighting with existing Mermaid solutions. Configs breaking, themes not switching properly, SSR throwing fits. At one point I actually considered just drawing the diagrams in Figma and embedding images like some kind of barbarian.&lt;/p&gt;

&lt;p&gt;That's when it hit me - we've all been treating Mermaid like it's 2019. Import a library, fight with initialization, pray it works with your build system. But this is Svelte 5. We have runes now. Why are we still coding like peasants?&lt;/p&gt;

&lt;p&gt;So I built &lt;strong&gt;@friendofsvelte/mermaid&lt;/strong&gt; - the way diagrams should work in 2025.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight svelte"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;script&amp;gt;&lt;/span&gt;
  &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Mermaid&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@friendofsvelte/mermaid&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;architecture&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`graph TD
    A[Old way: Fight configs] --&amp;gt; B[Waste 3 hours]
    B --&amp;gt; C[Settle for broken solution]

    D[New way: Just works] --&amp;gt; E[Ship features]
    E --&amp;gt; F[Happy developers]`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/script&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;Mermaid&lt;/span&gt; &lt;span class="na"&gt;string=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;architecture&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Here's what I learned building this
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Everyone's overthinking Mermaid integration.&lt;/strong&gt; The existing solutions try to handle every possible edge case, every framework, every version. Result? Bloated APIs that feel foreign in Svelte.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Runes changed everything.&lt;/strong&gt; The internal state management is so clean now. What used to take 50 lines of reactive statements is now 5 lines of &lt;code&gt;$state&lt;/code&gt; and &lt;code&gt;$derived&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theming was the real pain point.&lt;/strong&gt; Most solutions make you fight CSS variables or override styles. This component just accepts a theme prop and handles the rest.&lt;/p&gt;

&lt;h3&gt;
  
  
  What this actually does
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Themes that switch instantly&lt;/strong&gt; - Dark mode, custom colors, no flicker&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proper TypeScript&lt;/strong&gt; - Full type safety, intelligent completion&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;All diagram types&lt;/strong&gt; - Flowcharts, sequence, Gantt, user journeys, the works&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error handling that helps&lt;/strong&gt; - Actually tells you what's wrong with your diagram syntax&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Responsive by default&lt;/strong&gt; - Works on phones without you thinking about it&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The controversial part
&lt;/h3&gt;

&lt;p&gt;Most Mermaid integrations are solving the wrong problem. They focus on compatibility and features instead of developer experience. But here's the thing - if your diagram component feels like work to use, you're less likely to document things properly.&lt;/p&gt;

&lt;p&gt;Good documentation tools should disappear. You should think about your content, not your tooling.&lt;/p&gt;

&lt;h3&gt;
  
  
  Real talk: Why this matters
&lt;/h3&gt;

&lt;p&gt;I've seen too many projects with terrible docs because the diagram tooling was a nightmare. Teams skip flowcharts, avoid sequence diagrams, and end up with docs that nobody understands.&lt;/p&gt;

&lt;p&gt;When adding diagrams is as easy as writing markdown, teams actually document things. And that makes better software.&lt;/p&gt;

&lt;h3&gt;
  
  
  Try it yourself
&lt;/h3&gt;

&lt;p&gt;Site: &lt;a href="https://mermaid-cjv.pages.dev/" rel="noopener noreferrer"&gt;https://mermaid-cjv.pages.dev/&lt;/a&gt;&lt;br&gt;
Install: &lt;code&gt;npm install @friendofsvelte/mermaid&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Svelte 5 only - because backward compatibility is the enemy of progress.&lt;/p&gt;

&lt;h3&gt;
  
  
  What I'm building next
&lt;/h3&gt;

&lt;p&gt;This is part of a bigger documentation toolkit I'm working on. Thinking about components for API docs, interactive code examples, maybe even live demo embedding. &lt;/p&gt;

&lt;p&gt;The goal? Make documentation so smooth that teams actually want to write it.&lt;/p&gt;

&lt;p&gt;What kind of diagrams are you adding to your projects? Anyone else frustrated with how complicated this stuff has become?&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>programming</category>
      <category>svelte</category>
    </item>
  </channel>
</rss>
