<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Jim L</title>
    <description>The latest articles on Forem by Jim L (@jim_l_efc70c3a738e9f4baa7).</description>
    <link>https://forem.com/jim_l_efc70c3a738e9f4baa7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3766233%2F709db6b4-8669-45ab-9e00-5fd8f0a97aba.png</url>
      <title>Forem: Jim L</title>
      <link>https://forem.com/jim_l_efc70c3a738e9f4baa7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jim_l_efc70c3a738e9f4baa7"/>
    <language>en</language>
    <item>
      <title>5 + 1 Indie Web Projects I Built Solo in 2026 (AI Tools, Finance, Pet Care)</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Sat, 18 Apr 2026 02:23:29 +0000</pubDate>
      <link>https://forem.com/jim_l_efc70c3a738e9f4baa7/5-1-indie-web-projects-i-built-solo-in-2026-ai-tools-finance-pet-care-236g</link>
      <guid>https://forem.com/jim_l_efc70c3a738e9f4baa7/5-1-indie-web-projects-i-built-solo-in-2026-ai-tools-finance-pet-care-236g</guid>
      <description>&lt;h2&gt;
  
  
  5 + 1 Indie Web Projects I Built Solo in 2026
&lt;/h2&gt;

&lt;p&gt;I'm Jim Liu, an independent developer based in Melbourne. Here are six side projects I've shipped solo over the last twelve months. None of them are VC-funded, all of them live on domains I own, and most were written on nights and weekends after day-job hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. OpenAI Tools Hub — honest AI tool reviews
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.openaitoolshub.org" rel="noopener noreferrer"&gt;OpenAI Tools Hub&lt;/a&gt; is an opinionated review site for the current wave of AI coding and writing tools: Claude Code, Cursor, GitHub Copilot, ChatGPT, Windsurf, Warp, and a few dozen more. Each review has a "how we tested" section, a pricing table pulled monthly, and a short "who should skip this" paragraph — because not every tool is worth its ticket price.&lt;/p&gt;

&lt;p&gt;It also ships ~36 free developer tools (LLM latency comparator, Claude-skills marketplace comparison, prompt cost calculator) at &lt;a href="https://www.openaitoolshub.org/tools" rel="noopener noreferrer"&gt;openaitoolshub.org/tools&lt;/a&gt;. Next.js 16, Cloudflare Workers, and a heavy bias against AI-generated listicles.&lt;/p&gt;
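&lt;p&gt;To give a flavour of what one of those tools does, here is a minimal sketch of the math behind a prompt cost calculator. The function name and the per-million-token prices are placeholders for illustration, not the site's actual code or any provider's real rates.&lt;/p&gt;

```python
# Minimal sketch of prompt cost math. Prices are placeholders, not real rates.
def prompt_cost(input_tokens, output_tokens, in_price_per_m, out_price_per_m):
    """Cost in dollars for one request, given per-million-token prices."""
    return (input_tokens * in_price_per_m + output_tokens * out_price_per_m) / 1_000_000

cost = prompt_cost(12_000, 1_500, in_price_per_m=3.00, out_price_per_m=15.00)
print(f"${cost:.4f}")  # $0.0585 at the placeholder rates
```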

&lt;h2&gt;
  
  
  2. SubSaver — save money on subscriptions
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://subsaver.click" rel="noopener noreferrer"&gt;SubSaver&lt;/a&gt; compares subscription prices across 30+ services (Netflix, Spotify, ChatGPT Plus, YouTube Premium, NordVPN, Adobe Creative Cloud) and shows how to get them cheaper through family plan sharing, annual billing, promo codes, and verified shared plans.&lt;/p&gt;

&lt;p&gt;The core hypothesis: most people overpay for streaming and SaaS subscriptions by 40–70%. My job is to show the math, not to sell anyone a scheme.&lt;/p&gt;
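&lt;p&gt;The math itself is nothing exotic. A hedged sketch of how annual billing plus seat sharing compounds into that range, using hypothetical prices and discounts rather than any service's real numbers:&lt;/p&gt;

```python
# Illustrative only: annual billing plus family-plan seat sharing.
# All prices and discount rates below are hypothetical.
def effective_monthly(list_price, annual_discount=0.16, seats=1):
    """Per-person monthly cost after an annual-billing discount and seat sharing."""
    return list_price * (1 - annual_discount) / seats

solo = 22.99                                            # hypothetical individual plan
shared = effective_monthly(32.99, annual_discount=0.16, seats=4)
print(f"per-seat: ${shared:.2f}, savings vs solo: {1 - shared / solo:.0%}")
```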

&lt;h2&gt;
  
  
  3. LowRiskTradeSmart — low-risk trading research
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.lowrisktradesmart.org" rel="noopener noreferrer"&gt;LowRiskTradeSmart&lt;/a&gt; focuses on covered calls, LOF premium arbitrage, and Hong Kong bond yield analysis. Its &lt;a href="https://www.lowrisktradesmart.org/tools/hk-bond-yield-comparator" rel="noopener noreferrer"&gt;HK bond yield comparator&lt;/a&gt; calculates 2026 iBond, Silver Bond, and Green Bond interactive yields from HKMA data. Multi-locale (EN / zh-CN / zh-HK).&lt;/p&gt;

&lt;h2&gt;
  
  
  4. AlphaGainDaily — covered-call and yield ETF insights
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://alphagaindaily.com" rel="noopener noreferrer"&gt;AlphaGainDaily&lt;/a&gt; tracks high-yield ETFs (YBTC / BCCC / BTCI / BAGY) and the newer wave of Bitcoin covered-call funds, including the Goldman Sachs Bitcoin Premium Income ETF (filed April 14, 2026). Plain-English weekly alpha analysis, no crypto moon-speak.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. LevelWalks — puzzle game walkthroughs
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://levelwalks.com" rel="noopener noreferrer"&gt;LevelWalks&lt;/a&gt; is my lightest project — step-by-step walkthroughs and level guides for mobile puzzle games. Aimed at casual players who want a nudge, not a full solution leaked. Fun to build, occasionally catches a small burst of Google traffic when a new game trends.&lt;/p&gt;




&lt;h2&gt;
  
  
  +1: PawAI Hub — free AI tools for pet owners
&lt;/h2&gt;

&lt;p&gt;Separately, I also run &lt;a href="https://pawaihub.com" rel="noopener noreferrer"&gt;PawAI Hub&lt;/a&gt; — a free hub of honest pet-care tools for dog and cat owners. It has a dog food calculator (calories by breed / weight / activity), a cat age in human years calculator that drops the "× 7" myth in favor of the actual biological curve, an AI breed identifier (photo → top 3 guesses), an emotion reader that parses body language from a photo, and a training Q&amp;amp;A backed by an LLM with cited sources.&lt;/p&gt;
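&lt;p&gt;For anyone wondering what "the actual biological curve" looks like: the widely cited veterinary rule of thumb is roughly 15 human years for a cat's first year, about 9 more for the second, then about 4 per year after that. A sketch of that rule (not PawAI Hub's exact code):&lt;/p&gt;

```python
def cat_age_in_human_years(cat_years):
    """Common veterinary rule of thumb, not the simplistic "x 7" myth:
    year 1 is roughly 15 human years, year 2 adds about 9 more,
    and every year after that adds about 4."""
    if cat_years > 2:
        return 24 + 4 * (cat_years - 2)
    if cat_years > 1:
        return 15 + 9 * (cat_years - 1)
    return 15 * cat_years

print(cat_age_in_human_years(1))   # 15
print(cat_age_in_human_years(2))   # 24
print(cat_age_in_human_years(10))  # 56
```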

&lt;p&gt;Built solo in the same stack (Next.js, Cloudflare Workers, D1, R2). No signup, no dark patterns.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why post this here?
&lt;/h2&gt;

&lt;p&gt;Because I see a lot of "indie hacker portfolio" posts on DEV and I've never written one for mine. If anything here resonates — especially the "review site with downsides admitted" approach from OpenAI Tools Hub, or the D1-runtime blog pipeline powering PawAI Hub — I'm happy to write a follow-up that goes deep on stack choices and what broke. Leave a comment and I'll pick the most-requested topic.&lt;/p&gt;

&lt;p&gt;Not looking for traffic. Just putting names to projects.&lt;/p&gt;

</description>
      <category>article</category>
    </item>
    <item>
      <title>5 Small Web Projects I Built as an Indie Developer in Melbourne</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Fri, 17 Apr 2026 02:35:18 +0000</pubDate>
      <link>https://forem.com/jim_l_efc70c3a738e9f4baa7/5-small-web-projects-i-built-as-an-indie-developer-in-melbourne-4c45</link>
      <guid>https://forem.com/jim_l_efc70c3a738e9f4baa7/5-small-web-projects-i-built-as-an-indie-developer-in-melbourne-4c45</guid>
      <description>&lt;p&gt;I'm Jim, a Melbourne-based indie developer. Over the last year I've shipped five small web projects across AI tools, subscription management, Hong Kong investing, crypto research, and daily puzzle games. Here's a quick walkthrough of each, what problem it solves, and what stack it uses.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. OpenAI Tools Hub
&lt;/h2&gt;

&lt;p&gt;A free directory of AI tools with honest reviews and head-to-head comparisons. Covers ChatGPT, Claude, Cursor, GitHub Copilot, Midjourney, and 50+ other tools. Built as a Next.js 15 app on a VPS, with a Supabase-backed blog pipeline so I can publish new comparisons without redeploying.&lt;/p&gt;

&lt;p&gt;Link: &lt;a href="https://openaitoolshub.org/" rel="noopener noreferrer"&gt;openaitoolshub.org&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. SubSaver
&lt;/h2&gt;

&lt;p&gt;A subscription manager that shows you how to save on Netflix, Spotify, ChatGPT Plus, YouTube Premium, NordVPN, and 30+ other subscriptions through family plans, annual billing, and verified shared plans. Runs on Cloudflare Workers via OpenNext, with Supabase Postgres for the content tables.&lt;/p&gt;

&lt;p&gt;Link: &lt;a href="https://subsaver.click/" rel="noopener noreferrer"&gt;subsaver.click&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Low Risk Trade Smart
&lt;/h2&gt;

&lt;p&gt;Hong Kong ETFs, HK IPO strategies (打新), LOF premium arbitrage, and low-risk trading guides for Asia-Pacific investors. Trilingual (English, Simplified Chinese, Cantonese). Also a Next.js site, with a DB-driven catch-all route for the multi-locale blog.&lt;/p&gt;

&lt;p&gt;Link: &lt;a href="https://lowrisktradesmart.org/" rel="noopener noreferrer"&gt;lowrisktradesmart.org&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. AlphaGain Daily
&lt;/h2&gt;

&lt;p&gt;Daily crypto news, DeFi updates, macro research, and long-term portfolio insights. Covers Bitcoin, Ethereum staking, Solana, and the major L1/L2 ecosystems. Prisma + Postgres for the news feed, and a lightweight editorial workflow so I can post while reading my feeds in the morning.&lt;/p&gt;

&lt;p&gt;Link: &lt;a href="https://alphagaindaily.com/" rel="noopener noreferrer"&gt;alphagaindaily.com&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. LevelWalks
&lt;/h2&gt;

&lt;p&gt;A free daily puzzle and brain training platform featuring logic grid puzzles, word games, sudoku, nonograms, and a MindSort solitaire variant I built for seniors in my family. Cloudflare Pages + static JSON for levels and blog posts. Zero runtime server, which keeps latency and cost low.&lt;/p&gt;

&lt;p&gt;Link: &lt;a href="https://levelwalks.com/" rel="noopener noreferrer"&gt;levelwalks.com&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;I'd love feedback on any of these — especially from other indie builders. What stacks are you running, and what do you wish I'd built differently?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How I Fixed 3 Cannibalizing Blog Pages — Real GSC Data + Next.js Fix</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Fri, 17 Apr 2026 01:41:22 +0000</pubDate>
      <link>https://forem.com/jim_l_efc70c3a738e9f4baa7/how-i-fixed-3-cannibalizing-blog-pages-real-gsc-data-nextjs-fix-4f7b</link>
      <guid>https://forem.com/jim_l_efc70c3a738e9f4baa7/how-i-fixed-3-cannibalizing-blog-pages-real-gsc-data-nextjs-fix-4f7b</guid>
      <description>&lt;p&gt;Google Search Console flagged something odd on one of my Next.js blogs this week: three different pages were all ranking for the same keyword at positions 5 to 8 — but not one of them had a single click.&lt;/p&gt;

&lt;p&gt;That is textbook keyword cannibalization, and it took me about thirty minutes to fix. The part I found interesting is that the fix is almost entirely at the content layer, not the technical layer. Next.js already gives you the tools you need — the question is whether your frontmatter and internal linking are doing what they should.&lt;/p&gt;

&lt;p&gt;Here is the full walkthrough with the actual data.&lt;/p&gt;




&lt;h2&gt;
  
  
  What GSC Actually Showed
&lt;/h2&gt;

&lt;p&gt;Pulling the &lt;code&gt;queries&lt;/code&gt; report for the last 28 days and filtering to the problematic term, I got something like this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;URL slug&lt;/th&gt;
&lt;th&gt;Position&lt;/th&gt;
&lt;th&gt;Impressions&lt;/th&gt;
&lt;th&gt;Clicks&lt;/th&gt;
&lt;th&gt;CTR&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;page-a&lt;/td&gt;
&lt;td&gt;5.1&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;page-b&lt;/td&gt;
&lt;td&gt;5.2&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;page-c&lt;/td&gt;
&lt;td&gt;5.6&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For the same four-word query, Google was seeing three pages it judged roughly equally relevant, with none clearly the best answer. Searchers got confused, and none of them clicked.&lt;/p&gt;

&lt;p&gt;Meanwhile the same three pages had &lt;em&gt;different&lt;/em&gt; keyword wins elsewhere:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;URL slug&lt;/th&gt;
&lt;th&gt;Winning KW&lt;/th&gt;
&lt;th&gt;CTR&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;page-b&lt;/td&gt;
&lt;td&gt;"X vs Y"&lt;/td&gt;
&lt;td&gt;9.1%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;page-c&lt;/td&gt;
&lt;td&gt;"Z vs Y"&lt;/td&gt;
&lt;td&gt;22.2%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;So the pages themselves were fine. Each had a specific comparison angle that was working. The problem was the shared, broader keyword — they were all undifferentiated on it, and Google could not decide which to rank.&lt;/p&gt;
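&lt;p&gt;That signature, multiple pages earning impressions on one query with zero clicks between them, is easy to flag programmatically across a whole GSC export. A sketch with hypothetical rows; the real export's column layout is an assumption here:&lt;/p&gt;

```python
from collections import defaultdict

def find_cannibalized(rows):
    """Flag queries where 2+ pages earn impressions but zero total clicks,
    i.e. the cannibalization signature from the tables above."""
    by_query = defaultdict(list)
    for query, page, clicks, impressions in rows:
        if impressions > 0:
            by_query[query].append((page, clicks))
    return [
        q for q, pages in by_query.items()
        if len(pages) >= 2 and sum(c for _, c in pages) == 0
    ]

# Hypothetical (query, page, clicks, impressions) rows from a GSC export
rows = [
    ("broad four-word query", "/blog/page-a", 0, 14),
    ("broad four-word query", "/blog/page-b", 0, 12),
    ("broad four-word query", "/blog/page-c", 0, 11),
    ("x vs y", "/blog/page-b", 2, 22),
]
print(find_cannibalized(rows))  # ['broad four-word query']
```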

&lt;h2&gt;
  
  
  The Diagnosis (5 minutes)
&lt;/h2&gt;

&lt;p&gt;Open each page's frontmatter. Look at the title and description. Do any of them look nearly identical?&lt;/p&gt;

&lt;p&gt;Mine looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# page-a&lt;/span&gt;
&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Tools&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;W"&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;tools&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;compared.&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Y,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;W&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;fees,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;returns..."&lt;/span&gt;

&lt;span class="c1"&gt;# page-b&lt;/span&gt;
&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Tools&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;—&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;W&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Fees&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;and&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Returns&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Compared"&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;tools&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;comparison.&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.2-0.8%,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.35-0.65%..."&lt;/span&gt;

&lt;span class="c1"&gt;# page-c&lt;/span&gt;
&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Which&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;One&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Fits&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Your&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Style"&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;W&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;comparison.&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Fees&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.1-0.8%,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;honest&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;downsides..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pages A and B were nearly identical. Both listed Y, Z, and W in the title, so Google saw them as the same intent page. Page C was doing well on its specific two-way comparison term (22.2% CTR), but its description still mentioned W, which pulled it into the broader three-way competition.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Fix: Differentiate Intent, Don't Canonicalize
&lt;/h2&gt;

&lt;p&gt;The first instinct is to add &lt;code&gt;canonical&lt;/code&gt; meta pointing everything at one page. I decided against that for two reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The pages have &lt;strong&gt;different specific-term wins&lt;/strong&gt; (9.1% and 22.2% CTR on their own terms). Canonicalizing everything to page B would lose those.&lt;/li&gt;
&lt;li&gt;Once you canonicalize a page, Google treats it as a duplicate and may stop crawling it meaningfully. That is reversible, but not cheap.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Instead: &lt;strong&gt;differentiate the titles and descriptions to match different search intents&lt;/strong&gt;, and let internal linking consolidate topic authority on a pillar.&lt;/p&gt;

&lt;p&gt;Page A became beginner-focused (new to the space):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Beginners:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Y's&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Minimum,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Core,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;and&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;W&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Fund&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Smart"&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;First-time&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong?&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Y's&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;min,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Core,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;W&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Fund&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Smart&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;compared&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;beginners&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;—&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;plus&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;when&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;DIY&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;is&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;actually&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cheaper."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Page B became the pillar (canonical target for the broad term):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Comparison:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;W&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Fees,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Returns,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;and&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;MPF"&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;comparison,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;April&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;2026.&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.2-0.8%,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.35-0.65%,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;W&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.25-0.6%&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;—&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;full&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;fee&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;stack,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;MPF&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;integration..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Page C stayed narrow (the 2-way compare winner):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Which&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;One&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Fits&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Your&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Investment&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Style"&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;head-to-head,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;April&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;2026.&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Fees&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;from&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.1-0.8%,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;real&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;portfolio&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;allocations,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ERAA&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Core&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;philosophy..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the change on page C: I removed &lt;code&gt;vs W&lt;/code&gt; from the description. That single edit narrowed the search-intent match, so the page stops competing for the broad term.&lt;/p&gt;
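&lt;p&gt;One way to keep this from regressing is a quick overlap check over the frontmatter: if two descriptions share most of their words, they are probably competing for the same intent. A rough sketch; the sample strings and the 0.6 threshold are my own guesses, not a measured cutoff:&lt;/p&gt;

```python
from itertools import combinations

def token_overlap(a, b):
    """Jaccard similarity of lowercase word sets, a rough proxy for
    'Google may read these as the same intent'."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta.intersection(tb)) / len(ta.union(tb))

# Hypothetical post-fix descriptions, abbreviated
descriptions = {
    "page-a": "First-time X in Hong Kong? Compared for beginners.",
    "page-b": "Hong Kong X comparison. Full fee stack, MPF integration.",
    "page-c": "Y vs Z Hong Kong head-to-head. Real portfolio allocations.",
}
for (name_a, desc_a), (name_b, desc_b) in combinations(descriptions.items(), 2):
    if token_overlap(desc_a, desc_b) > 0.6:
        print(f"possible cannibalization: {name_a} vs {name_b}")
```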

&lt;h2&gt;
  
  
  The Internal Linking Piece
&lt;/h2&gt;

&lt;p&gt;Differentiated titles are only half the fix. The pillar page (B) needs to accumulate topical authority from the satellite pages (A and C). So I added a Related Reading callout at the top of A and C:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gt"&gt;&amp;gt; **Want the full Hong Kong X landscape?** This article is a head-to-head between Y and Z only. For W added to the mix, see our [X Hong Kong Comparison](/en/blog/pillar-slug) pillar.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In Next.js markdown/MDX this is just a standard link: &lt;code&gt;remark-gfm&lt;/code&gt; renders the blockquote, and relative URLs work as plain anchors (or map &lt;code&gt;a&lt;/code&gt; to the &lt;code&gt;&amp;lt;Link&amp;gt;&lt;/code&gt; component in your MDX components if you want client-side navigation). No special config.&lt;/p&gt;

&lt;p&gt;Two reasons this matters more than most people think:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;It signals pillar intent to Google.&lt;/strong&gt; When satellite pages consistently link to a specific page as the "full" version, Google consolidates ranking signals there.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It improves UX.&lt;/strong&gt; Someone landing on page C who actually wanted the three-way compare now has one click to the pillar.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What I Did Not Do
&lt;/h2&gt;

&lt;p&gt;I did not:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Change slugs (that would mean 301 redirects and a rankings reset).&lt;/li&gt;
&lt;li&gt;Add &lt;code&gt;rel=canonical&lt;/code&gt; across pages.&lt;/li&gt;
&lt;li&gt;Touch the sitemap.&lt;/li&gt;
&lt;li&gt;Request reindexing manually (IndexNow handled it automatically — more on that below).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The fix is &lt;strong&gt;title frontmatter + description frontmatter + one markdown callout per satellite page&lt;/strong&gt;. That is a 10-line diff per file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pushing the Fix Live
&lt;/h2&gt;

&lt;p&gt;Next.js blog, deployed via GitHub Actions to a VPS. The commit was three file edits. CI ran in 5 minutes 36 seconds. Page built, deployed, verified with curl.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; https://mysite.example.com/en/blog/pillar-slug | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-oE&lt;/span&gt; &lt;span class="s2"&gt;"Hong Kong Comparison"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Returns the new title. Good.&lt;/p&gt;

&lt;h2&gt;
  
  
  IndexNow for Fast Re-crawl
&lt;/h2&gt;

&lt;p&gt;One thing I did want: &lt;strong&gt;fast re-crawl&lt;/strong&gt;, because Google's existing cached version of those three pages still showed the old titles. If a searcher saw the stale cached result, they would click based on old framing. I wanted Google to refresh those specific URLs today, not in two weeks.&lt;/p&gt;

&lt;p&gt;IndexNow does this. It is a simple API supported by Bing, Yandex, and other engines (Google does not currently support it, so its direct effect is on the Bing and Yandex side). The request is one POST, plus a key file served at your site root.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;

&lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;host&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mysite.example.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;YOUR_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;keyLocation&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://mysite.example.com/YOUR_KEY.txt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;urlList&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://mysite.example.com/en/blog/page-a&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://mysite.example.com/en/blog/pillar&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://mysite.example.com/en/blog/page-c&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;endpoint&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://api.indexnow.org/indexnow&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://www.bing.com/indexnow&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://yandex.com/indexnow&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;endpoint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;endpoint&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three endpoints, three 200/202 responses, done in under a second. Bing typically re-crawls within 24-48 hours. In my experience, Googlebot follows Bingbot traffic spikes surprisingly closely, so the effect often shows up indirectly within a week.&lt;/p&gt;
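One prerequisite worth double-checking before you submit: IndexNow validates ownership by fetching `/<key>.txt` from your root and comparing its contents to the key in the payload. A tiny pure-function sketch of that check (the key value is a placeholder):

```python
def key_file_ok(status_code: int, body: str, key: str) -> bool:
    # IndexNow expects the key file to return 200 and contain exactly the key
    return status_code == 200 and body.strip() == key

# simulating a fetch of https://mysite.example.com/YOUR_KEY.txt
print(key_file_ok(200, "YOUR_KEY\n", "YOUR_KEY"))  # -> True
print(key_file_ok(404, "", "YOUR_KEY"))            # -> False
```

If this check fails, every endpoint will reject the submission no matter how well-formed the POST is.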

&lt;h2&gt;
  
  
  The Thing I Almost Missed
&lt;/h2&gt;

&lt;p&gt;Before shipping, I triple-checked one thing: the page with the 22.2% CTR on its specific 2-way compare term. That was the best-performing page on the whole site for that angle. Canonical-ing it, changing its slug, or even over-editing its title could destroy that win.&lt;/p&gt;

&lt;p&gt;So that page got &lt;strong&gt;zero changes&lt;/strong&gt; except the Related Reading callout at the top. Description stayed the same. Title stayed the same. I only changed the other two pages' titles to deflect the broad-term competition away from it.&lt;/p&gt;

&lt;p&gt;It is easy to over-engineer an SEO fix. Find the page that is working and leave it alone. Change the pages that are stealing its share.&lt;/p&gt;

&lt;h2&gt;
  
  
  Results Timeline
&lt;/h2&gt;

&lt;p&gt;Position re-calibration on cannibalization fixes typically takes 2 to 4 weeks for Google to settle on which page wins for which intent. I will know by early May whether the pillar consolidates or whether the three pages re-split.&lt;/p&gt;

&lt;p&gt;What I am watching in GSC:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pillar page (B) impressions on the broad term — should &lt;strong&gt;go up&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Beginner page (A) and specific compare (C) impressions on the broad term — should &lt;strong&gt;go down&lt;/strong&gt; (by design)&lt;/li&gt;
&lt;li&gt;Specific compare (C) on its 2-way term — should stay flat or go up slightly&lt;/li&gt;
&lt;li&gt;Pillar page (B) CTR on the broad term — should be the biggest win, climbing from 0% into the 2-5% range&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If all four move that direction, the fix worked. If the pillar's impressions drop instead, something else is wrong — either the title is too narrow now, or the internal links need stronger anchor text.&lt;/p&gt;
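The checklist above can be sketched as a small script — the page labels, impression counts, and expected-direction map are all illustrative, not GSC API fields:

```python
def check_moves(before: dict, after: dict, expected: dict) -> dict:
    """Compare broad-term impressions per page and flag whether each
    moved in the direction the fix predicts ('up' or 'down')."""
    out = {}
    for page, direction in expected.items():
        delta = after.get(page, 0) - before.get(page, 0)
        moved = "up" if delta > 0 else "down" if delta < 0 else "flat"
        out[page] = (moved, moved == direction)
    return out

# hypothetical before/after impression counts on the broad term
before = {"pillar": 120, "beginner": 900, "compare": 400}
after = {"pillar": 480, "beginner": 610, "compare": 395}
print(check_moves(before, after,
                  {"pillar": "up", "beginner": "down", "compare": "down"}))
```

Every `(direction, ok)` pair coming back `True` means the consolidation is going the way you planned.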




&lt;h2&gt;
  
  
  The Takeaway for Next.js Devs
&lt;/h2&gt;

&lt;p&gt;Keyword cannibalization is almost always a content-layer problem masquerading as a technical one. Most stacks give you what you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;title&lt;/code&gt; and &lt;code&gt;description&lt;/code&gt; in frontmatter&lt;/li&gt;
&lt;li&gt;Internal linking via &lt;code&gt;&amp;lt;Link&amp;gt;&lt;/code&gt; or markdown&lt;/li&gt;
&lt;li&gt;Canonical URLs derived automatically from file path&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The work is in the audit and the differentiation, not in the code. Read your own descriptions out loud — if two of them are answering the same question with the same words, Google is going to think the same thing.&lt;/p&gt;

&lt;p&gt;Ten minutes of honest editing and your GSC report starts looking different in a month.&lt;/p&gt;

</description>
      <category>analytics</category>
      <category>nextjs</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
    <item>
      <title>I Switched from LangGraph to Mastra for My TypeScript Agents — 18 Hours vs 41</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Thu, 16 Apr 2026 02:31:29 +0000</pubDate>
      <link>https://forem.com/jim_l_efc70c3a738e9f4baa7/i-switched-from-langgraph-to-mastra-for-my-typescript-agents-18-hours-vs-41-nah</link>
      <guid>https://forem.com/jim_l_efc70c3a738e9f4baa7/i-switched-from-langgraph-to-mastra-for-my-typescript-agents-18-hours-vs-41-nah</guid>
      <description>&lt;p&gt;I spent three weekends in February trying to get a LangChain/LangGraph agent working in a Next.js app. By Sunday night of the third weekend, I had 41 hours logged, a mass of Python-to-TypeScript bridge code, and an agent that completed about 87% of what I threw at it.&lt;/p&gt;

&lt;p&gt;Then a friend sent me a link to Mastra. Four days later, I had the same agent running natively in TypeScript. 18 hours total. No bridge code. No subprocess spawning. No serialization headaches between Python and my frontend.&lt;/p&gt;

&lt;p&gt;I want to talk about what actually changed and where the rough edges still are.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with Python agents in a TypeScript stack
&lt;/h2&gt;

&lt;p&gt;My project is a multi-step research agent — it takes a topic, searches several sources, cross-references findings, and produces a structured summary. Standard stuff. The architecture is Next.js frontend, Vercel deployment, Postgres for state.&lt;/p&gt;

&lt;p&gt;LangGraph is excellent software. The graph abstraction for agent workflows makes sense. But here's what nobody tells you upfront: if your entire stack is TypeScript, using a Python agent framework means you're now maintaining two runtimes, two dependency trees, two deployment pipelines, and a serialization layer between them.&lt;/p&gt;

&lt;p&gt;I tried the LangChain.js port first. It's always a few versions behind the Python original. Some features exist in docs but not in the npm package. I filed two issues that turned out to be "not yet ported from Python." The community examples are 90% Python. Stack Overflow answers are Python. The mental overhead of translating between the two languages while debugging agent logic was genuinely draining.&lt;/p&gt;

&lt;p&gt;So when I saw Mastra — TypeScript-native, built by the team that made Gatsby, YC-backed, sitting at around 22K GitHub stars — I figured it was worth a weekend experiment.&lt;/p&gt;

&lt;h2&gt;
  
  
  What switching actually looked like
&lt;/h2&gt;

&lt;p&gt;Mastra's mental model is closer to how I already think about TypeScript applications. You define agents as objects with tools, instructions, and a model. Tools are just typed functions. Workflows (their equivalent of LangGraph's graphs) use a step-based API that chains with &lt;code&gt;.then()&lt;/code&gt; and &lt;code&gt;.branch()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here's what surprised me: I didn't need to learn a new paradigm. The agent definition reads like a regular TypeScript module. The tools have Zod schemas for input/output validation — something I was already using everywhere else in the app. Type inference flows through the entire chain.&lt;/p&gt;

&lt;p&gt;Rewriting my research agent took about 12 hours. The remaining 6 hours were spent on the retrieval pipeline (Mastra has a built-in RAG module with chunking and embedding support) and testing.&lt;/p&gt;

&lt;p&gt;The part I dreaded most — the multi-step workflow where the agent decides which sources to query based on initial results — turned out to be simpler than the LangGraph version. In LangGraph, I had conditional edges between nodes, a state schema in TypedDict, and a routing function. In Mastra, it's a workflow with &lt;code&gt;.branch()&lt;/code&gt; that returns the next step name. Both work. The Mastra version is about 60% less code and doesn't require me to think in graph theory.&lt;/p&gt;

&lt;h2&gt;
  
  
  The numbers that actually mattered
&lt;/h2&gt;

&lt;p&gt;After running both implementations against my test suite of 200 research queries:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task completion rate&lt;/strong&gt;: Mastra agent hit 94.2% vs 87.4% with LangGraph. Some of this is probably down to me writing better tool definitions the second time around, so take the comparison with appropriate skepticism. But the type safety caught several edge cases during development that I'd missed in the Python version — malformed tool outputs that would silently pass in Python but threw compile-time errors in TypeScript.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;P95 latency&lt;/strong&gt;: 1,240ms (Mastra) vs 2,450ms (LangGraph). The LangGraph number includes the Python subprocess overhead and JSON serialization round-trips. Not a fair comparison of the frameworks themselves — more a reflection of what happens when you eliminate a language boundary. If you're running LangGraph in a pure Python backend, the gap would narrow considerably.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment&lt;/strong&gt;: This is where I felt the biggest quality-of-life jump. &lt;code&gt;vercel deploy&lt;/code&gt; and you're done. 90 seconds. No Docker container for a Python runtime. No Lambda layer for dependencies. No cold start penalty from spinning up a Python process. It's just a Next.js app with some extra API routes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Mastra is still rough
&lt;/h2&gt;

&lt;p&gt;I'd be dishonest if I didn't mention the gaps.&lt;/p&gt;

&lt;p&gt;The ecosystem is young. LangChain has integrations with seemingly everything — obscure vector databases, every LLM provider, dozens of document loaders. Mastra covers the major ones (OpenAI, Anthropic, Google, Pinecone, PGVector) but if you need something niche, you're writing a custom integration.&lt;/p&gt;

&lt;p&gt;Documentation has improved a lot since I started, but there are still areas where I had to read the source code. The workflow error handling section, in particular, could use more examples.&lt;/p&gt;

&lt;p&gt;The community is growing fast but it's a fraction of LangChain's. When I hit a problem at 11pm, there were maybe three relevant GitHub discussions. With LangChain, there would have been a dozen Stack Overflow threads.&lt;/p&gt;

&lt;p&gt;And the agentic patterns — reflection, planning, multi-agent orchestration — are less battle-tested. LangGraph has been used in production by hundreds of companies. Mastra is getting there, but the edge cases in complex multi-agent setups are still being discovered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who should actually consider switching
&lt;/h2&gt;

&lt;p&gt;If you're running a Python backend and LangGraph works for you, I see no reason to switch. The framework is mature and well-supported.&lt;/p&gt;

&lt;p&gt;But if you're in the situation I was in — TypeScript stack, deploying to Vercel or Cloudflare, tired of maintaining a Python sidecar just for your agent logic — Mastra removes a real and ongoing source of friction. The 23 hours I saved on initial setup will compound every time I add a new tool or modify a workflow, because I'm working in one language instead of two.&lt;/p&gt;

&lt;p&gt;I'm three months in now. The agent handles roughly 400 queries per day in production. I haven't regretted the switch.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Running TypeScript agents in production? I'm curious what framework you landed on and whether you hit similar Python-bridge problems. Drop a comment — genuinely want to compare notes.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>productivity</category>
      <category>typescript</category>
    </item>
    <item>
      <title>How I built a LOF arbitrage monitor for HK/CN ETFs (and what I learned about 'free' alpha)</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Tue, 14 Apr 2026 01:22:05 +0000</pubDate>
      <link>https://forem.com/jim_l_efc70c3a738e9f4baa7/how-i-built-a-lof-arbitrage-monitor-for-hkcn-etfs-and-what-i-learned-about-free-alpha-234d</link>
      <guid>https://forem.com/jim_l_efc70c3a738e9f4baa7/how-i-built-a-lof-arbitrage-monitor-for-hkcn-etfs-and-what-i-learned-about-free-alpha-234d</guid>
      <description>&lt;p&gt;I keep seeing the same question in HK/SG investor chats: &lt;em&gt;"the S&amp;amp;P 500 QDII ETF is trading 5% above NAV again — is this free money?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Short answer: not really. But the idea — that on-exchange ETF prices can drift from their net asset value — is real enough that I wanted a dashboard that just told me, every 15 minutes, which Chinese LOF/QDII ETFs were trading most disconnected from the underlying. So I built one.&lt;/p&gt;

&lt;p&gt;This is the boring-but-useful write-up: what a LOF is, why premiums happen, what the pipeline looks like, and the three things I got wrong on the first try.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's a LOF, quickly
&lt;/h2&gt;

&lt;p&gt;LOF = Listed Open-Ended Fund. It's a mutual fund wrapper that also trades on-exchange. QDII LOFs are the ones that hold offshore assets — S&amp;amp;P 500, Nasdaq, HK tech, gold miners, etc.&lt;/p&gt;

&lt;p&gt;The premium/discount mechanic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;NAV&lt;/strong&gt; is published once a day (T+1 for offshore QDII — you get yesterday's value tomorrow morning).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;On-exchange price&lt;/strong&gt; moves live during the trading day.&lt;/li&gt;
&lt;li&gt;When retail piles into, say, 华夏纳斯达克 (the ChinaAMC Nasdaq QDII fund) after a big US overnight rally, the price can float well above the last-known NAV. That gap is the premium.&lt;/li&gt;
&lt;li&gt;In theory, the fund house can issue new units to arb it down. In practice, QDII quotas are capped, so premiums can persist for days.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So: premium ≠ free profit. It's mostly "the market is front-running tomorrow's NAV update." But &lt;em&gt;unusual&lt;/em&gt; premiums are worth watching, because that's where forced-selling and fat-finger trades show up.&lt;/p&gt;
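The premium in this article's sense is just the on-exchange price measured against the intra-day estimated NAV — a one-function sketch:

```python
def premium_pct(price: float, est_nav: float) -> float:
    """Premium (+) or discount (-) of on-exchange price vs estimated NAV, in %."""
    return (price / est_nav - 1.0) * 100.0

# a fund quoted at 1.05 against an estimated NAV of 1.00 -> 5% premium
print(round(premium_pct(1.05, 1.00), 2))  # -> 5.0
```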

&lt;h2&gt;
  
  
  The pipeline
&lt;/h2&gt;

&lt;p&gt;Stack ended up boringly simple. Four moving parts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Eastmoney REST  ─┐
                 ├─► Python collector (every 15 min, cron)
Tencent REST  ──┘          │
                           ▼
                     SQLite (append-only)
                           │
                           ▼
                Next.js /tools/lof-premium (ISR 15min)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No Kafka, no Redis, no Airflow. It's a 200-line Python script and a static-ish Next.js page.&lt;/p&gt;

&lt;h3&gt;
  
  
  Collector
&lt;/h3&gt;

&lt;p&gt;The collector is two functions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;fetch_realtime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# 东财 push2 API, returns last price + bid/ask
&lt;/span&gt;    &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://push2.eastmoney.com/api/qt/stock/get?secid=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;secid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;&amp;amp;fields=f43,f60,f169,f170&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="bp"&gt;...&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;fetch_nav&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# 天天基金 fundgz API, returns "估值" (intra-day NAV estimate)
&lt;/span&gt;    &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://fundgz.1234567.com.cn/js/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.js&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="c1"&gt;# returns JSONP; strip the wrapper, json.loads the middle
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One gotcha that cost me an afternoon: &lt;code&gt;fundgz&lt;/code&gt; returns HTML on weekends and holidays (a friendly "市场休市" ["market closed"] page) instead of the usual JSONP. The first version crashed every Saturday at 09:15 until I added a content-type check.&lt;/p&gt;
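A sketch of that guard — here checking the body shape rather than the Content-Type header. The `jsonpgz(...)` wrapper is fundgz's normal response shape; the sample payload fields are illustrative:

```python
import json
import re

def parse_fundgz(body: str):
    """Strip the JSONP wrapper; return None for the holiday HTML page."""
    if body.lstrip().startswith("<"):   # "market closed" HTML instead of JSONP
        return None
    m = re.match(r"\s*jsonpgz\((.*)\);?\s*$", body, re.S)
    return json.loads(m.group(1)) if m else None

print(parse_fundgz('jsonpgz({"fundcode": "161130", "gsz": "1.2345"});'))
print(parse_fundgz("<html>market closed</html>"))  # -> None
```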

&lt;h3&gt;
  
  
  Why not just use one source?
&lt;/h3&gt;

&lt;p&gt;East Money gives you price but not intra-day NAV estimate. Tiantian gives you NAV estimate but not L2 price. So you have to join them on the fund code. Cross-check also catches the case where one API starts returning stale data, which happens more than you'd think.&lt;/p&gt;

&lt;h3&gt;
  
  
  Storage
&lt;/h3&gt;

&lt;p&gt;Single SQLite file, one row per (code, timestamp). Append-only. ~300 funds × 26 snapshots/day × 365 days = ~3M rows/year. SQLite eats that for breakfast.&lt;/p&gt;

&lt;p&gt;I briefly tried Postgres. Moved back to SQLite after two weeks because the entire deploy is a file copy and backups are &lt;code&gt;cp lof.db lof.db.bak&lt;/code&gt;.&lt;/p&gt;
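A minimal sketch of the append-only table — the column names are my guess at the shape described above, not the actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # the real thing is a single file, lof.db
conn.execute("""
    CREATE TABLE snapshots (
        code    TEXT NOT NULL,   -- fund code
        ts_utc  TEXT NOT NULL,   -- snapshot time, stored in UTC
        price   REAL,            -- on-exchange last price
        est_nav REAL,            -- intra-day estimated NAV
        PRIMARY KEY (code, ts_utc)
    )
""")
conn.execute("INSERT INTO snapshots VALUES (?, ?, ?, ?)",
             ("161130", "2026-04-14T01:30:00Z", 1.05, 1.00))
premium = conn.execute(
    "SELECT price / est_nav - 1 FROM snapshots WHERE code = ?",
    ("161130",)).fetchone()[0]
print(round(premium, 4))  # -> 0.05
```

The `(code, ts_utc)` primary key also makes re-runs of the collector idempotent: a duplicate snapshot fails the insert instead of double-counting.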

&lt;h3&gt;
  
  
  Frontend
&lt;/h3&gt;

&lt;p&gt;Next.js 15, ISR with &lt;code&gt;revalidate: 900&lt;/code&gt;. The page is essentially a table sorted by absolute premium, with a tiny sparkline of the last 48 hours per fund.&lt;/p&gt;

&lt;p&gt;The sparkline was the part I over-engineered. First I pulled in a charting library (120KB), then I swapped it for a 40-line inline SVG component. Same visual, 3% of the bundle size.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three things I got wrong on the first try
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. I trusted the "premium" column on East Money
&lt;/h3&gt;

&lt;p&gt;The portal shows a premium column. It's computed off &lt;em&gt;yesterday's&lt;/em&gt; official NAV, not the intra-day estimate. For a QDII holding US stocks that rallied 2% overnight, "yesterday's NAV" understates the fund by 2% before the market even opens, so the premium column is systematically inflated on up days and depressed on down days.&lt;/p&gt;

&lt;p&gt;Using the estimated NAV instead (the one Tiantian publishes intra-day) cut the noise dramatically. The high-premium list used to be "whatever went up last night in the US." Now it's actually unusual positioning.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. I assumed 15-minute cadence was fine
&lt;/h3&gt;

&lt;p&gt;It mostly is. But around 09:30 and 14:57 (CN market open / close auction) the price moves 0.5–2% in a single minute. A 15-minute snapshot misses those.&lt;/p&gt;

&lt;p&gt;Compromise: 15-min during the day, 1-min windows around open/close auctions. &lt;code&gt;cron&lt;/code&gt; with two schedules.&lt;/p&gt;
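Sketched as a crontab (server in UTC, so the CN session 09:30-15:00 CST maps to 01:30-07:00 UTC; paths and exact windows are illustrative):

```shell
# 15-min snapshots through the session
*/15 1-6 * * 1-5  /usr/bin/python3 /opt/lof/collect.py
# 1-min window around the open auction (09:25-09:35 CST)
25-35 1 * * 1-5   /usr/bin/python3 /opt/lof/collect.py
# 1-min window around the close auction (14:50-15:00 CST)
50-59 6 * * 1-5   /usr/bin/python3 /opt/lof/collect.py
```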

&lt;h3&gt;
  
  
  3. I forgot time zones, twice
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Tiantian returns Beijing time with no tz marker.&lt;/li&gt;
&lt;li&gt;East Money returns Unix timestamps in ms.&lt;/li&gt;
&lt;li&gt;My server runs UTC.&lt;/li&gt;
&lt;li&gt;My browser renders in Sydney time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First bug: charts were off by 8 hours. Second bug: I "fixed" it by hard-coding +8, then flew to Sydney, and everything shifted again.&lt;/p&gt;

&lt;p&gt;Final rule: store UTC in SQLite, tag Beijing explicitly at the API boundary, format to the browser's locale in the client. Boring, but it's the only approach that survives moving.&lt;/p&gt;
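The rule in code, stdlib only — Beijing has no DST, so a fixed UTC+8 offset is safe for this particular boundary:

```python
from datetime import datetime, timedelta, timezone

CN_TZ = timezone(timedelta(hours=8))  # Beijing: fixed UTC+8, no DST

def beijing_to_utc(naive_str: str) -> str:
    """Tag the tz-less Tiantian timestamp as Beijing time, return UTC ISO."""
    dt = datetime.strptime(naive_str, "%Y-%m-%d %H:%M").replace(tzinfo=CN_TZ)
    return dt.astimezone(timezone.utc).isoformat()

print(beijing_to_utc("2026-04-14 09:30"))  # -> 2026-04-14T01:30:00+00:00
```

Formatting back to the viewer's locale then happens purely in the browser, so moving cities can't shift the stored data.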

&lt;h2&gt;
  
  
  Does the data actually give you alpha?
&lt;/h2&gt;

&lt;p&gt;Honestly — mostly no. Here's what a month of logs looks like in practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;80% of the top-premium funds on any given day are just "US market gapped up overnight, retail is buying the reopen." By the time you see it, the arb is gone.&lt;/li&gt;
&lt;li&gt;15% are chronic premium funds — usually QDII with exhausted quota. You can't subscribe at NAV even if you wanted to. The premium is a structural access-fee, not mispricing.&lt;/li&gt;
&lt;li&gt;Maybe 5% are genuinely odd: a small-cap sector LOF that jumped on news nobody else was tracking, or a fund where the manager announced something that moved NAV estimate but not price yet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That 5% is the reason the dashboard exists. Not as a trading signal on its own, but as a "huh, why is this one weird?" attention filter.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd do differently if I rebuilt it today
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Push notifications instead of pull.&lt;/strong&gt; I still refresh the page. A Telegram bot that pings me when premium &amp;gt; 2σ would be 10x more useful.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Historical NAV backfill.&lt;/strong&gt; My DB starts from the day I deployed. If I'd backfilled 2 years from Tiantian's archive, regime comparisons ("is this premium unusual for &lt;em&gt;this fund&lt;/em&gt;?") would actually work.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skip the live sparkline.&lt;/strong&gt; Nobody looks at it. A single "premium now vs 7-day avg" number would convey more.&lt;/li&gt;
&lt;/ul&gt;
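The "premium &gt; 2σ" filter from the first bullet is only a few lines; a sketch, with the Telegram delivery and per-fund history loading left out:

```python
import statistics

def unusual(history: list[float], current: float, n_sigma: float = 2.0) -> bool:
    """True when the current premium sits outside this fund's own band."""
    mu = statistics.fmean(history)
    sigma = statistics.pstdev(history)
    return sigma > 0 and abs(current - mu) > n_sigma * sigma

week = [0.1, 0.2, 0.15, 0.18, 0.12]   # last week's premiums for one fund, in %
print(unusual(week, 1.5))   # -> True  (way outside the band)
print(unusual(week, 0.16))  # -> False (normal for this fund)
```

Keying the threshold to each fund's own history is what makes chronic-premium QDII funds stop firing alerts every day.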

&lt;h2&gt;
  
  
  Summary for the impatient
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;LOF premium = on-exchange price minus intra-day estimated NAV. Don't use the portal's published premium column; it's anchored on T-1 NAV.&lt;/li&gt;
&lt;li&gt;Two APIs, join on fund code, cross-check. SQLite is enough. 15-min cadence + 1-min around auctions.&lt;/li&gt;
&lt;li&gt;Most "premiums" are just timezone artifacts or quota constraints. The signal you want is the ~5% of funds that are genuinely priced weird today.&lt;/li&gt;
&lt;li&gt;Store UTC, tag at the boundary, format at render. Every time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you end up building something similar and hit a case I didn't cover — especially around holiday calendars for A-shares vs HK vs US simultaneously — I'd love to compare notes in the comments.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>data</category>
      <category>showdev</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>Karpathy's LLM Knowledge Base SEO: I applied the pattern for 12 months and here's what I learned</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Mon, 13 Apr 2026 02:48:41 +0000</pubDate>
      <link>https://forem.com/jim_l_efc70c3a738e9f4baa7/karpathys-llm-knowledge-base-x-seo-i-applied-the-pattern-for-12-months-and-heres-what-i-learned-51g9</link>
      <guid>https://forem.com/jim_l_efc70c3a738e9f4baa7/karpathys-llm-knowledge-base-x-seo-i-applied-the-pattern-for-12-months-and-heres-what-i-learned-51g9</guid>
      <description>&lt;h1&gt;
  
  
  Karpathy's LLM Knowledge Base × SEO: I applied the pattern for 12 months and here's what I learned
&lt;/h1&gt;

&lt;p&gt;On April 3, 2026, Andrej Karpathy posted a short but influential note about using LLMs to build personal knowledge bases. The premise: instead of RAG pipelines and vector databases, you manually clip raw sources into a &lt;code&gt;raw/&lt;/code&gt; folder, let an LLM distill them into structured wiki pages, and query the graph later with your LLM CLI of choice.&lt;/p&gt;

&lt;p&gt;No SaaS lock-in. No embeddings. No subscription. Just markdown and an LLM that knows the schema.&lt;/p&gt;

&lt;p&gt;I'd been drowning in scattered SEO research for a year — running openaitoolshub.org, an AI tools directory that's gone from DR 0 to DR 30 in 12 months, 126 articles, 130+ earned backlinks. My notes were spread across Notion, Kagi Assistant, local markdown files, a neglected Readwise Reader queue, and a thousand unread tabs. Karpathy's pattern gave me the discipline to consolidate everything into a single Obsidian vault that an LLM could maintain.&lt;/p&gt;

&lt;p&gt;This article walks through what I built, the key design decisions, and the one contradiction-preservation trick that changed how I think about personal knowledge bases entirely.&lt;/p&gt;

&lt;h2&gt;
  
  
  The five-step pattern
&lt;/h2&gt;

&lt;p&gt;Karpathy's original framing was simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Set up &lt;code&gt;raw/&lt;/code&gt;&lt;/strong&gt; — every source you encounter, unedited&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set up &lt;code&gt;wiki/&lt;/code&gt;&lt;/strong&gt; — structured concept pages the LLM maintains&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distill with an LLM&lt;/strong&gt; — run a pass where Claude/Codex/etc reads raw sources and updates wiki pages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-link with &lt;code&gt;[[wikilinks]]&lt;/code&gt;&lt;/strong&gt; — let the LLM suggest relationships between concepts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Query the graph with your CLI&lt;/strong&gt; — ask questions months later, get synthesized answers from the vault&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The genius is in step 3 — the LLM does the hard work of synthesis, contradiction detection, and cross-referencing. You do the reading and judgment calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I adapted it for SEO
&lt;/h2&gt;

&lt;p&gt;SEO is a moving target. What worked in Q4 2024 is wrong by Q2 2025. Google's March 2026 Core Update just rewrote half the playbook. I needed a system that could absorb new evidence and propagate updates without me manually re-reading every page.&lt;/p&gt;

&lt;p&gt;My vault structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;seo-obsidian/
├── Home.md                    # glassmorphism dashboard
├── CLAUDE.md                  # LLM operations guide
├── wiki/
│   ├── schema.md              # the concept-page template rulebook
│   ├── concepts/              # 12 SEO concept pages
│   ├── tools/                 # 3 tool profiles
│   ├── people/                # 1 person profile (Karpathy)
│   └── indexes/               # alphabetical catalogs
├── raw/
│   ├── README.md              # explains the three-layer architecture
│   ├── articles/              # long-form sources
│   └── practitioner-notes/    # curated short-form observations
└── maps/
    └── SEO-Domain-Map.canvas  # 21-node mind map
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every concept page follows a strict schema: &lt;code&gt;## TLDR&lt;/code&gt;, &lt;code&gt;## Key Points&lt;/code&gt;, &lt;code&gt;## Details&lt;/code&gt;, &lt;code&gt;## Applied Example&lt;/code&gt;, &lt;code&gt;## Related Concepts&lt;/code&gt;, &lt;code&gt;## Sources&lt;/code&gt;. The rigidity felt annoying at first, but it pays off at query time because Claude knows exactly where to look for each piece.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three design decisions worth discussing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Preserve contradictions instead of resolving them
&lt;/h3&gt;

&lt;p&gt;On April 10, Zhang Kai published a 602-prompt study claiming structured content (H2/bullets/tables) correlates with AI citation. On April 11, a Japanese SEO practitioner published experiments claiming structured data does NOT help AI understanding.&lt;/p&gt;

&lt;p&gt;In a traditional wiki I'd have to pick one. In the Karpathy pattern, both claims live in the vault. The Zhang Kai finding is in the main section of &lt;code&gt;geo-generative-engine-optimization.md&lt;/code&gt;. The Suzuki counter-evidence is in a &lt;code&gt;⚠️ Counter-Evidence&lt;/code&gt; callout right below it. When I query the vault with Claude, I get both cited.&lt;/p&gt;

&lt;p&gt;This is the single most important insight I took from applying the pattern: &lt;strong&gt;honest knowledge &amp;gt; confident answers&lt;/strong&gt;. The vault is a snapshot of the field's current state of confusion, not an attempt to pretend the confusion doesn't exist.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The ripple effect as the compounding mechanism
&lt;/h3&gt;

&lt;p&gt;When I add a new raw source, I don't manually update related concept pages. I tell Claude:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;claude
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Ingest raw/practitioner-notes/zhang-kai-602-prompt-geo-study.md 
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; following wiki/schema.md. Update all related concepts with the new 
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; evidence and flag any contradictions.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude then:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reads the new source&lt;/li&gt;
&lt;li&gt;Decides which of the 12 existing concept pages it affects&lt;/li&gt;
&lt;li&gt;Updates each one with the new evidence&lt;/li&gt;
&lt;li&gt;Flags contradictions against existing claims&lt;/li&gt;
&lt;li&gt;Updates the concept index&lt;/li&gt;
&lt;li&gt;Writes a log entry&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;One source → 5-15 pages updated → all in 45 seconds.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is what makes it compound. Most note-taking systems are linear (you add, you rarely re-read). This one is multiplicative — every new source makes the whole wiki incrementally smarter.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Strict concept-page schema &amp;gt; flexible notes
&lt;/h3&gt;

&lt;p&gt;I experimented with both. Flexible concept pages were easier to write but hell to query. Strict ones were slightly annoying to fill out but let Claude parse them reliably.&lt;/p&gt;

&lt;p&gt;The schema:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;---
aliases: []
tags: []
sources: []
cssclasses: [seo-brain-concept]
---

&lt;span class="gh"&gt;# Concept Title&lt;/span&gt;

&lt;span class="gu"&gt;## TLDR&lt;/span&gt;
One paragraph, 200-250 words. This is what AI engines cite.

&lt;span class="gu"&gt;## Key Points&lt;/span&gt;
5-8 bullet points.

&lt;span class="gu"&gt;## Details&lt;/span&gt;
The main content, 800-1500 words. Can have sub-sections.

&lt;span class="gu"&gt;## Applied Example&lt;/span&gt;
A concrete worked scenario.

&lt;span class="gu"&gt;## Related Concepts&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; [[concept-a]] — why it's related
&lt;span class="p"&gt;-&lt;/span&gt; [[concept-b]] — why it's related

&lt;span class="gu"&gt;## Sources&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; External URLs
&lt;span class="p"&gt;-&lt;/span&gt; raw/... paths
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every single concept page follows this. It's like a database schema — restrictive, but queryable.&lt;/p&gt;
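&lt;p&gt;Checking conformance is scriptable. Here's a minimal sketch (my own illustrative helper, not part of the vault tooling) that flags concept pages missing any of the required H2 sections:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import re

# The H2 sections every concept page must contain, per the schema above.
REQUIRED_SECTIONS = {"TLDR", "Key Points", "Details", "Applied Example",
                     "Related Concepts", "Sources"}

def missing_sections(markdown_text):
    """Return the required H2 headings absent from a concept page."""
    found = set(re.findall(r"^## (.+)$", markdown_text, flags=re.MULTILINE))
    return REQUIRED_SECTIONS - found

stub_page = "# GEO\n\n## TLDR\nstub\n\n## Key Points\n- a\n"
print(sorted(missing_sections(stub_page)))  # prints the four sections still missing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;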

&lt;h2&gt;
  
  
  Three concrete SEO insights that came out of the exercise
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Insight 1 — Mean AI-cited content length is 1,375 characters
&lt;/h3&gt;

&lt;p&gt;Zhang Kai's study measured the length of every fragment cited by ChatGPT, Perplexity, and Google AI Overview across 602 prompts. The mean was 1,375 characters — roughly 200-250 words, or about 10 sentences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical implication&lt;/strong&gt;: write TL;DR blocks of 200-250 words near the top of every article. Break the body into H2-bounded sections of 1,000-1,500 characters. That's the GEO sweet spot for citation.&lt;/p&gt;
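&lt;p&gt;This is easy to check mechanically. Here's a rough sketch (the 1,000-1,500 thresholds come from the study above; the splitting logic is my own simplification) that measures each H2-bounded section of a draft:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import re

def section_lengths(markdown_text):
    """Map each H2 section title to (char_count, in_sweet_spot)."""
    parts = re.split(r"^## ", markdown_text, flags=re.MULTILINE)
    report = {}
    for part in parts[1:]:  # parts[0] is anything before the first H2
        title, _, body = part.partition("\n")
        n = len(body.strip())
        # 1,000-1,500 characters is the citation sweet spot from the study.
        report[title] = (n, n in range(1000, 1501))
    return report

article = "intro\n## Short section\ntoo thin\n## Another\n" + "x" * 1200
for title, (chars, ok) in section_lengths(article).items():
    print(title, chars, "OK" if ok else "outside sweet spot")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;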

&lt;h3&gt;
  
  
  Insight 2 — Google's March 2026 Core Update targets 7 specific AI writing patterns
&lt;/h3&gt;

&lt;p&gt;Kill these and your content survives:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;"Not just X, but Y" constructions&lt;/li&gt;
&lt;li&gt;Em-dash overuse&lt;/li&gt;
&lt;li&gt;Triad lists ("powerful, elegant, and fast")&lt;/li&gt;
&lt;li&gt;Formulaic openers ("In today's fast-paced world...")&lt;/li&gt;
&lt;li&gt;Breathless enthusiasm ("game-changing")&lt;/li&gt;
&lt;li&gt;False-authority hedging ("It's worth noting that...")&lt;/li&gt;
&lt;li&gt;Broad-to-narrow openings&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I went through every article on openaitoolshub.org and stripped these patterns. Traffic stabilized. Articles that failed the update all shared these tells.&lt;/p&gt;
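&lt;p&gt;If you want to scan your own drafts, a crude checker looks like this (the regexes are my rough approximations of a few of the tells, not Google's actual detection logic):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import re

# Rough, illustrative patterns for a few of the seven tells. These are my
# own approximations, not anything published by Google.
AI_TELLS = {
    "not-just-but": re.compile(r"not just \w+[^.]{0,40}?, but", re.IGNORECASE),
    "formulaic opener": re.compile(r"in today's fast-paced world", re.IGNORECASE),
    "breathless": re.compile(r"\bgame-chang(ing|er)\b", re.IGNORECASE),
    "false-authority": re.compile(r"it's worth noting that", re.IGNORECASE),
}

def scan_for_tells(text):
    """Return the names of the tells found in a draft."""
    return [name for name, pattern in AI_TELLS.items() if pattern.search(text)]

draft = "In today's fast-paced world, this game-changing tool is not just fast, but elegant."
print(scan_for_tells(draft))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;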

&lt;h3&gt;
  
  
  Insight 3 — Free dofollow directories above DR 55 exist
&lt;/h3&gt;

&lt;p&gt;Conventional wisdom says free directories are DR 0-10 and useless. In practice, I found at least 12 free dofollow directories above DR 55. A field study in early April showed that adding 50 such backlinks moved a DR 46 site to DR 50 in one week.&lt;/p&gt;

&lt;p&gt;The misconception comes from the early 2010s when directory submission was spammed to death. Post-2024, curated directories (Navs Site, Acid Tools, Ben's Bites, ShowMySites, NextGen Tools) are legitimate editorial sources.&lt;/p&gt;

&lt;h2&gt;
  
  
  What tools I used (and didn't use)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Used&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Obsidian (free) for the vault UI&lt;/li&gt;
&lt;li&gt;Claude Code for the distillation + query layer&lt;/li&gt;
&lt;li&gt;Ahrefs (~$99/month, with the sem.3ue.com mirror for specific lookups)&lt;/li&gt;
&lt;li&gt;Google Search Console (free) — the most important SEO tool for indie devs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Explicitly NOT used&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No SEO course (they go stale)&lt;/li&gt;
&lt;li&gt;No paid link-building service (PBNs are a penalty landmine)&lt;/li&gt;
&lt;li&gt;No vector database (the whole point of the Karpathy pattern is avoiding this)&lt;/li&gt;
&lt;li&gt;No subscription SaaS tools beyond Ahrefs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal was to keep the tool budget under $100/month and replace expensive tools with LLM-assisted workflows. Mostly worked.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;I packaged the vault as "SEO Brain" for other indie devs. Free 5-concept starter kit is at openaitoolshub.org/en/seo-brain (canonical source, no Medium paywall). Full 12-concept Starter Edition is on Gumroad, $19 launch week, $29 regular.&lt;/p&gt;

&lt;p&gt;More importantly — if you're doing personal research in &lt;em&gt;any&lt;/em&gt; domain, I think Karpathy's LLM KB pattern is the right structure for 2026. Try it with your own domain (investing research, game dev, climate science, whatever) and let me know what you learn.&lt;/p&gt;

&lt;p&gt;The compounding is real. The contradictions-preserved discipline is the trick.&lt;/p&gt;

&lt;h2&gt;
  
  
  About the author
&lt;/h2&gt;

&lt;p&gt;Jim runs openaitoolshub.org (DR 30, 126 articles, solo) and four sister sites covering trading, SaaS, AI tools, and game directories. He writes about applying indie dev patterns to SEO at his main site.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article's canonical version lives at &lt;a href="https://www.openaitoolshub.org/en/seo-brain" rel="noopener noreferrer"&gt;openaitoolshub.org/en/seo-brain&lt;/a&gt;. Dev.to is a syndication copy.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>productivity</category>
      <category>tooling</category>
    </item>
    <item>
      <title>I Tried Microsoft Agent Framework 1.0 — Three Days In, Here Is What I Think</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Fri, 10 Apr 2026 03:39:54 +0000</pubDate>
      <link>https://forem.com/jim_l_efc70c3a738e9f4baa7/i-tried-microsoft-agent-framework-10-three-days-in-here-is-what-i-think-jdp</link>
      <guid>https://forem.com/jim_l_efc70c3a738e9f4baa7/i-tried-microsoft-agent-framework-10-three-days-in-here-is-what-i-think-jdp</guid>
      <description>&lt;h2&gt;
  
  
  The Merge Nobody Asked For But Everyone Needed
&lt;/h2&gt;

&lt;p&gt;Microsoft released Agent Framework 1.0 on April 7. The pitch: one SDK that fuses Semantic Kernel (enterprise middleware, telemetry, type safety) with AutoGen (multi-agent chat orchestration). No more stitching two libraries together with duct tape.&lt;/p&gt;

&lt;p&gt;I spent three days testing it on real work instead of toy examples. Here is what I found.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Works
&lt;/h2&gt;

&lt;p&gt;The graph-based workflow engine is the star. You define agent relationships as a directed graph — orchestrator hands off to researcher, researcher passes to coder, coder sends to reviewer. Each agent keeps its own session state.&lt;/p&gt;

&lt;p&gt;I built a four-agent pipeline that parsed GitHub issues, drafted code, ran tests, and generated PR descriptions. Total setup: around 120 lines of Python. The DevUI debugger runs locally and shows real-time message flows between agents. I caught two infinite-loop bugs through it that would have burned through my API budget otherwise.&lt;/p&gt;

&lt;p&gt;MCP support landed on day one. My agents could call external tools through the Model Context Protocol without custom wrappers. I connected a filesystem server and a web search tool in maybe 15 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where It Falls Short
&lt;/h2&gt;

&lt;p&gt;Python support feels rushed. The .NET SDK is polished — types, middleware hooks, proper async. The Python package works but documentation has gaps, and some features like the evaluation framework are .NET-only for now. If you are a Python shop, expect to read source code more than docs.&lt;/p&gt;

&lt;p&gt;A2A (Agent-to-Agent protocol) is version 1.0 but the ecosystem is basically Microsoft talking to Microsoft right now. Cross-framework interop with LangChain or CrewAI agents is not there yet. Give it six months.&lt;/p&gt;

&lt;p&gt;Boilerplate is real. Setting up a simple two-agent chat requires more ceremony than LangGraph or Claude Agent SDK. Fine for enterprise teams with dedicated infra — overkill for a weekend prototype.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Stacks Up
&lt;/h2&gt;

&lt;p&gt;I wrote a &lt;a href="https://www.openaitoolshub.org/en/blog/microsoft-agent-framework-review" rel="noopener noreferrer"&gt;full breakdown comparing Microsoft Agent Framework against Claude Agent SDK, LangGraph, and CrewAI&lt;/a&gt; on my site with actual code examples and benchmark numbers.&lt;/p&gt;

&lt;p&gt;Short version: Agent Framework wins on enterprise features, Claude SDK wins on simplicity, LangGraph wins on flexibility. Pick based on where you are running production workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who Should Care
&lt;/h2&gt;

&lt;p&gt;If your company already runs on Azure and uses Semantic Kernel, this is the obvious next step. The migration path from SK plugins to Agent Framework tools is nearly 1:1.&lt;/p&gt;

&lt;p&gt;If you are an indie developer testing the waters, I would start with Claude Agent SDK or LangGraph first. Lower friction, faster prototyping. Come back to Microsoft Agent Framework when you need enterprise observability or graph-based multi-agent workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Setup
&lt;/h2&gt;

&lt;p&gt;I tested on Python 3.12, WSL2 Ubuntu, with GPT-4.1 and Claude Opus as backend models. Cost for three days of experimentation: roughly $14 in API calls. The DevUI runs locally on port 5000 and uses about 200MB of RAM.&lt;/p&gt;

&lt;p&gt;One thing I appreciated: the framework does not force you into Azure. You can use any OpenAI-compatible endpoint, local models through Ollama, or Anthropic directly. The Azure AI Foundry integration is optional, not required.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;Microsoft Agent Framework fills a gap that has existed since enterprises started asking "how do I put AutoGen in production?" The answer: merge it with your enterprise middleware, add proper observability, ship it.&lt;/p&gt;

&lt;p&gt;Not revolutionary. But solid engineering that solves a real problem for a specific audience. Which is probably the more valuable outcome anyway.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>automation</category>
      <category>microsoft</category>
    </item>
    <item>
      <title>Google Just Showed Us What Happens When You Throw Out the Token-by-Token Playbook</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Thu, 09 Apr 2026 03:35:06 +0000</pubDate>
      <link>https://forem.com/jim_l_efc70c3a738e9f4baa7/google-just-showed-us-what-happens-when-you-throw-out-the-token-by-token-playbook-59b5</link>
      <guid>https://forem.com/jim_l_efc70c3a738e9f4baa7/google-just-showed-us-what-happens-when-you-throw-out-the-token-by-token-playbook-59b5</guid>
      <description>&lt;p&gt;Every LLM you have used works the same way. It predicts the next token, then the next, then the next. One at a time. Autoregressive generation. That is how GPT-4o works, how Claude works, how Gemini 2.5 Pro works.&lt;/p&gt;

&lt;p&gt;Google just said: what if we stop doing that?&lt;/p&gt;

&lt;p&gt;Gemini Diffusion generates text the way image models generate images. Instead of building a sentence left to right, it starts with noise and refines the entire output simultaneously. The claimed speedup is 5x over comparable autoregressive models.&lt;/p&gt;

&lt;p&gt;I have been thinking about what this actually means for the way I build things.&lt;/p&gt;

&lt;h2&gt;
  
  
  The speed problem nobody talks about
&lt;/h2&gt;

&lt;p&gt;When you are making a single API call, the difference between 2 seconds and 0.4 seconds does not matter much. But when you are running batch jobs — processing 500 documents, generating test cases for an entire codebase, summarizing a week of customer support tickets — that 5x adds up fast.&lt;/p&gt;

&lt;p&gt;I run a lot of batch processing through Claude and GPT-4o. A typical overnight job hits the API maybe 2,000 times. At current speeds that takes roughly 3 hours. If diffusion models actually deliver on the 5x claim, that same job finishes in 36 minutes. That changes whether I can run it during lunch instead of overnight.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I am skeptical
&lt;/h2&gt;

&lt;p&gt;The 5x number comes from controlled benchmarks. Real-world performance depends on context length, output complexity, and how the model handles edge cases. I have seen plenty of impressive benchmark numbers that fall apart when you throw messy real data at them.&lt;/p&gt;

&lt;p&gt;Also, diffusion models for text are genuinely new territory. Image diffusion had years of iteration before it got reliable. Text diffusion is maybe six months into serious research. The failure modes are different — you can get away with a slightly wrong pixel in an image, but a slightly wrong word in code breaks everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I am actually going to do
&lt;/h2&gt;

&lt;p&gt;Nothing yet. I am not rewriting any pipelines around a model that is still in research preview. But I am watching three things: whether the speed holds on long outputs (2000+ tokens), whether Google actually ships a usable API at a reasonable price, and whether code quality holds up. That last question is secondary if the pricing is wrong.&lt;/p&gt;

&lt;p&gt;If all three check out, I will probably move my batch processing over first and keep interactive coding on Claude. Speed matters less when you are pair programming. It matters a lot when you are processing data at scale.&lt;/p&gt;

&lt;p&gt;What got my attention is not this specific model. Someone proved the approach works at all. If Google can do it, Anthropic and OpenAI are probably working on their own versions. In a year we might look back at autoregressive-only models the way we look back at RNNs — technically functional but obviously not the final answer.&lt;/p&gt;

&lt;p&gt;Or the whole thing might hit a wall at 1000 tokens and we are back to business as usual. Could go either way.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>google</category>
      <category>machinelearning</category>
      <category>programming</category>
    </item>
    <item>
      <title>I Replaced Claude with Gemma 4 for a Weekend — Here's What Broke</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Wed, 08 Apr 2026 05:05:55 +0000</pubDate>
      <link>https://forem.com/jim_l_efc70c3a738e9f4baa7/i-replaced-claude-with-gemma-4-for-a-weekend-heres-what-broke-5bf9</link>
      <guid>https://forem.com/jim_l_efc70c3a738e9f4baa7/i-replaced-claude-with-gemma-4-for-a-weekend-heres-what-broke-5bf9</guid>
      <description>&lt;p&gt;I run five websites from Sydney and use AI models daily — for blog drafts, code fixes, SEO analysis, quick research. Most of my workflow runs on Claude Sonnet because it's consistent and doesn't need babysitting. So when Google dropped Gemma 4 on April 2, 2026 under Apache 2.0, I figured I'd stress-test it over a weekend before forming any opinions.&lt;/p&gt;

&lt;p&gt;Short version: it's genuinely impressive in places, mildly annoying in others, and the license alone changes a lot of the math.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Gemma 4 Actually Is
&lt;/h2&gt;

&lt;p&gt;Gemma 4 is Google's latest open-weight model family. Released April 2, 2026. Apache 2.0 license, which means you can use it commercially, modify it, redistribute it — no royalties, no restrictions on derivative works. That's meaningful.&lt;/p&gt;

&lt;p&gt;The family ships in several sizes: 4B, 12B, 27B, and a new 96B variant. The 27B is the one most people will actually run locally (it needs roughly 20GB of VRAM, or 12GB quantized to Q4).&lt;/p&gt;

&lt;p&gt;It's multimodal — image understanding built in, not bolted on. And there's genuine agentic scaffolding baked into the instruction-tuned variants, meaning it handles multi-step tool use more coherently than Gemma 3 did.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Actually Tested
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Test 1: Code generation for a Next.js component&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I gave it a prompt I regularly use with Claude: build me a React component that fetches data from a Supabase table, handles loading/error states, and renders a responsive table.&lt;/p&gt;

&lt;p&gt;Gemma 4 27B (via Ollama, quantized) produced working code on the second attempt. First attempt had a minor type error in TypeScript. Second attempt fixed it without me explaining what was wrong.&lt;/p&gt;

&lt;p&gt;Claude Sonnet would have nailed this on the first try. But Claude costs money per token. Gemma 4 running locally costs electricity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test 2: Document analysis (multimodal)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I threw a screenshot of a GA4 analytics dashboard at it and asked it to summarize traffic trends. Gemma 4 read the numbers correctly but its interpretation was generic. It told me sessions were down 14% without offering any hypothesis about why. Claude tends to make inferences. Gemma 4 reports rather than reasons.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test 3: SEO content editing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I fed it a 1,200-word blog post and asked it to identify thin sections. This went better than expected. It flagged two genuinely weak paragraphs, suggested adding a comparison table, and offered three alternative headline options that were actually good.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Surprise (Good and Bad)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Good surprise&lt;/strong&gt;: The 12B model is more capable than it has any right to be. I ran it on a machine with 8GB VRAM and it handled most single-turn tasks at a quality level I'd compare to GPT-3.5 era.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bad surprise&lt;/strong&gt;: Agentic tasks with multi-step tool use hit context length issues faster than expected. Around step four of a five-step workflow, it started losing track of earlier context.&lt;/p&gt;

&lt;p&gt;Also: it's verbose by default. Ask it a yes/no question with nuance, it writes three paragraphs.&lt;/p&gt;




&lt;h2&gt;
  
  
  How It Compares
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Gemma 4 27B&lt;/th&gt;
&lt;th&gt;Claude Sonnet 4.5&lt;/th&gt;
&lt;th&gt;GPT-4o&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cost&lt;/td&gt;
&lt;td&gt;Free (local) or ~$0.10/M via API&lt;/td&gt;
&lt;td&gt;~$3/$15 per M tokens&lt;/td&gt;
&lt;td&gt;~$2.5/$10 per M tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context&lt;/td&gt;
&lt;td&gt;128K&lt;/td&gt;
&lt;td&gt;200K&lt;/td&gt;
&lt;td&gt;128K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code quality&lt;/td&gt;
&lt;td&gt;Good, 2nd attempt&lt;/td&gt;
&lt;td&gt;Excellent, 1st attempt&lt;/td&gt;
&lt;td&gt;Very good&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;License&lt;/td&gt;
&lt;td&gt;Apache 2.0 (fully open)&lt;/td&gt;
&lt;td&gt;Proprietary&lt;/td&gt;
&lt;td&gt;Proprietary&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The license column is doing more work than it appears. If you need AI costs that don't scale with usage, or on-prem deployment for compliance, Gemma 4 is now a serious option.&lt;/p&gt;




&lt;h2&gt;
  
  
  Who Should Use This
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Worth it if:&lt;/strong&gt; you're self-hosting, have compliance requirements, want to run fine-tuning experiments, or are budget-conscious.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stick with Claude/GPT if:&lt;/strong&gt; you need top-tier multi-step reasoning, heavy document inference, or don't want to manage infrastructure.&lt;/p&gt;




&lt;p&gt;I'm not switching my main workflow off Claude. But I've moved quick classification tasks and a couple of internal scripts to a local Gemma 4 12B instance. That's probably $30-40/month in API calls I won't be making.&lt;/p&gt;

&lt;p&gt;Not a revolution, but a genuine shift in what's viable to run without a credit card.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>I Built 6 Free SEO Tools in One Day — Here's What I Learned</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Mon, 06 Apr 2026 01:38:48 +0000</pubDate>
      <link>https://forem.com/jim_l_efc70c3a738e9f4baa7/i-built-6-free-seo-tools-in-one-day-heres-what-i-learned-4gh6</link>
      <guid>https://forem.com/jim_l_efc70c3a738e9f4baa7/i-built-6-free-seo-tools-in-one-day-heres-what-i-learned-4gh6</guid>
      <description>&lt;p&gt;SEO tools are everywhere, but most are locked behind signups, API limits, or subscription walls. I wanted something I could actually use without friction — so I built 6 tools over a weekend and open-sourced the approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tools
&lt;/h2&gt;

&lt;p&gt;All run in-browser (4 client-side) or via lightweight server fetch (2 API routes). Zero external API costs.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Schema Markup Generator
&lt;/h3&gt;

&lt;p&gt;Visual form → valid JSON-LD for FAQPage, Article, HowTo, Product, Organization, BreadcrumbList. Click "Copy HTML" and paste into your &lt;code&gt;&amp;lt;head&amp;gt;&lt;/code&gt;. One-click link to Google Rich Results Test.&lt;/p&gt;

&lt;p&gt;Why I built it: I was manually typing JSON-LD for every blog post. Now it takes 30 seconds.&lt;/p&gt;
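&lt;p&gt;The output is plain JSON-LD, so you can also generate it with nothing but the standard library. A minimal sketch for FAQPage (field names follow schema.org; the question and answer text are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json

# Build a minimal schema.org FAQPage JSON-LD block, the same shape the
# generator emits. Q/A text here is placeholder content.
def faq_jsonld(pairs):
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }, indent=2)

print(faq_jsonld([("Is the tool free?", "Yes, no signup required.")]))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Wrap the result in a &lt;code&gt;script type="application/ld+json"&lt;/code&gt; tag inside your &lt;code&gt;&amp;lt;head&amp;gt;&lt;/code&gt;.&lt;/p&gt;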

&lt;h3&gt;
  
  
  2. LLMs.txt Generator
&lt;/h3&gt;

&lt;p&gt;Input any URL, get &lt;code&gt;llms.txt&lt;/code&gt; + &lt;code&gt;llms-full.txt&lt;/code&gt; following the &lt;a href="https://llmstxt.org" rel="noopener noreferrer"&gt;llmstxt.org&lt;/a&gt; standard. Fetches your sitemap, extracts titles/descriptions, formats everything.&lt;/p&gt;

&lt;p&gt;Why it matters: AI assistants like ChatGPT and Claude check this file when users reference your site. No llms.txt = missed citations.&lt;/p&gt;
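&lt;p&gt;For reference, the skeleton the generator produces follows the llmstxt.org layout: an H1 title, a one-line blockquote summary, then sections of annotated links (the URLs below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;# Example Site

&amp;gt; One-paragraph summary of what the site covers.

## Docs

- [Getting started](https://example.com/start): setup guide
- [API reference](https://example.com/api): endpoint details
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;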

&lt;h3&gt;
  
  
  3. Hreflang Tag Generator
&lt;/h3&gt;

&lt;p&gt;Add language/URL pairs, get self-referential hreflang HTML with x-default. Validates duplicates and missing tags.&lt;/p&gt;

&lt;p&gt;Tiny tool, but saves me 10 minutes every time I add a new language to a site.&lt;/p&gt;
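&lt;p&gt;For a site with English and German versions, the generated tags look like this (placeholder URLs):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&amp;lt;link rel="alternate" hreflang="en" href="https://example.com/en/" /&amp;gt;
&amp;lt;link rel="alternate" hreflang="de" href="https://example.com/de/" /&amp;gt;
&amp;lt;link rel="alternate" hreflang="x-default" href="https://example.com/" /&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;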

&lt;h3&gt;
  
  
  4. Meta Title &amp;amp; Description Analyzer
&lt;/h3&gt;

&lt;p&gt;Pixel-accurate truncation check (Google measures in pixels, not characters). Live SERP preview, keyword density analysis, power word detection. Also flags "Best"/"Top" in titles per Google's Feb 2026 Core Update rules.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Robots.txt Tester
&lt;/h3&gt;

&lt;p&gt;Paste your robots.txt, test any URL path against 15+ user agents — including GPTBot, ClaudeBot, PerplexityBot, and Google-Extended. Shows exactly which rule matched and whether it's Allow or Disallow.&lt;/p&gt;

&lt;p&gt;Built this because blocking AI crawlers is becoming the default, but most people have no idea if their rules actually work.&lt;/p&gt;
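&lt;p&gt;If you want to sanity-check the core matching logic without the UI, Python's standard library ships a robots.txt matcher. My tool adds more user agents and shows which rule matched; this is just the basic allow/disallow check:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from urllib.robotparser import RobotFileParser

# parse() takes the robots.txt body as a list of lines.
robots_txt = """User-agent: GPTBot
Disallow: /private/

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("GPTBot", "/private/report.html"))    # blocked by the GPTBot rule
print(rp.can_fetch("GPTBot", "/blog/post"))              # allowed
print(rp.can_fetch("ClaudeBot", "/private/report.html")) # falls through to the * group
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;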

&lt;h3&gt;
  
  
  6. OG Image Preview
&lt;/h3&gt;

&lt;p&gt;Fetch any URL and see how it renders on Twitter, LinkedIn, Slack, and Discord. Each platform crops differently — this shows all four at once plus detects missing/broken tags.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Decisions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Client-side first&lt;/strong&gt;: 4 of 6 tools run entirely in the browser. No server, no API, no data retention.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server fetch only when needed&lt;/strong&gt;: LLMs.txt and OG Preview need to fetch external URLs (CORS), so they use Next.js API routes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero API cost&lt;/strong&gt;: No OpenAI, no paid APIs. Just &lt;code&gt;fetch()&lt;/code&gt; + regex parsing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic routing&lt;/strong&gt;: One &lt;code&gt;[tool]/page.tsx&lt;/code&gt; handles stubs for unreleased tools; specific routes override when ready.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I'd Do Differently
&lt;/h2&gt;

&lt;p&gt;Schema Generator could use client-side validation against Schema.org spec (currently just generates, doesn't validate fields). Meta Analyzer's pixel width estimation is approximate — a Canvas-based measurement would be more accurate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try Them
&lt;/h2&gt;

&lt;p&gt;All 6 tools are live at &lt;a href="https://openaitoolshub.org/seo-tools" rel="noopener noreferrer"&gt;openaitoolshub.org/seo-tools&lt;/a&gt;. No signup, no API key. Feedback welcome.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;What free SEO tools do you actually use daily? Curious what's missing from the landscape.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>showdev</category>
      <category>sideprojects</category>
      <category>webdev</category>
    </item>
    <item>
      <title>I Tested Cursor 3 Glass for a Week — The Agent-First IDE Is Real, But Not for Everyone</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Sun, 05 Apr 2026 00:07:33 +0000</pubDate>
      <link>https://forem.com/jim_l_efc70c3a738e9f4baa7/i-tested-cursor-3-glass-for-a-week-the-agent-first-ide-is-real-but-not-for-everyone-im0</link>
      <guid>https://forem.com/jim_l_efc70c3a738e9f4baa7/i-tested-cursor-3-glass-for-a-week-the-agent-first-ide-is-real-but-not-for-everyone-im0</guid>
      <description>&lt;p&gt;Cursor dropped version 3 on April 2 with a codename — Glass — and a rebuilt interface that moves the code editor into the passenger seat.&lt;/p&gt;

&lt;p&gt;The pitch: you describe tasks in natural language, AI agents write the code, and you orchestrate. It sounds like marketing copy until you actually open the Agents Window and see three parallel tasks running across different repos simultaneously.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Changed
&lt;/h2&gt;

&lt;p&gt;The old Cursor was a VS Code fork with an AI sidebar. Version 3 is something else entirely. The Agents Window is a separate workspace where each task gets its own context, its own file access, and its own execution thread. You can run local agents or cloud agents — the cloud ones persist even when you close your laptop.&lt;/p&gt;

&lt;p&gt;Design Mode is the other big addition. You can point at a UI element and describe what you want changed. It generates the code, previews the result, and you approve or reject. For React and Next.js projects, this worked surprisingly well in my testing. For anything with complex state management, it struggled.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Good Parts
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Parallel execution is genuine.&lt;/strong&gt; I ran a test where Agent 1 was refactoring a data layer while Agent 2 was building a new API endpoint. They didn't conflict. The context isolation means each agent sees a consistent snapshot of the codebase, and Cursor handles merging the changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-repo support.&lt;/strong&gt; You can open multiple repositories in a single workspace and run agents across them. For monorepo-heavy teams, this matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The prompt box as primary interface.&lt;/strong&gt; Instead of navigating menus and panels, you describe what you want. "Add error handling to all API routes in /src/api" — and an agent spins up, creates a plan, and starts executing. This felt natural after about 20 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Honest Problems
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Context window limits hit fast.&lt;/strong&gt; Large codebases — anything over roughly 50K lines — caused agents to lose track of earlier instructions. I had to break tasks into smaller chunks manually, which somewhat defeats the purpose.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud agents are slow.&lt;/strong&gt; Local agents respond in seconds. Cloud agents take 30-90 seconds to start, and they run on Cursor's infrastructure. If their servers are loaded (which happened twice during my week of testing), everything stalls.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Price jumped.&lt;/strong&gt; Pro is still $20/month, but the Business tier at $40/month is where you get unlimited cloud agent hours. The free tier is now almost unusable for real work — you get 5 agent sessions per day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It is no longer a code editor.&lt;/strong&gt; If you want fine-grained control over your code, Cursor 3 fights you. The interface prioritizes agent delegation over manual editing. Some developers will hate this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who Should Care
&lt;/h2&gt;

&lt;p&gt;If you manage a team shipping features on tight timelines, Cursor 3's parallel agents could save real hours. If you are a solo developer who enjoys writing code, this might feel like a solution to a problem you don't have.&lt;/p&gt;

&lt;p&gt;The $2 billion ARR number tells you Cursor found its market. Whether that market includes you depends on how much of your coding you are willing to hand off to agents that are good — but not perfect.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I test AI coding tools as part of my workflow. Previously covered Claude Code, Windsurf, and OpenCode. All opinions are from actual project use, not benchmark screenshots.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;For my full review of Cursor 3 Glass and how it compares to other AI coding tools, see &lt;a href="https://www.openaitoolshub.org/en/blog/cursor-3-agent-first-review" rel="noopener noreferrer"&gt;this detailed comparison&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>productivity</category>
      <category>tooling</category>
    </item>
    <item>
      <title>My 3-Month Startup Directory Submission Journey — What Actually Moved the Needle</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Fri, 03 Apr 2026 23:10:02 +0000</pubDate>
      <link>https://forem.com/jim_l_efc70c3a738e9f4baa7/my-3-month-startup-directory-submission-journey-what-actually-moved-the-needle-gef</link>
      <guid>https://forem.com/jim_l_efc70c3a738e9f4baa7/my-3-month-startup-directory-submission-journey-what-actually-moved-the-needle-gef</guid>
      <description>&lt;p&gt;Over the last few months I submitted five websites to every free startup directory I could find. Not as a theoretical exercise — I needed backlinks. My domain rating was stuck at 20 and organic traffic was flat.&lt;/p&gt;

&lt;p&gt;Here is what actually happened.&lt;/p&gt;

&lt;h2&gt;
  
  
  Month 1: The Naive Phase
&lt;/h2&gt;

&lt;p&gt;I found a few GitHub repos listing 300+ directories and started submitting to everything. No filtering, no strategy. Just fill form, click submit, next.&lt;/p&gt;

&lt;p&gt;Success rate: roughly 40%.&lt;/p&gt;

&lt;p&gt;The other 60% was a mix of dead sites (404, parked domains, expired Bubble.io plans), paid-only directories pretending to be free, and forms that silently failed. I spent about 15 hours that first month and submitted to maybe 80 directories. Of those, about 30 actually listed my sites.&lt;/p&gt;

&lt;p&gt;The worst time wasters were directories running on Bubble.io with expired plans. They look legit until you hit submit and get a deployment error. I counted 12 of these in one week.&lt;/p&gt;

&lt;h2&gt;
  
  
  Month 2: Getting Strategic
&lt;/h2&gt;

&lt;p&gt;I started sorting directories by Ahrefs DR before submitting. Anything below DR 20 went to the bottom of the list. DR 50+ got done first.&lt;/p&gt;

&lt;p&gt;Three discoveries changed my approach:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Blog comments work.&lt;/strong&gt; I found a WordPress blog with DR 63 that gives dofollow links through the URL field in blog comments. One genuine comment with my website URL in the website field. No review process, no waiting. This single discovery was worth more than 20 low-DR directory submissions combined.&lt;/p&gt;

&lt;p&gt;I eventually compiled the full list of verified directories into a &lt;a href="https://openaitoolshub.org/en/blog/verified-startup-directories-submission-guide" rel="noopener noreferrer"&gt;startup directory submission guide&lt;/a&gt; with DR ratings and submission notes for each one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Profile backlinks are underrated.&lt;/strong&gt; Crunchbase (DR 91), Disqus (DR 91), StackShare (DR 89) — creating a profile on each of these takes under 10 minutes and gives you a link from a domain most people would pay good money for. Nobody talks about this because it is not exciting. But it works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Badge exchange is worth it.&lt;/strong&gt; Some directories like twelve.tools (DR 80) and wired.business (DR 73) require you to put a small badge in your site footer. In exchange you get a dofollow link from a high-DR domain. The math works out heavily in your favor.&lt;/p&gt;

&lt;h2&gt;
  
  
  Month 3: The Numbers
&lt;/h2&gt;

&lt;p&gt;After three months of systematic directory work across five websites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Domain rating&lt;/strong&gt;: 20 to 29 (Ahrefs)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Referring domains&lt;/strong&gt;: 15 to 72&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Directories submitted&lt;/strong&gt;: 200+&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Actually listed&lt;/strong&gt;: roughly 110&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dead or fake&lt;/strong&gt;: 60+&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Paid-only (despite claiming free)&lt;/strong&gt;: 30+&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dofollow confirmed&lt;/strong&gt;: about 70&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The directories that consistently showed up fastest in Ahrefs backlink reports: SaaSHub, ExactSeek, sitelike.org, twelve.tools, and Crunchbase profiles. Most others took 2-4 weeks to get crawled.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Would Tell Someone Starting Today
&lt;/h2&gt;

&lt;p&gt;Start with DR 50+ directories. The ROI on low-DR directories is almost zero for SEO purposes.&lt;/p&gt;

&lt;p&gt;Batch your submissions to 5-10 per day. Some directories share IP-tracking infrastructure and will flag rapid-fire submissions.&lt;/p&gt;

&lt;p&gt;Keep a spreadsheet. Track: directory name, DR, submit URL, whether you need to log in, CAPTCHA type, and submission date. You will forget what you already submitted otherwise.&lt;/p&gt;

&lt;p&gt;Do not pay for directory submissions. Every directory worth submitting to has a free tier. The paid-only directories at $29-149 per listing are not worth it when free alternatives with similar or higher DR exist.&lt;/p&gt;
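&lt;p&gt;To make the batching and tracking advice concrete, here is a minimal sketch in Python. The CSV columns and function names are illustrative, not part of any real tool: it loads a tracking spreadsheet and picks each day's batch of unsubmitted directories, highest DR first.&lt;/p&gt;

```python
import csv
from dataclasses import dataclass

@dataclass
class Directory:
    name: str
    dr: int            # Ahrefs domain rating
    submit_url: str
    needs_login: bool
    captcha: str       # e.g. "none", "text", "image"
    submitted_on: str  # ISO date, empty string if not yet submitted

def load_tracker(path):
    """Read the tracking CSV into Directory records."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return [
        Directory(
            name=r["name"],
            dr=int(r["dr"]),
            submit_url=r["submit_url"],
            needs_login=r["needs_login"] == "yes",
            captcha=r["captcha"],
            submitted_on=r.get("submitted_on", ""),
        )
        for r in rows
    ]

def todays_batch(directories, batch_size=10):
    """Pick the next batch: unsubmitted entries, highest DR first."""
    pending = [d for d in directories if not d.submitted_on]
    pending.sort(key=lambda d: d.dr, reverse=True)
    return pending[:batch_size]
```

&lt;p&gt;A plain spreadsheet does the same job; the point is simply that sorting by DR and capping the daily batch are two lines of logic once the list lives in a structured file.&lt;/p&gt;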

&lt;h2&gt;
  
  
  My Shortlist: Start With These 10
&lt;/h2&gt;

&lt;p&gt;If I had to pick just 10 directories to submit to first:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Crunchbase (DR 91) — profile with website link&lt;/li&gt;
&lt;li&gt;twelve.tools (DR 80) — badge exchange, dofollow&lt;/li&gt;
&lt;li&gt;ExactSeek (DR 73) — simple form, dofollow, 1/day limit&lt;/li&gt;
&lt;li&gt;wired.business (DR 73) — badge exchange, dofollow&lt;/li&gt;
&lt;li&gt;sitelike.org (DR 71) — text CAPTCHA, dofollow&lt;/li&gt;
&lt;li&gt;Future Tools (DR 69) — AI tools focus&lt;/li&gt;
&lt;li&gt;SaaSHub (DR 55) — URL-only form, auto-detect&lt;/li&gt;
&lt;li&gt;SubmissionWebDirectory (DR 61) — image CAPTCHA&lt;/li&gt;
&lt;li&gt;Startup Inspire (DR 48) — multi-category&lt;/li&gt;
&lt;li&gt;Mamavation (DR 63) — blog comment, instant dofollow&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I documented over 200 directories with DR scores, submit URLs, link types, and specific notes about CAPTCHAs and gotchas. That resource covers everything in detail if you want to go deeper.&lt;/p&gt;

&lt;p&gt;The honest truth is that directory submissions alone will not get you to DR 50. But they are the foundation. Combined with profile backlinks, blog comment links, and content that naturally attracts links, the compound effect adds up faster than most people expect.&lt;/p&gt;

</description>
      <category>buildinpublic</category>
      <category>devjournal</category>
      <category>marketing</category>
      <category>startup</category>
    </item>
  </channel>
</rss>
