<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Naweid Hassan</title>
    <description>The latest articles on Forem by Naweid Hassan (@naweid_hassan_d7d03584c2e).</description>
    <link>https://forem.com/naweid_hassan_d7d03584c2e</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3850287%2Fd698d39c-3196-423e-8dff-162741cda867.png</url>
      <title>Forem: Naweid Hassan</title>
      <link>https://forem.com/naweid_hassan_d7d03584c2e</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/naweid_hassan_d7d03584c2e"/>
    <language>en</language>
    <item>
      <title>I Built a Chrome Extension That Fact-Checks AI Outputs: Here's How It Works</title>
      <dc:creator>Naweid Hassan</dc:creator>
      <pubDate>Mon, 30 Mar 2026 01:44:50 +0000</pubDate>
      <link>https://forem.com/naweid_hassan_d7d03584c2e/i-built-a-chrome-extension-that-fact-checks-ai-outputs-heres-how-it-works-d7h</link>
      <guid>https://forem.com/naweid_hassan_d7d03584c2e/i-built-a-chrome-extension-that-fact-checks-ai-outputs-heres-how-it-works-d7h</guid>
      <description>&lt;p&gt;About 27% of AI outputs contain fabricated claims. I kept running into this problem — ChatGPT confidently citing studies that don't exist, Gemini inventing statistics, Claude hallucinating facts. So I built a tool to catch it.&lt;/p&gt;

&lt;h2&gt;What is Aretify?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.aretify.com" rel="noopener noreferrer"&gt;Aretify&lt;/a&gt; is a Chrome extension that adds a "Verify" button to AI responses on ChatGPT, Claude, and Gemini. One click extracts every factual claim and checks it against real sources.&lt;/p&gt;

&lt;h2&gt;The Tech Stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; Next.js on Vercel&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; FastAPI on Railway&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database:&lt;/strong&gt; Supabase (PostgreSQL)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cache:&lt;/strong&gt; Upstash Redis&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extension:&lt;/strong&gt; Chrome Manifest V3, side panel architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;How Verification Works&lt;/h2&gt;

&lt;p&gt;When you click "Verify," here's what happens:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Claim Extraction&lt;/strong&gt; — The AI response is sent to Groq, which quickly isolates each discrete factual claim.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Evidence Search&lt;/strong&gt; — Each claim is searched against 15+ evidence sources simultaneously: Semantic Scholar, PubMed, OpenAlex, Crossref, Europe PMC, Wikipedia, Wikidata, Google Fact Check, GDELT, NewsAPI, Brave Search, SerpAPI, and more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scoring&lt;/strong&gt; — Claims are scored on how well they match the evidence. Each gets a status: Verified, Partial Match, or Unverified. The overall response gets an AretifyScore from 0 to 100.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ethics Analysis&lt;/strong&gt; — This is the part I'm most proud of. My father is a philosophy professor, and he developed a 35-page framework covering 8 philosophical traditions (utilitarian, deontological, virtue ethics, care ethics, and more). Every verification runs through this ethics engine to evaluate the ethical implications of AI outputs — not just accuracy.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
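A minimal sketch of steps 1–3 above, under stated assumptions: the source names, stub scores, and status thresholds are invented for illustration; only the shape of the pipeline (extract claims, search sources in parallel, keep the best match, average into a 0–100 score) comes from the article.

```python
import asyncio

# Hypothetical source list — the real pipeline queries 15+ live APIs.
SOURCES = ["semantic_scholar", "pubmed", "wikipedia"]

async def search_source(source: str, claim: str) -> float:
    """Return a 0-1 evidence-match score for one claim against one source (stubbed)."""
    await asyncio.sleep(0)  # stand-in for a real HTTP call
    return 0.8 if source == "wikipedia" else 0.4

def status_for(score: float) -> str:
    """Map a match score to the three statuses the article describes (thresholds assumed)."""
    if score >= 0.75:
        return "Verified"
    if score >= 0.4:
        return "Partial Match"
    return "Unverified"

async def verify(claims: list[str]) -> int:
    """Search all sources per claim concurrently, keep the best match,
    and average the per-claim scores into a 0-100 overall score."""
    best_scores = []
    for claim in claims:
        scores = await asyncio.gather(*(search_source(s, claim) for s in SOURCES))
        best_scores.append(max(scores))  # strongest supporting evidence wins
    return round(100 * sum(best_scores) / len(best_scores))

print(asyncio.run(verify(["The Eiffel Tower is in Paris."])))  # → 80
```

The `asyncio.gather` call is what keeps 15+ sources from serializing into a slow request chain.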

&lt;h2&gt;What I Learned Building This&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;API redundancy matters.&lt;/strong&gt; With 15+ evidence sources, some are always down. I implemented circuit breakers for unreliable APIs (Tavily, GDELT) so the system degrades gracefully instead of failing.&lt;/p&gt;
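The graceful-degradation idea can be sketched as a minimal circuit breaker. This is an illustrative toy, not Aretify's actual implementation, and the thresholds are made up: after a run of consecutive failures the source is skipped for a cooldown period, then given one trial call.

```python
import time

class CircuitBreaker:
    """Skip a flaky source after repeated failures; retry after a cooldown."""

    def __init__(self, max_failures: int = 3, cooldown: float = 60.0):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at: float | None = None

    def allow(self) -> bool:
        """True if the caller should attempt the API call."""
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.cooldown:
            self.opened_at = None  # half-open: permit one trial call
            self.failures = 0
            return True
        return False

    def record(self, ok: bool) -> None:
        """Report the outcome of a call; open the circuit after too many failures."""
        if ok:
            self.failures = 0
            return
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = time.monotonic()

breaker = CircuitBreaker(max_failures=2, cooldown=30.0)
for _ in range(3):
    if breaker.allow():
        breaker.record(ok=False)  # simulate a flaky API failing
print(breaker.allow())  # → False (circuit open, source is skipped, pipeline continues)
```

A failed source then contributes nothing to the evidence set instead of blocking the whole verification.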

&lt;p&gt;&lt;strong&gt;Chrome extension development is painful.&lt;/strong&gt; Manifest V3 restrictions, CSP issues across the different AI platforms, and the discovery that &lt;code&gt;.ico&lt;/code&gt; icon files caused context invalidation in MV3 (Chrome wants &lt;code&gt;.png&lt;/code&gt;) — each cost hours of debugging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Canonical URLs matter from day one.&lt;/strong&gt; I launched with inconsistent www vs non-www URLs and Google was confused about which pages to index. Fix this before you launch, not after.&lt;/p&gt;
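As a small illustration of the fix (a hypothetical helper; in production this would be a 301 redirect at the edge or in framework middleware), every apex-domain URL gets rewritten to the one canonical host:

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url: str, canonical_host: str = "www.aretify.com") -> str:
    """Rewrite apex-domain URLs to the canonical www host so search engines
    see exactly one URL per page."""
    parts = urlsplit(url)
    if parts.netloc == "aretify.com":
        parts = parts._replace(netloc=canonical_host)
    return urlunsplit(parts)

print(canonicalize("https://aretify.com/pricing"))  # → https://www.aretify.com/pricing
```

Pair this with a `rel="canonical"` tag on each page and the ambiguity disappears.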

&lt;h2&gt;Try It&lt;/h2&gt;

&lt;p&gt;The extension is free — 10 verifications per day, no credit card required.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Website: &lt;a href="https://www.aretify.com" rel="noopener noreferrer"&gt;aretify.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Chrome Web Store: &lt;a href="https://chromewebstore.google.com/detail/aretify-ai-output-verifie/fepnnhblfpfcoecdnmcccfjajfiijpmk" rel="noopener noreferrer"&gt;Install extension&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'd love feedback from the dev community — especially on the verification pipeline and extension UX. What would you want from a tool like this?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
