<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Watson Foglift</title>
    <description>The latest articles on Forem by Watson Foglift (@watsonfoglift).</description>
    <link>https://forem.com/watsonfoglift</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3828417%2F49060996-1e84-4fbe-8e59-af5e06f3f7ae.png</url>
      <title>Forem: Watson Foglift</title>
      <link>https://forem.com/watsonfoglift</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/watsonfoglift"/>
    <language>en</language>
    <item>
      <title>AI Citations From Google's Top 10 Dropped From 76% to 38%. Here's What Actually Drives AI Visibility.</title>
      <dc:creator>Watson Foglift</dc:creator>
      <pubDate>Sat, 11 Apr 2026 03:09:42 +0000</pubDate>
      <link>https://forem.com/watsonfoglift/ai-citations-from-googles-top-10-dropped-from-76-to-38-heres-what-actually-drives-ai-24i9</link>
      <guid>https://forem.com/watsonfoglift/ai-citations-from-googles-top-10-dropped-from-76-to-38-heres-what-actually-drives-ai-24i9</guid>
      <description>&lt;p&gt;There's a stat circulating that should worry anyone whose traffic strategy depends on Google rankings: &lt;strong&gt;AI Overview citations from top-10 organic pages dropped from 76% to 38%&lt;/strong&gt; (Seer Interactive, analysis of 863K keywords, Feb 2026).&lt;/p&gt;

&lt;p&gt;That means Google's AI Overviews are now pulling the majority of their cited sources from pages that &lt;em&gt;don't&lt;/em&gt; rank in the traditional top 10.&lt;/p&gt;

&lt;p&gt;If you've spent years optimizing for Google rankings, this doesn't mean your work was wasted. But it does mean the rules for getting cited by AI are different from the rules for ranking in search.&lt;/p&gt;

&lt;h2&gt;
  
  
  The data: Google rank barely predicts AI citation
&lt;/h2&gt;

&lt;p&gt;Three independent studies paint a consistent picture:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Study&lt;/th&gt;
&lt;th&gt;Finding&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Seer Interactive (863K keywords, 2026)&lt;/td&gt;
&lt;td&gt;AI Overview citations from top-10 pages: 76% → 38%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Chatoptic (2025)&lt;/td&gt;
&lt;td&gt;Correlation between Google rank and ChatGPT citation: &lt;strong&gt;0.034&lt;/strong&gt; (essentially zero)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Chatoptic (2025)&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;28% of the most-cited domains&lt;/strong&gt; in AI responses had zero traditional Google visibility&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That 0.034 correlation is striking. For context, a correlation of 1.0 means a perfect linear relationship, and anything below 0.1 is commonly treated as negligible. Google rank and AI citation are, statistically, almost independent variables.&lt;/p&gt;

&lt;p&gt;And the 28% figure is arguably the most important: more than a quarter of the domains AI engines prefer to cite have &lt;em&gt;no traditional search visibility at all&lt;/em&gt;. These aren't SEO winners — they're authority winners.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually drives AI citations
&lt;/h2&gt;

&lt;p&gt;If Google rank doesn't predict AI citation, what does? Five factors have the strongest empirical support:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Referring domains (strongest predictor)
&lt;/h3&gt;

&lt;p&gt;An SE Ranking study of 129,000 domains found that referring domains — the number of unique sites linking to you — is the single strongest predictor of AI citation. This makes sense: AI models learn from web data, and links are the web's native authority signal.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Content position and structure
&lt;/h3&gt;

&lt;p&gt;Superlines' 2026 citation analysis found that &lt;strong&gt;44.2% of all LLM citations come from the first 30% of article text&lt;/strong&gt;. AI engines extract from the top of your content, not the bottom. Front-load your strongest claims, data, and definitions.&lt;/p&gt;

&lt;p&gt;Comparison pages with 3+ data tables earn &lt;strong&gt;25.7% more citations&lt;/strong&gt; (Superlines, 2026).&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Freshness
&lt;/h3&gt;

&lt;p&gt;Seer Interactive found that &lt;strong&gt;71% of ChatGPT citations&lt;/strong&gt; reference content published between 2023 and 2025. Digital Bloom's analysis showed pages updated within 30 days get &lt;strong&gt;3.2x more AI citations&lt;/strong&gt; than stale equivalents.&lt;/p&gt;

&lt;p&gt;If your best content was last updated in 2023, it's losing ground to recently published competitors.&lt;/p&gt;
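One way to operationalize the 30-day window is to read each page's Last-Modified response header and flag anything older than the threshold. A minimal TypeScript sketch — the helper names are mine, and since not every server sends a usable Last-Modified header, a missing header is reported as unknown rather than fresh:

```typescript
// Flag pages whose Last-Modified date exceeds a staleness threshold.
// The 30-day default mirrors the Digital Bloom freshness window cited
// above; the function names are illustrative, not from any library.

function daysSince(lastModified: string, now: Date = new Date()): number {
  const modified = new Date(lastModified);
  return (now.getTime() - modified.getTime()) / (1000 * 60 * 60 * 24);
}

function isStale(lastModified: string, thresholdDays = 30, now?: Date): boolean {
  return daysSince(lastModified, now) > thresholdDays;
}

// Usage against a live page (sketch): HEAD request, read the header.
async function auditFreshness(url: string): Promise<boolean | null> {
  const res = await fetch(url, { method: "HEAD" });
  const header = res.headers.get("last-modified");
  return header ? isStale(header) : null; // null: server sent no date
}
```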

&lt;h3&gt;
  
  
  4. Statistics and original data
&lt;/h3&gt;

&lt;p&gt;Aggarwal et al.'s peer-reviewed GEO study (KDD 2024) found that incorporating statistics into content increased AI visibility by &lt;strong&gt;33%&lt;/strong&gt; and adding quotations increased it by &lt;strong&gt;41%&lt;/strong&gt;. For lower-ranked sites, citing external sources boosted visibility by &lt;strong&gt;115%&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;AI engines favor content that makes specific, verifiable claims over content that speaks in generalities.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Brand web mentions
&lt;/h3&gt;

&lt;p&gt;Brand mentions across the web account for roughly &lt;strong&gt;35% of citation weight&lt;/strong&gt; (SE Ranking, 129K domains). This isn't just backlinks — it's any reference to your brand on other domains. Being discussed, recommended, and referenced on third-party sites signals to AI models that you're a trusted entity.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means for your strategy
&lt;/h2&gt;

&lt;p&gt;The practical implication: &lt;strong&gt;building for AI visibility is not a refinement of SEO — it's a parallel discipline.&lt;/strong&gt; Some tactics overlap (quality content, good structure), but the ranking factors diverge significantly.&lt;/p&gt;

&lt;p&gt;Here's a checklist based on the data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] &lt;strong&gt;Front-load your content.&lt;/strong&gt; Put your best data, definitions, and conclusions in the first third of every page.&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Update key pages monthly.&lt;/strong&gt; The 3.2x freshness multiplier is real. Set a calendar reminder.&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Add specific data to every claim.&lt;/strong&gt; Not "conversion rates increase" but "conversion rates increased 23% over 6 months (Source, Year)."&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Build referring domains.&lt;/strong&gt; The hardest and highest-leverage factor. Original research, data, and tools that others want to cite.&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Monitor your AI presence directly.&lt;/strong&gt; Don't assume Google rankings translate. Query ChatGPT, Perplexity, and Claude with questions your content should answer and check if you appear.&lt;/li&gt;
&lt;/ul&gt;
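That last check is easy to script. Below is a minimal TypeScript sketch: a pure mentionsBrand() helper plus one engine probe against OpenAI's chat completions API. The model name, the single-question probe, and the brand-matching approach are illustrative assumptions, not a complete monitoring setup:

```typescript
// Spot-check whether an AI engine's answer mentions your brand.
// The model name and probe design are assumptions for illustration;
// adapt to whichever engines you monitor.

// Pure helper: case-insensitive whole-word brand match.
function mentionsBrand(answer: string, brand: string): boolean {
  const escaped = brand.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp(`\\b${escaped}\\b`, "i").test(answer);
}

// Hedged sketch of one engine check via OpenAI's chat completions API.
async function checkOpenAI(question: string, brand: string, apiKey: string): Promise<boolean> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({
      model: "gpt-4o-mini", // assumed model; swap for whatever you use
      messages: [{ role: "user", content: question }],
    }),
  });
  const data = await res.json();
  return mentionsBrand(data.choices?.[0]?.message?.content ?? "", brand);
}
```

Run the same questions weekly and diff the results; a single probe proves nothing, but a trend across dozens of questions does.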

&lt;p&gt;The slide from 76% to 38% hasn't leveled off yet. The window to build AI authority before your competitors do is closing.&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Seer Interactive — AI Overview citations from top-10 pages dropped from 76% to 38% (863K keywords, Feb 2026); 71% of ChatGPT citations from 2023-2025 content; 46% of AI-powered interactions use integrated search&lt;/li&gt;
&lt;li&gt;Chatoptic — 0.034 correlation between Google rank and ChatGPT citation; 28% of most-cited domains have zero Google visibility&lt;/li&gt;
&lt;li&gt;SE Ranking — referring domains as strongest citation predictor (129,000 domain study); brand mentions = 35% of citation weight&lt;/li&gt;
&lt;li&gt;Superlines, "AI Search Statistics 2026" — 44.2% of citations from first 30% of article text; comparison tables with 3+ tables earn 25.7% more citations&lt;/li&gt;
&lt;li&gt;Digital Bloom, "Content Freshness and AI Citation," 2025 — 30-day update = 3.2x citation lift&lt;/li&gt;
&lt;li&gt;Aggarwal et al., "GEO: Generative Engine Optimization," KDD 2024 — statistics +33%, quotations +41%, source citation +115% for lower-ranked sites&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>seo</category>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>FAQ Schema Gets You 2.7x More AI Citations. But Not for the Reason You Think.</title>
      <dc:creator>Watson Foglift</dc:creator>
      <pubDate>Fri, 10 Apr 2026 00:12:23 +0000</pubDate>
      <link>https://forem.com/watsonfoglift/faq-schema-gets-you-27x-more-ai-citations-but-not-for-the-reason-you-think-5a04</link>
      <guid>https://forem.com/watsonfoglift/faq-schema-gets-you-27x-more-ai-citations-but-not-for-the-reason-you-think-5a04</guid>
      <description>&lt;p&gt;A 2025 Relixir study found that pages with FAQPage schema achieve a 41% AI citation rate versus 15% without — roughly 2.7x higher. That's a real number from a real study.&lt;/p&gt;

&lt;p&gt;But here's the thing: &lt;strong&gt;AI models don't parse your JSON-LD as structured data.&lt;/strong&gt; They tokenize it as raw text, the same way they'd read a paragraph.&lt;/p&gt;

&lt;p&gt;We just added FAQ schema to 36 pages on our site. Before we did, we wanted to understand &lt;em&gt;why&lt;/em&gt; it works — because the mechanism matters more than the correlation. Here's what we found.&lt;/p&gt;

&lt;h2&gt;
  
  
  The experiment that changed how I think about schema
&lt;/h2&gt;

&lt;p&gt;In February 2026, SEO researcher Mark Williams-Cook ran a controlled experiment. He created a page for a fake company and embedded an address &lt;em&gt;exclusively&lt;/em&gt; inside invalid, made-up JSON-LD schema — not in any visible page content. The schema type didn't even exist.&lt;/p&gt;

&lt;p&gt;Both ChatGPT and Perplexity successfully extracted and returned the address.&lt;/p&gt;

&lt;p&gt;That tells us two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;LLMs &lt;em&gt;can&lt;/em&gt; read JSON-LD — they tokenize it like any other text on the page.&lt;/li&gt;
&lt;li&gt;LLMs &lt;em&gt;don't&lt;/em&gt; parse the semantic structure of schema — they treated an invalid schema type identically to a valid one.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is a crucial distinction. When Google processes your FAQPage schema, it parses the structure and feeds it into the Knowledge Graph. When ChatGPT reads your page, it just... reads all the text, including the JSON-LD block, as tokens.&lt;/p&gt;

&lt;h2&gt;
  
  
  So why does FAQ schema correlate with higher citation rates?
&lt;/h2&gt;

&lt;p&gt;If LLMs don't understand schema structure, why the 2.7x difference? Four mechanisms are at play:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The visible Q&amp;amp;A content (the biggest factor)
&lt;/h3&gt;

&lt;p&gt;Every good FAQ schema implementation includes a visible FAQ section on the page. That visible content — clear questions with concise answers — is &lt;em&gt;exactly&lt;/em&gt; the format LLMs are optimized to extract. When ChatGPT is looking for "What is the difference between X and Y?", a visible FAQ section with that exact question is an easy win.&lt;/p&gt;

&lt;p&gt;This is the mechanism that actually drives most of the citation lift. Not the JSON-LD — the content.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The JSON-LD as readable text
&lt;/h3&gt;

&lt;p&gt;Since LLMs tokenize JSON-LD as text, your FAQPage schema becomes an additional, cleanly formatted representation of your content. A well-structured JSON-LD block repeats your key Q&amp;amp;A pairs in a format that's easy for attention mechanisms to pick up on.&lt;/p&gt;

&lt;p&gt;Think of it as giving the model a second, structured summary of your content — on the same page.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Google and Bing's Knowledge Graph pipeline
&lt;/h3&gt;

&lt;p&gt;Fabrice Canel, Principal Product Manager at Bing, stated at SMX Munich 2025: "Schema markup helps Microsoft's LLMs understand your content." Google's Search Relations team made similar statements at Search Central Live Madrid (April 2025).&lt;/p&gt;

&lt;p&gt;For AI Overviews and Bing Copilot specifically, schema &lt;em&gt;is&lt;/em&gt; parsed structurally. These platforms sit on Knowledge Graph infrastructure that standalone chat LLMs don't have. So FAQ schema has a direct effect on two of the six major AI answer surfaces.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Selection bias (the uncomfortable one)
&lt;/h3&gt;

&lt;p&gt;Sites that implement FAQ schema tend to be sites that care about content quality, update frequently, and invest in SEO. The 2.7x correlation partially reflects the overall quality of sites that bother with schema — not just the schema itself. No study I've found controls for this.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we actually built
&lt;/h2&gt;

&lt;p&gt;We needed FAQ schema on 36 pages: 24 comparison pages and 12 blog posts. Here's the approach:&lt;/p&gt;

&lt;h3&gt;
  
  
  For comparison pages (dynamic template)
&lt;/h3&gt;

&lt;p&gt;Our comparison pages use a shared template. We generate 5 FAQ items per page from the existing comparison data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;faqs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;question&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`What is the main difference between Foglift and &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;?`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;heroDescription&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;question&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`How does &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; pricing compare to Foglift?`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; starts at &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;competitorStartPrice&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;. 
             Foglift offers a free plan with full website audits, 
             then paid monitoring from $49/month.`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="c1"&gt;// ... 3 more questions generated from page data&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;faqSchema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@context&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://schema.org&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;FAQPage&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;mainEntity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;faqs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;faq&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Question&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;faq&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;question&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;acceptedAnswer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Answer&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;faq&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;})),&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key decisions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Generate from existing data&lt;/strong&gt; — no hardcoded FAQ text. If pricing changes, the FAQs update automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;5 questions per page&lt;/strong&gt; — enough for depth, not so many that it feels like keyword stuffing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plain text answers&lt;/strong&gt; — strip HTML before injecting into JSON-LD.&lt;/li&gt;
&lt;/ul&gt;
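That last point deserves a snippet, because answers assembled from rendered template data often carry markup, and raw tags inside JSON-LD answer text fail rich-results validation. A minimal stripping step, assuming the input is our own trusted template output (use a real HTML parser for anything untrusted):

```typescript
// Strip tags and collapse whitespace before injecting answer text into
// JSON-LD. Regex stripping is acceptable here only because the input is
// our own template output; prefer a real parser for arbitrary HTML.

function stripHtml(html: string): string {
  return html
    .replace(/<[^>]*>/g, " ")   // drop tags, leaving a space in their place
    .replace(/&amp;/g, "&")     // unescape the common entities our templates emit
    .replace(/&lt;/g, "<")
    .replace(/&gt;/g, ">")
    .replace(/\s+/g, " ")       // collapse runs of whitespace
    .trim();
}
```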

&lt;h3&gt;
  
  
  For blog posts (static per-post)
&lt;/h3&gt;

&lt;p&gt;Each blog post gets a hand-written &lt;code&gt;faqJsonLd&lt;/code&gt; constant with 4 Q&amp;amp;As specific to the post's topic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;faqJsonLd&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@context&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://schema.org&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;FAQPage&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;mainEntity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Question&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;How do AI search engines decide which websites to cite?&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;acceptedAnswer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Answer&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;A 2025 SE Ranking study of 129,000 domains found that brand web mentions are the strongest predictor (35% weight), followed by referring domains, content freshness, and content depth.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="c1"&gt;// ... 3 more with specific data&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key decisions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data-backed answers only&lt;/strong&gt; — every FAQ answer cites a specific source with sample size and year.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;4 per post&lt;/strong&gt; — we tried more, but after 4 the quality drops and answers start restating each other.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The visible section (this is the part that actually matters)
&lt;/h3&gt;

&lt;p&gt;Both implementations render a visible accordion FAQ section that matches the schema:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h2&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Frequently Asked Questions&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h2&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;faqJsonLd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mainEntity&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;faq&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;details&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;open&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;faq&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;faq&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;acceptedAnswer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;details&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;))}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We use details/summary instead of custom accordion components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zero JavaScript — works with SSR/SSG&lt;/li&gt;
&lt;li&gt;Semantic HTML — details has built-in accessibility&lt;/li&gt;
&lt;li&gt;First item open by default — gives crawlers immediate visible content&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What we measured
&lt;/h2&gt;

&lt;p&gt;Before adding FAQ schema + visible FAQ sections, our AEO (Answer Engine Optimization) scores looked like this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Page type&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;th&gt;AEO score before&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Comparison pages&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;td&gt;41-61&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Blog posts&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;63-66&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Homepage&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;88&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The homepage scored highest because it already had structured FAQ content. The comparison pages scored lowest because they had minimal structured data.&lt;/p&gt;

&lt;p&gt;After the upgrade, we're waiting on a deploy to measure the after. Based on the research, here's what we expect:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AEO improvement:&lt;/strong&gt; We expect comparison pages to jump from 41-61 to the 75-85 range.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI citation probability:&lt;/strong&gt; Too early to measure directly. Our AI Visibility Check baseline shows 0/35 engine checks mentioning us — so we'll know if it moves.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What we DON'T expect:&lt;/strong&gt; A 44% citation lift from schema alone. (If you're curious why, &lt;a href="https://dev.to/watsonfoglift/that-44-ai-citation-lift-from-schema-markup-stat-i-tried-to-find-the-primary-source-2hm4"&gt;I wrote about that&lt;/a&gt;.)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The takeaway
&lt;/h2&gt;

&lt;p&gt;FAQ schema works. The 2.7x correlation is real. But the mechanism is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Visible Q&amp;amp;A content&lt;/strong&gt; is what LLMs actually extract (biggest effect)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JSON-LD gives LLMs a second text representation&lt;/strong&gt; of your key Q&amp;amp;As (smaller but real effect)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Google/Bing Knowledge Graph&lt;/strong&gt; parses schema structurally for AI Overviews (platform-specific effect)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Selection bias&lt;/strong&gt; inflates the correlation (unmeasured confounder)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you only add the JSON-LD without visible FAQ content, you're capturing effects #2 and #3 but missing #1 — which is the largest factor. If you only add visible FAQ content without schema, you get #1 but miss #2 and #3.&lt;/p&gt;

&lt;p&gt;The move is both layers. That's what we built.&lt;/p&gt;
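For completeness, here's the step the snippets above stop short of: serializing the schema object into the script tag that ships with the page. This is a sketch of the general shape, not our exact production code; the one non-obvious detail is escaping angle brackets in the JSON, so a stray closing script tag inside an answer can't terminate the block early:

```typescript
// Serialize an FAQ list into the JSON-LD script tag that ships with the
// page. The schema shape follows the schema.org FAQPage examples earlier
// in the post; the function name is mine.

type Faq = { question: string; answer: string };

function faqScriptTag(faqs: Faq[]): string {
  const schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map((faq) => ({
      "@type": "Question",
      name: faq.question,
      acceptedAnswer: { "@type": "Answer", text: faq.answer },
    })),
  };
  // Escape "<" so answer text can never close the script element early.
  const json = JSON.stringify(schema).replace(/</g, "\\u003c");
  return `<script type="application/ld+json">${json}</script>`;
}
```

The visible accordion then renders from the same object, which keeps the two layers from drifting apart.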




&lt;p&gt;&lt;em&gt;We built &lt;a href="https://foglift.io" rel="noopener noreferrer"&gt;Foglift&lt;/a&gt; to measure exactly this kind of thing — AEO scores, AI visibility, and the gap between your SEO readiness and your AI search readiness. The free scan shows you where your FAQ, schema, and content depth stand.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Relixir (2025) — FAQPage schema citation rate study: 41% vs 15% citation rate&lt;/li&gt;
&lt;li&gt;Mark Williams-Cook (February 2026) — Controlled experiment on LLM JSON-LD tokenization&lt;/li&gt;
&lt;li&gt;Fabrice Canel, Bing Principal PM, SMX Munich 2025&lt;/li&gt;
&lt;li&gt;Google Search Central Live Madrid, April 2025&lt;/li&gt;
&lt;li&gt;Dunn et al., Nature Communications, February 2024&lt;/li&gt;
&lt;li&gt;Aggarwal et al., "Generative Engine Optimization," KDD 2024&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>showdev</category>
      <category>seo</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>We Scanned 240 Websites for AI Search Readiness. Your SEO Score Doesn't Predict Your AI Score.</title>
      <dc:creator>Watson Foglift</dc:creator>
      <pubDate>Thu, 09 Apr 2026 00:07:26 +0000</pubDate>
      <link>https://forem.com/watsonfoglift/we-scanned-240-websites-for-ai-search-readiness-your-seo-score-doesnt-predict-your-ai-score-35bn</link>
      <guid>https://forem.com/watsonfoglift/we-scanned-240-websites-for-ai-search-readiness-your-seo-score-doesnt-predict-your-ai-score-35bn</guid>
      <description>&lt;p&gt;We built a free website audit tool that scores sites across SEO, GEO (Generative Engine Optimization), AEO (Answer Engine Optimization), security, performance, and accessibility. After 240 real scans from March-April 2026, one pattern jumped out:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sites that ace traditional SEO are often failing at AI search readiness.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's the data.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 39-Point Gap
&lt;/h2&gt;

&lt;p&gt;Across 240 scans, here are the median scores by category:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Median Score&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Accessibility&lt;/td&gt;
&lt;td&gt;86.5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SEO&lt;/td&gt;
&lt;td&gt;85&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GEO Readiness&lt;/td&gt;
&lt;td&gt;85&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Performance&lt;/td&gt;
&lt;td&gt;69&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AEO&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;46&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;30&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;SEO median: &lt;strong&gt;85&lt;/strong&gt;. AEO median: &lt;strong&gt;46&lt;/strong&gt;. That's a 39-point gap.&lt;/p&gt;

&lt;p&gt;The sites in our dataset generally have solid traditional SEO — clean title tags, meta descriptions, proper heading hierarchy, fast load times. But when you measure what AI answer engines actually need to extract and cite your content, most sites fall apart.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why SEO Score Doesn't Predict AI Citation
&lt;/h2&gt;

&lt;p&gt;This isn't just our data. A 2025 Chatoptic study of 1,000 queries found only a &lt;strong&gt;0.034 correlation&lt;/strong&gt; between Google search rank and ChatGPT citation likelihood. That's effectively zero.&lt;/p&gt;

&lt;p&gt;Even more striking: &lt;strong&gt;28% of the most-cited sites in ChatGPT have zero Google search visibility&lt;/strong&gt; (Profound, 2025). AI citation is a separate channel — not an SEO side effect.&lt;/p&gt;

&lt;p&gt;So what &lt;em&gt;does&lt;/em&gt; predict AI citation?&lt;/p&gt;

&lt;p&gt;According to SE Ranking's analysis of 129,000 domains, the top factors are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Brand web mentions&lt;/strong&gt; — 35% weight&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Referring domains&lt;/strong&gt; (backlinks) — strong correlation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content freshness&lt;/strong&gt; — 71% of ChatGPT citations come from 2023-2025 content (Seer Interactive, 2025). Content updated within 30 days gets 3.2x more citations (Digital Bloom, 2025)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content structure for extraction&lt;/strong&gt; — FAQ sections, clear headings, direct answers to questions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Notice what's &lt;em&gt;not&lt;/em&gt; on the list: page speed scores, meta tag optimization, keyword density — the traditional SEO checklist.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Three Things 90% of Sites Are Missing
&lt;/h2&gt;

&lt;p&gt;From our 240-scan dataset, these are the most common gaps in AEO readiness:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Security headers (60% fail rate)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;66% missing Content Security Policy&lt;/li&gt;
&lt;li&gt;57% missing X-Frame-Options&lt;/li&gt;
&lt;li&gt;52% missing X-Content-Type-Options&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why does this matter for AI? None of the AI crawlers (GPTBot, ClaudeBot, PerplexityBot) has a published ranking formula, but a site with poor security headers signals lower trustworthiness. Google's Search Quality Rater Guidelines already emphasize E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) — security is part of the Trust signal.&lt;/p&gt;
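Checking the baseline is cheap. As a hedged sketch (this is an illustrative helper, not how any particular scanner works), you can diff a response's headers against the list above:

```python
# Baseline security headers discussed above.
REQUIRED_HEADERS = [
    "Content-Security-Policy",
    "X-Frame-Options",
    "X-Content-Type-Options",
    "Strict-Transport-Security",
]

def missing_security_headers(response_headers):
    """Given a mapping of response header names to values, return which
    baseline headers are absent. Names compare case-insensitively, per HTTP."""
    present = {name.lower() for name in response_headers}
    return [h for h in REQUIRED_HEADERS if h.lower() not in present]
```

Feed it the headers from any HTTP client's response object; an empty result means the baseline is covered.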

&lt;p&gt;&lt;strong&gt;2. No FAQ sections (37% of sites)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;FAQ pages are one of the easiest wins for AI citation. AI engines love structured Q&amp;amp;A because it maps directly to how users query them. The Aggarwal et al. GEO study (KDD 2024) found that adding statistics to content improved AI engine visibility by +33%, and adding quotations from authoritative sources improved it by +41%.&lt;/p&gt;

&lt;p&gt;FAQ sections naturally lend themselves to both patterns — they frame a specific question, then answer it with data.&lt;/p&gt;
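Turning an existing FAQ section into structured data is mechanical. A minimal sketch (field names follow schema.org's FAQPage type; the helper function itself is hypothetical):

```python
import json

def faq_jsonld(qa_pairs):
    """Build a schema.org FAQPage JSON-LD payload from (question, answer) pairs."""
    return json.dumps(
        {
            "@context": "https://schema.org",
            "@type": "FAQPage",
            "mainEntity": [
                {
                    "@type": "Question",
                    "name": question,
                    "acceptedAnswer": {"@type": "Answer", "text": answer},
                }
                for question, answer in qa_pairs
            ],
        },
        indent=2,
    )
```

Embed the output in a script tag with type application/ld+json on the same page that renders the FAQ.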

&lt;p&gt;&lt;strong&gt;3. No structured data (36% of sites)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Over a third of sites have zero schema markup. While the direct causal link between schema and AI citation is still unconfirmed by most AI providers (Google and Microsoft acknowledge it; OpenAI, Perplexity, and Anthropic haven't disclosed), schema helps AI crawlers understand entity relationships — what your brand is, what you offer, how you relate to your industry.&lt;/p&gt;

&lt;p&gt;A Nature Communications study (Feb 2024) demonstrated that knowledge graphs built from structured data improve LLM factual accuracy. More structured data enables better entity extraction, which in turn produces more accurate citations.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Score Distribution
&lt;/h2&gt;

&lt;p&gt;Here's how the 240 sites distributed:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Score Range&lt;/th&gt;
&lt;th&gt;% of Sites&lt;/th&gt;
&lt;th&gt;Label&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;90-100&lt;/td&gt;
&lt;td&gt;11.3%&lt;/td&gt;
&lt;td&gt;Excellent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;80-89&lt;/td&gt;
&lt;td&gt;8.3%&lt;/td&gt;
&lt;td&gt;Good&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;70-79&lt;/td&gt;
&lt;td&gt;19.2%&lt;/td&gt;
&lt;td&gt;Fair&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;60-69&lt;/td&gt;
&lt;td&gt;28.3%&lt;/td&gt;
&lt;td&gt;Needs Work&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;50-59&lt;/td&gt;
&lt;td&gt;10.8%&lt;/td&gt;
&lt;td&gt;Poor&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Below 50&lt;/td&gt;
&lt;td&gt;22.1%&lt;/td&gt;
&lt;td&gt;Critical&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;61.2% of sites scored below 70.&lt;/strong&gt; The largest cluster (28.3%) sits in the 60-69 range — functional for traditional search, but with significant blind spots for AI engines.&lt;/p&gt;

&lt;p&gt;Only 19.6% scored 80+. And this is a &lt;em&gt;self-selected&lt;/em&gt; sample of people who actively sought out an AI readiness audit. The broader web is likely worse.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means for Developers
&lt;/h2&gt;

&lt;p&gt;If you're building websites — for yourself or clients — the SEO checklist you've internalized is necessary but not sufficient. The sites winning AI citations in 2026 are the ones that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Structure content for extraction&lt;/strong&gt; — clear H2/H3 hierarchy, FAQ sections, direct answers in the first paragraph&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintain freshness&lt;/strong&gt; — update key pages at least monthly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build entity identity&lt;/strong&gt; — Organization schema, consistent brand mentions, authoritative backlinks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secure the basics&lt;/strong&gt; — CSP, HSTS, X-Frame-Options (these take 5 minutes to add)&lt;/li&gt;
&lt;/ol&gt;
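For item 4, "five minutes" is barely an exaggeration. As one hedged example, in a Python WSGI stack the headers can be appended with a tiny middleware (the policy values here are common defaults, not recommendations tuned to any particular site):

```python
# Common-default values; tune Content-Security-Policy to your actual asset origins.
SECURITY_HEADERS = [
    ("Content-Security-Policy", "default-src 'self'"),
    ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
    ("X-Frame-Options", "DENY"),
    ("X-Content-Type-Options", "nosniff"),
]

def add_security_headers(app):
    """WSGI middleware that appends baseline security headers to every response."""
    def wrapped(environ, start_response):
        def start(status, headers, exc_info=None):
            return start_response(status, list(headers) + SECURITY_HEADERS, exc_info)
        return app(environ, start)
    return wrapped
```

Equivalent one-liners exist for nginx (add_header) and most web frameworks.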

&lt;p&gt;The 39-point gap between SEO and AEO is an opportunity. Most of your competitors haven't noticed it yet.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Data source: 240 website scans via Foglift's free audit tool (March 14 - April 8, 2026). Full methodology and detailed findings in our &lt;a href="https://foglift.io/blog/ai-search-readiness-study-2026" rel="noopener noreferrer"&gt;research report&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;External citations: Chatoptic (2025, 1,000 queries), SE Ranking (129,000 domains), Seer Interactive (2025), Digital Bloom (2025), Aggarwal et al. (KDD 2024), Profound (2025), Nature Communications (Feb 2024).&lt;/em&gt;&lt;/p&gt;

</description>
      <category>seo</category>
      <category>ai</category>
      <category>webdev</category>
      <category>showdev</category>
    </item>
    <item>
      <title>That '44% AI Citation Lift from Schema Markup' Stat? I Tried to Find the Primary Source.</title>
      <dc:creator>Watson Foglift</dc:creator>
      <pubDate>Wed, 08 Apr 2026 18:11:23 +0000</pubDate>
      <link>https://forem.com/watsonfoglift/that-44-ai-citation-lift-from-schema-markup-stat-i-tried-to-find-the-primary-source-2hm4</link>
      <guid>https://forem.com/watsonfoglift/that-44-ai-citation-lift-from-schema-markup-stat-i-tried-to-find-the-primary-source-2hm4</guid>
      <description>&lt;p&gt;If you've read any article about optimizing for AI search engines in the past year, you've probably seen this claim:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Adding schema markup increases AI citations by 44%."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It shows up in vendor blogs, agency whitepapers, conference slides, and "ultimate guides to GEO." It's one of the most-cited statistics in the generative engine optimization space. And as far as I can tell, &lt;strong&gt;it doesn't trace back to an actual study.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I spent a day trying to find the primary source. Here's what I found.&lt;/p&gt;

&lt;h2&gt;
  
  
  The citation trail
&lt;/h2&gt;

&lt;p&gt;The stat is almost always attributed to BrightEdge — a legitimate enterprise SEO platform with real research capabilities. But the trail gets murky fast:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Blog posts cite "BrightEdge research"&lt;/strong&gt; — no link, no study title, no methodology.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Some link to a BrightEdge article&lt;/strong&gt; about structured data and AI features. That article describes how structured data can improve inclusion in AI-generated search results. It does not contain a "44%" figure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Others link to a BrightEdge webinar&lt;/strong&gt; or press release about AI Overviews. These discuss structured data advantages in general terms. No "44% citation lift" metric.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;None that I found link to a study with sample size, methodology, or raw data.&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The actual BrightEdge research I could verify says something much more nuanced: structured data helps search engines (including AI features) understand your content. That's a process claim, not a measurement claim. The jump from "helps understand" to "44% more citations" happens somewhere in the marketing telephone game.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters more than you'd think
&lt;/h2&gt;

&lt;p&gt;This isn't just pedantic source-checking. The "44%" number shapes real budget decisions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Marketing teams use it to justify schema markup projects&lt;/li&gt;
&lt;li&gt;Agencies cite it in client pitches&lt;/li&gt;
&lt;li&gt;Content strategies get built around the assumption that schema is a 44% lever&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the number is fabricated (or wildly miscontextualized), those decisions are built on sand.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the actual research says about schema and AI citations
&lt;/h2&gt;

&lt;p&gt;I went through every major study I could find on AI search citation behavior. Here's the real picture:&lt;/p&gt;

&lt;h3&gt;
  
  
  Google and Microsoft: confirmed support
&lt;/h3&gt;

&lt;p&gt;At Google Search Central Live Madrid (April 9, 2025), Google's Search Relations team explicitly said structured data types still provide an advantage in AI-era search results. Microsoft made similar statements for Bing Copilot in March 2025.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verdict: schema helps with Google AI Overviews and Bing Copilot.&lt;/strong&gt; Confirmed by the platforms themselves.&lt;/p&gt;

&lt;h3&gt;
  
  
  ChatGPT, Perplexity, Claude: not confirmed
&lt;/h3&gt;

&lt;p&gt;OpenAI, Perplexity, and Anthropic have not publicly disclosed whether they use schema markup during indexing or retrieval. Any claim that schema directly boosts ChatGPT citations is inference, not disclosure.&lt;/p&gt;

&lt;h3&gt;
  
  
  The empirical data: mixed at best
&lt;/h3&gt;

&lt;p&gt;A December 2024 analysis of citation rates across thousands of pages found &lt;strong&gt;no statistically meaningful correlation&lt;/strong&gt; between schema markup coverage and LLM citation frequency. Sites with comprehensive schema did not consistently outperform sites with minimal schema.&lt;/p&gt;

&lt;p&gt;As of early 2026, there are zero peer-reviewed, controlled studies measuring schema's direct impact on LLM citation behavior.&lt;/p&gt;

&lt;h3&gt;
  
  
  The indirect mechanism that does hold up
&lt;/h3&gt;

&lt;p&gt;A February 2024 study in &lt;em&gt;Nature Communications&lt;/em&gt; found that LLMs extract information more accurately when content is presented as structured fields versus unstructured prose. Schema doesn't make AI cite you — but it does make the information AI extracts &lt;em&gt;about&lt;/em&gt; you more accurate.&lt;/p&gt;

&lt;p&gt;This is actually the strongest case for schema in the AI era: &lt;strong&gt;accuracy of representation&lt;/strong&gt;, not volume of citations.&lt;/p&gt;

&lt;h2&gt;
  
  
  The bigger pattern: GEO stats have a sourcing problem
&lt;/h2&gt;

&lt;p&gt;The 44% stat is a symptom. The broader problem is that GEO/AEO — a field that's barely two years old — has already developed a circular citation ecosystem:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A vendor publishes a claim in a blog post&lt;/li&gt;
&lt;li&gt;Three agency blogs restate it with slightly different framing&lt;/li&gt;
&lt;li&gt;Ten "ultimate guide" articles cite the agency blogs&lt;/li&gt;
&lt;li&gt;AI models train on all of the above and repeat the claim in search results&lt;/li&gt;
&lt;li&gt;The claim becomes "common knowledge" without ever being verified&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I cataloged the stats in our own blog posts and found 14 unsourced claims across 9 articles. "Studies show" appeared 8 times with no study named. We were part of the problem.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.searchenginejournal.com/" rel="noopener noreferrer"&gt;SE Ranking / Search Engine Journal study of 129,000 domains&lt;/a&gt; — the largest ChatGPT citation analysis published — gets cited in maybe 5% of "how to optimize for AI" articles I surveyed. Meanwhile, vendor marketing stats with no methodology get cited everywhere.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually moves the needle (with evidence)
&lt;/h2&gt;

&lt;p&gt;If you're a developer building content for AI visibility, here's what the research actually supports:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Optimization&lt;/th&gt;
&lt;th&gt;Evidence&lt;/th&gt;
&lt;th&gt;Source&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Add expert quotes&lt;/td&gt;
&lt;td&gt;+71% more citations (4.1 vs 2.4)&lt;/td&gt;
&lt;td&gt;SE Ranking, 129K domains&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Include 19+ data points&lt;/td&gt;
&lt;td&gt;+93% more citations (5.4 vs 2.8)&lt;/td&gt;
&lt;td&gt;SE Ranking, 129K domains&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cite authoritative sources&lt;/td&gt;
&lt;td&gt;+30% visibility (+115% for smaller sites)&lt;/td&gt;
&lt;td&gt;Aggarwal et al., KDD 2024, 10K queries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Update content within 30 days&lt;/td&gt;
&lt;td&gt;3.2x more AI citations&lt;/td&gt;
&lt;td&gt;Digital Bloom, 7K+ citations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Use structured data (schema)&lt;/td&gt;
&lt;td&gt;Confirmed for Google/Bing; unconfirmed for ChatGPT/Perplexity&lt;/td&gt;
&lt;td&gt;Google Search Central, April 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Keyword stuff&lt;/td&gt;
&lt;td&gt;-10% visibility (hurts you)&lt;/td&gt;
&lt;td&gt;Aggarwal et al., KDD 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The difference between these stats and "44% citation lift" is that every number above comes with a named source, a sample size, and a methodology you can evaluate.&lt;/p&gt;

&lt;h2&gt;
  
  
  The takeaway
&lt;/h2&gt;

&lt;p&gt;Schema markup is a good practice. Use it — especially FAQPage, Article, HowTo, and Organization types. It's confirmed beneficial for Google AI Overviews and Bing Copilot, it makes your brand representation more accurate across all AI systems, and it's a one-time implementation with compounding returns.&lt;/p&gt;
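For scale, here is roughly what the Organization type involves. A minimal sketch; every name and URL below is a placeholder, and a real deployment would add more fields:

```python
import json

# Minimal schema.org Organization entity; all values are placeholders.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://example.com",
    "logo": "https://example.com/logo.png",
    "sameAs": [
        "https://github.com/example",
        "https://www.linkedin.com/company/example",
    ],
}

jsonld = json.dumps(organization, indent=2)
```

FAQPage, Article, and HowTo follow the same embed pattern with different fields, which is why the implementation cost is a one-time afternoon rather than an ongoing project.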

&lt;p&gt;But don't implement it because "it increases AI citations by 44%." That number doesn't appear to have a primary source. And building strategy on unsourced stats is how you end up optimizing for metrics that don't exist.&lt;/p&gt;

&lt;p&gt;The honest version: &lt;strong&gt;schema is a high-confidence bet for Google/Bing AI features, a defensible investment for accurate brand representation, and probably a net positive for AI visibility overall. It is not a 44% silver bullet.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If I'm wrong and someone can point me to the actual study behind the 44% figure — sample size, methodology, publication date — I'll update this post and cite it. Genuinely. I want to be wrong about this, because a 44% lever would be great news.&lt;/p&gt;

&lt;p&gt;Until then, cite the real research. Your marketing strategy will be better for it.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Sources:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Google Search Central Live Madrid, April 9, 2025 — "Structured data types continue to provide advantage in AI-era search."&lt;/li&gt;
&lt;li&gt;Microsoft Bing, March 2025 — similar statement for Bing Copilot.&lt;/li&gt;
&lt;li&gt;Aggarwal, P. et al. "GEO: Generative Engine Optimization." KDD 2024 (Princeton/IIT Delhi). &lt;a href="https://arxiv.org/abs/2311.09735" rel="noopener noreferrer"&gt;arxiv.org/abs/2311.09735&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;SE Ranking / Search Engine Journal. "ChatGPT Citation Analysis: 129K Domains, 216K Pages." 2025.&lt;/li&gt;
&lt;li&gt;Digital Bloom. "AI Citation Patterns: 7,000+ Citations Analyzed." 2025.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Nature Communications&lt;/em&gt;, February 2024. LLM information extraction accuracy with structured vs. unstructured content.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Watson builds &lt;a href="https://foglift.io" rel="noopener noreferrer"&gt;Foglift&lt;/a&gt; — a free website scanner that checks both SEO and AI search readiness (GEO/AEO scores). We ran it on ourselves and &lt;a href="https://dev.to/watsonfoglift/we-ran-our-own-geo-tool-against-our-own-site-heres-what-we-found-1jj0"&gt;found 14 unsourced claims in our own blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>seo</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>We Ran Our Own GEO Tool Against Our Own Site — Here's What We Found</title>
      <dc:creator>Watson Foglift</dc:creator>
      <pubDate>Tue, 07 Apr 2026 21:07:54 +0000</pubDate>
      <link>https://forem.com/watsonfoglift/we-ran-our-own-geo-tool-against-our-own-site-heres-what-we-found-1jj0</link>
      <guid>https://forem.com/watsonfoglift/we-ran-our-own-geo-tool-against-our-own-site-heres-what-we-found-1jj0</guid>
      <description>&lt;p&gt;Our tool gave us a perfect GEO score. Our content was still full of unsourced claims.&lt;/p&gt;

&lt;p&gt;We built &lt;a href="https://foglift.io" rel="noopener noreferrer"&gt;Foglift&lt;/a&gt; to help sites optimize for AI search engines — ChatGPT, Perplexity, Google AI Overviews. It scans your site and flags technical gaps: missing schema markup, robots.txt issues, structured data problems.&lt;/p&gt;

&lt;p&gt;So we pointed it at ourselves. The automated scan came back with SEO 100, GEO 100, AEO 88. Three performance warnings. That's it.&lt;/p&gt;

&lt;p&gt;Then we actually &lt;em&gt;read&lt;/em&gt; our blog posts. What we found was embarrassing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem automated scans don't catch
&lt;/h2&gt;

&lt;p&gt;Our 9 pillar blog posts — the ones driving most of our organic traffic — were full of the exact patterns we tell our users to avoid:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Unsourced statistics.&lt;/strong&gt; Our GEO vs SEO comparison claimed "60-70% overlap between Google and ChatGPT results." No source. We'd picked that number up from a vendor blog that also had no source.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Outdated data presented as current.&lt;/strong&gt; We cited Google's daily search volume as 8.5 billion. The &lt;a href="https://www.demandsage.com/" rel="noopener noreferrer"&gt;2026 DemandSage data&lt;/a&gt; puts it at 13.7 billion+.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recycled vendor marketing claims.&lt;/strong&gt; The widely shared "44% increase in AI citations from schema markup" stat from BrightEdge? We repeated it. But the actual BrightEdge article doesn't contain that number — it describes structured data improving AI feature inclusion, not a 44% citation lift. We had never checked the primary source.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No automated scanner catches this. You have to read the content.&lt;/p&gt;
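That said, a crude script can at least surface candidates for the manual read. A hypothetical heuristic (not part of Foglift) that flags vague-attribution sentences carrying no link or citation-style parenthetical:

```python
import re

# Phrases that claim evidence without naming it.
VAGUE = re.compile(r"\b(studies show|research shows|experts say)\b", re.IGNORECASE)
# Rough proxies for a real citation: a URL, or an "(Org, 2025)"-style parenthetical.
SOURCED = re.compile(r"https?://|\([A-Z][\w,.\s]*\d{4}\)")

def flag_unsourced_sentences(text):
    """Return sentences that use a vague-evidence phrase but show no citation marker."""
    sentences = re.split(r"[.!?]\s+", text)
    return [s for s in sentences if VAGUE.search(s) and not SOURCED.search(s)]
```

It will miss plenty and misfire sometimes; the point is a shortlist for the human pass, not a verdict.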

&lt;h2&gt;
  
  
  What we actually found in 9 blog posts
&lt;/h2&gt;

&lt;p&gt;We went through every pillar post and cataloged the problems. Here's the pattern:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Issue&lt;/th&gt;
&lt;th&gt;Count across 9 posts&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Stats with no source&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;"68% of companies haven't started GEO"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Outdated numbers&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;Google searches: 8.5B → actually 13.7B+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vendor claims presented as research&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;"44% schema citation lift" (unsourced)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Missing Sources section&lt;/td&gt;
&lt;td&gt;9 of 9&lt;/td&gt;
&lt;td&gt;Zero posts had a references section&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"Studies show" with no study named&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;"Studies show AI prefers structured data"&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Every single post had at least two of these problems. Our most-shared post — &lt;a href="https://foglift.io/blog/how-chatgpt-ranks-websites" rel="noopener noreferrer"&gt;How ChatGPT Ranks Websites&lt;/a&gt; — had five unsourced claims in the first 500 words.&lt;/p&gt;

&lt;h2&gt;
  
  
  The fix: honest evidence over marketing claims
&lt;/h2&gt;

&lt;p&gt;We spent 7 sessions upgrading all 9 posts. The process for each was the same:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Find primary research.&lt;/strong&gt; For every claim, we tracked down the original study — not the blog post that cited it, not the infographic that summarized it. The actual paper or report with methodology and sample size.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replace or remove unsourced claims.&lt;/strong&gt; If we couldn't find a primary source, we either removed the stat or flagged it as unconfirmed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add honest evidence callouts.&lt;/strong&gt; Where vendor claims were exaggerated or unverifiable, we said so explicitly — including for popular stats we'd previously repeated ourselves.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add a Sources &amp;amp; Further Reading section.&lt;/strong&gt; Every post now cites 6-12 original sources with author, title, year, and sample size where available.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's what replaced the unsourced claims:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Before (unsourced)&lt;/th&gt;
&lt;th&gt;After (cited)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;"60-70% overlap between Google and ChatGPT"&lt;/td&gt;
&lt;td&gt;Chatoptic study: 62% URL overlap, but only 0.034 rank correlation (1,000 queries, 15 brands)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"68% haven't started GEO"&lt;/td&gt;
&lt;td&gt;Incremys 2026: 34% of companies have trained teams in GEO; 63% of marketers prioritize generative search&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"44% citation lift from schema"&lt;/td&gt;
&lt;td&gt;Google &amp;amp; Microsoft confirmed schema for AI features (March 2025); ChatGPT/Perplexity/Anthropic have not confirmed. Actual empirical data is mixed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"Studies show freshness matters"&lt;/td&gt;
&lt;td&gt;Seer Interactive: 71% of ChatGPT citations come from 2023-2025 content. Digital Bloom: updating within 30 days = 3.2x more citations.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"AI search is growing fast"&lt;/td&gt;
&lt;td&gt;McKinsey (Aug 2025, 1,927 consumers): 44% prefer AI search. Bain 2025: 80% of search users rely on AI summaries ≥40% of the time.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The schema markup post became our template for this approach. We wrote an explicit callout box: "What the research actually shows (2024-2026)" — separating what's confirmed by Google/Microsoft, what's not confirmed by ChatGPT/Perplexity, and what the empirical data actually says. That's nuance few vendor blogs in this space bother with.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we learned that surprised us
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Most "GEO stats" trace back to 2-3 vendor reports, heavily paraphrased.&lt;/strong&gt; We found the same BrightEdge and Gartner numbers recycled across dozens of blogs, each time with slightly different framing and less context. The telephone game makes every stat less accurate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. The largest study is barely cited.&lt;/strong&gt; &lt;a href="https://www.searchenginejournal.com/" rel="noopener noreferrer"&gt;SE Ranking and Search Engine Journal&lt;/a&gt; analyzed 129,000 domains and 216,524 pages across 20 niches — the biggest ChatGPT citation study to date. Key findings: expert quotes increase citations from 2.4 to 4.1; 19+ data points increase citations from 2.8 to 5.4; referring domains are the single strongest predictor. We found this study referenced in maybe 5% of the "how to optimize for AI" articles we surveyed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Honest uncertainty earns more trust than false confidence.&lt;/strong&gt; Saying "this isn't confirmed" is more valuable than confidently citing an unverifiable stat. The schema markup post that explicitly calls out what's unconfirmed has become one of our most-linked pieces.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. The research bar is low.&lt;/strong&gt; Adding primary sources to a blog post in this space immediately puts you in the top 10% of content quality. Most vendor blogs in GEO/AEO cite each other, not the research. The bar for being the honest-evidence source is surprisingly achievable.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means for developers building content
&lt;/h2&gt;

&lt;p&gt;If you're writing technical content for SEO or AI search visibility:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Trace every stat to its primary source.&lt;/strong&gt; If you can't find the original study with methodology and sample size, flag it as unverified or drop it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add a Sources section.&lt;/strong&gt; Academic-style — author/org, title, year. This is table stakes in research but rare in tech content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Say "not confirmed" when it's not confirmed.&lt;/strong&gt; AI companies are opaque about ranking factors. Honesty about uncertainty is a competitive advantage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cite the actual sample sizes.&lt;/strong&gt; "A study of 129,000 domains" carries more weight with both humans and AI models than "research shows."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Update stats every 3-6 months.&lt;/strong&gt; We had 2024 numbers presented as current on a page dated 2026. AI models weight freshness heavily — &lt;a href="https://www.seerinteractive.com/" rel="noopener noreferrer"&gt;71% of ChatGPT citations&lt;/a&gt; come from content published 2023-2025.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run your own tools on yourself.&lt;/strong&gt; The automated scan was a useful starting point. The real value came from the manual content audit it prompted.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Where we stand now
&lt;/h2&gt;

&lt;p&gt;After the 9-post upgrade, our site scans at Overall 95, SEO 100, GEO 100, AEO 88. The AEO gap is performance-related (server response time), not content.&lt;/p&gt;

&lt;p&gt;Every blog post now has 6-12 cited sources. Zero unsourced "studies show" claims remain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://foglift.io" rel="noopener noreferrer"&gt;Foglift&lt;/a&gt; is free — if you want to run the same scan on your own site, it takes about 10 seconds. But the real audit starts when you read your own content with fresh eyes and ask: "Where did this number actually come from?"&lt;/p&gt;

&lt;p&gt;That's the question that fixed our content. It'll probably fix yours too.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built by the &lt;a href="https://foglift.io" rel="noopener noreferrer"&gt;Foglift&lt;/a&gt; team. We scan for GEO + AEO readiness so AI search engines actually cite you. Currently scoring ourselves at 95 — the remaining 5 points are a performance problem we're still working on.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>seo</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>What Actually Makes AI Search Engines Cite Your Website (The Research Data)</title>
      <dc:creator>Watson Foglift</dc:creator>
      <pubDate>Mon, 06 Apr 2026 07:20:23 +0000</pubDate>
      <link>https://forem.com/watsonfoglift/what-actually-makes-ai-search-engines-cite-your-website-the-research-data-4d82</link>
      <guid>https://forem.com/watsonfoglift/what-actually-makes-ai-search-engines-cite-your-website-the-research-data-4d82</guid>
      <description>&lt;p&gt;Google and ChatGPT don't agree on who deserves to rank.&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://chatoptic.com" rel="noopener noreferrer"&gt;2025 Chatoptic study&lt;/a&gt; tested 1,000 search queries across 15 brands and found just &lt;strong&gt;62% overlap&lt;/strong&gt; between Google's first-page results and ChatGPT's cited sources. The correlation coefficient between Google rank and ChatGPT visibility? &lt;strong&gt;0.034&lt;/strong&gt; — essentially zero.&lt;/p&gt;

&lt;p&gt;That means your SEO playbook isn't enough anymore. AI search engines — ChatGPT, Perplexity, Gemini, Google AI Overviews — use fundamentally different ranking signals. And with &lt;a href="https://www.bain.com/" rel="noopener noreferrer"&gt;Bain reporting&lt;/a&gt; that 80% of search users now rely on AI summaries at least 40% of the time, this isn't a niche concern.&lt;/p&gt;

&lt;p&gt;I've spent the last few months digging through every major study on AI search citation behavior. Here's what the data actually says.&lt;/p&gt;

&lt;h2&gt;
  
  
  The biggest study: 129,000 domains analyzed
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.searchenginejournal.com/" rel="noopener noreferrer"&gt;SE Ranking and Search Engine Journal&lt;/a&gt; published the most comprehensive analysis of ChatGPT citation patterns to date — 129,000 domains, 216,524 pages, across 20 industry niches.&lt;/p&gt;

&lt;p&gt;Their key findings:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Signal&lt;/th&gt;
&lt;th&gt;Impact on AI Citations&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Expert quotes in content&lt;/td&gt;
&lt;td&gt;4.1 vs 2.4 citations (+71%)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;19+ statistical data points&lt;/td&gt;
&lt;td&gt;5.4 vs 2.8 citations (+93%)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Articles over 2,900 words&lt;/td&gt;
&lt;td&gt;5.1 vs 3.2 citations (+59%)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Content updated within 3 months&lt;/td&gt;
&lt;td&gt;6.0 vs 3.6 citations (+67%)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;350K+ referring domains&lt;/td&gt;
&lt;td&gt;8.4 vs 1.6 citations (+425%)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Structured data + FAQ schema&lt;/td&gt;
&lt;td&gt;+44% more AI citations (a widely repeated figure we could not trace to a primary source; treat as unconfirmed)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The takeaway: &lt;strong&gt;data density and authority signals matter far more than keyword optimization.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ChatGPT only cites about 15% of the pages it retrieves. The top 10 domains capture 46% of all citations. If your content doesn't stand out with verifiable data and expert credibility, it gets ignored.&lt;/p&gt;

&lt;h2&gt;
  
  
  The foundational GEO research (10,000 queries)
&lt;/h2&gt;

&lt;p&gt;The term "Generative Engine Optimization" comes from an &lt;a href="https://arxiv.org/abs/2311.09735" rel="noopener noreferrer"&gt;academic paper by Aggarwal et al.&lt;/a&gt; presented at KDD 2024 (the top data mining conference, organized by ACM SIGKDD). Researchers from Princeton and IIT Delhi tested 10,000 queries across 9 domains to measure what actually improves visibility in AI-generated responses.&lt;/p&gt;

&lt;p&gt;Their results:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Optimization Technique&lt;/th&gt;
&lt;th&gt;Visibility Change&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Adding quotations from experts&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;+41%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Adding statistics with sources&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;+33%&lt;/strong&gt; (+37% on Perplexity)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Citing authoritative sources&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;+30%&lt;/strong&gt; (+115% for lower-ranked sites)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Improving fluency&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;+28%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Using technical terminology&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;+18%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Keyword stuffing&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;-10%&lt;/strong&gt; (hurts you)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The +115% for citing sources on lower-ranked sites is the most interesting finding. It means &lt;strong&gt;smaller sites benefit disproportionately from source attribution&lt;/strong&gt; — AI models reward citation behavior more heavily when the domain itself isn't already an authority.&lt;/p&gt;

&lt;h2&gt;Who gets cited? The authority distribution is brutal&lt;/h2&gt;

&lt;p&gt;BrightEdge found that the &lt;strong&gt;top 50 brands capture 28.9% of all AI mentions&lt;/strong&gt;, while 26% of brands receive zero AI visibility.&lt;/p&gt;

&lt;p&gt;But it's not just about brand size. The citation sources are different from what you'd expect:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Wikipedia&lt;/strong&gt;: 47.9% of ChatGPT citations (Aggarwal et al.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reddit&lt;/strong&gt;: 46.7% of Perplexity citations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Brand-owned websites&lt;/strong&gt;: Only 5-10% of AI sources (McKinsey, Aug 2025)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last stat is the wake-up call. &lt;strong&gt;90%+ of AI search sources come from publishers, user-generated content, and review platforms&lt;/strong&gt; — not from your own website.&lt;/p&gt;

&lt;p&gt;This means your off-site presence matters enormously. Forum discussions, third-party reviews, guest posts on authoritative publications — these feed the AI models more than your own blog does.&lt;/p&gt;

&lt;h2&gt;Content freshness: the 30-day window&lt;/h2&gt;

&lt;p&gt;One of the most actionable findings: &lt;a href="https://digitalbloom.com/" rel="noopener noreferrer"&gt;Digital Bloom's analysis of 7,000+ AI citations&lt;/a&gt; found that &lt;strong&gt;content updated within 30 days gets 3.2x more AI citations&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Seer Interactive corroborated this — &lt;strong&gt;71% of ChatGPT citations come from content published between 2023 and 2025&lt;/strong&gt;, with 31% from 2025 content alone.&lt;/p&gt;

&lt;p&gt;The practical implication: if you wrote a great technical article in 2022 and haven't touched it since, AI search engines are probably ignoring it. Even minor updates — refreshing statistics, adding recent examples, updating dates — can dramatically improve citation probability.&lt;/p&gt;

&lt;h2&gt;The conversion difference is real&lt;/h2&gt;

&lt;p&gt;So does any of this matter for business outcomes?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Seer Interactive&lt;/strong&gt; tracked ChatGPT referrals over 7 months: &lt;strong&gt;15.9% conversion rate&lt;/strong&gt; vs. 1.76% for Google organic (9x higher)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Similarweb&lt;/strong&gt; found AI referral conversions at &lt;strong&gt;11.4%&lt;/strong&gt; vs. 5.3% for organic search&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ahrefs&lt;/strong&gt; reported that AI search visitors made up just 0.5% of traffic but drove &lt;strong&gt;12.1% of signups&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI search traffic is small in volume but absurdly high in intent. People asking AI models for recommendations are further down the funnel than people typing broad Google queries.&lt;/p&gt;

&lt;h2&gt;What developers should actually do&lt;/h2&gt;

&lt;p&gt;Based on the research, here's what moves the needle:&lt;/p&gt;

&lt;h3&gt;1. Add data to everything you publish&lt;/h3&gt;

&lt;p&gt;The SE Ranking data is unambiguous: pages with 19+ statistical data points get nearly double the AI citations. Don't write "performance improved significantly" — write "P95 latency dropped from 340ms to 89ms after switching to connection pooling."&lt;/p&gt;

&lt;h3&gt;2. Quote experts (or be the expert being quoted)&lt;/h3&gt;

&lt;p&gt;Expert quotes correlate with 71% more citations. If you're writing a technical article, name the authors of the sources you cite. If you're building a project, work to get quoted in other people's content.&lt;/p&gt;

&lt;h3&gt;3. Update content every 30 days&lt;/h3&gt;

&lt;p&gt;The 3.2x citation boost for content updated within the last 30 days is the easiest lever to pull. Set a calendar reminder to refresh your key pages monthly.&lt;/p&gt;

&lt;h3&gt;4. Build off-site presence&lt;/h3&gt;

&lt;p&gt;With 90%+ of AI sources being third-party content, your own blog is necessary but not sufficient. Contribute to Stack Overflow, write on Dev.to, get mentioned in listicles, and spark Reddit discussions.&lt;/p&gt;

&lt;h3&gt;5. Use structured data&lt;/h3&gt;

&lt;p&gt;FAQ schema, comparison tables, and how-to markup increase AI citation rates by 40-44%. These are one-time implementations with compounding returns.&lt;/p&gt;
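&lt;p&gt;As a sketch of the FAQ-schema point: the JSON-LD below follows the standard schema.org FAQPage vocabulary, generated from plain question/answer pairs. The helper function itself is hypothetical; the output belongs in a script tag with type application/ld+json.&lt;/p&gt;

```python
# Minimal sketch: emit schema.org FAQPage JSON-LD from question/answer pairs.
# The vocabulary (@context, @type, mainEntity, acceptedAnswer) is the public
# schema.org FAQPage shape; the helper itself is illustrative.
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return json.dumps(data, indent=2)
```

Because it is plain data, this is easy to generate at build time from whatever source your FAQ content already lives in.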

&lt;h3&gt;6. Don't keyword stuff&lt;/h3&gt;

&lt;p&gt;The GEO research showed keyword stuffing &lt;strong&gt;reduces&lt;/strong&gt; visibility by 10%. AI models penalize content that optimizes for crawlers rather than readers.&lt;/p&gt;

&lt;h2&gt;Checking your own AI visibility&lt;/h2&gt;

&lt;p&gt;We built &lt;a href="https://foglift.io" rel="noopener noreferrer"&gt;Foglift&lt;/a&gt; to help with exactly this — it's a free tool that audits your website for both traditional SEO and AI search readiness (GEO/AEO scores). The scan checks structured data, content signals, citation-friendliness, and gives you a prioritized action plan.&lt;/p&gt;

&lt;p&gt;We eat our own dogfood — we run Foglift against foglift.io itself and use the recommendations to improve our own content. Our latest audit: SEO 100, GEO 100, AEO 88 (still working on that last one).&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Sources:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Aggarwal, P. et al. "GEO: Generative Engine Optimization." KDD 2024 (Princeton/IIT Delhi). &lt;a href="https://arxiv.org/abs/2311.09735" rel="noopener noreferrer"&gt;arxiv.org/abs/2311.09735&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;SE Ranking / Search Engine Journal. "ChatGPT Citation Analysis: 129K Domains." 2025.&lt;/li&gt;
&lt;li&gt;Chatoptic. "Google vs ChatGPT Visibility Study: 1,000 Queries." 2025.&lt;/li&gt;
&lt;li&gt;Seer Interactive. "ChatGPT Citation Freshness &amp;amp; Conversion Analysis." 2025.&lt;/li&gt;
&lt;li&gt;Digital Bloom. "AI Citation Patterns: 7,000+ Citations Analyzed." 2025.&lt;/li&gt;
&lt;li&gt;BrightEdge. "AI Brand Mention Distribution Study." 2025.&lt;/li&gt;
&lt;li&gt;McKinsey. "AI Discovery Survey: 1,927 Consumers." August 2025.&lt;/li&gt;
&lt;li&gt;Bain &amp;amp; Company. "AI Search User Behavior Report." 2025.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Watson is a product manager at &lt;a href="https://foglift.io" rel="noopener noreferrer"&gt;Foglift&lt;/a&gt;, building tools for AI search visibility.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>seo</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>I built a free website scanner that checks SEO + GEO (AI search readiness)</title>
      <dc:creator>Watson Foglift</dc:creator>
      <pubDate>Tue, 17 Mar 2026 04:46:26 +0000</pubDate>
      <link>https://forem.com/watsonfoglift/i-built-a-free-website-scanner-that-checks-seo-geo-ai-search-readiness-5en3</link>
      <guid>https://forem.com/watsonfoglift/i-built-a-free-website-scanner-that-checks-seo-geo-ai-search-readiness-5en3</guid>
      <description>&lt;p&gt;Hey dev community! I wanted to share a tool I've been building called &lt;strong&gt;Foglift&lt;/strong&gt; — a free website analyzer that checks both traditional SEO and something called GEO (Generative Engine Optimization).&lt;/p&gt;

&lt;h2&gt;What is GEO?&lt;/h2&gt;

&lt;p&gt;GEO is about making your website visible in AI-generated answers from ChatGPT, Perplexity, Google AI Overviews, and Claude. It's like SEO, but for AI search engines instead of Google.&lt;/p&gt;

&lt;h2&gt;What Foglift checks&lt;/h2&gt;

&lt;p&gt;Enter any URL and get scores across 5 categories in ~30 seconds:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SEO&lt;/strong&gt; — meta tags, headings, structured data, Open Graph&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GEO&lt;/strong&gt; — AI crawler access, citation formatting, FAQ schema, entity markup&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt; — Core Web Vitals, page load time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt; — HSTS, CSP, X-Frame-Options headers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accessibility&lt;/strong&gt; — WCAG compliance, color contrast, alt text&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Why I built it&lt;/h2&gt;

&lt;p&gt;Most SEO tools (Ahrefs, Semrush) cost $99-139/mo and don't check AI search readiness at all. I wanted something that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Shows both SEO and GEO scores in one scan&lt;/li&gt;
&lt;li&gt;Is free to start (no signup required)&lt;/li&gt;
&lt;li&gt;Gives actionable fixes, not just scores&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Developer features&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CLI&lt;/strong&gt;: &lt;code&gt;npx foglift scan mysite.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;REST API&lt;/strong&gt;: &lt;code&gt;GET https://foglift.io/api/v1/scan?url=...&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCP Server&lt;/strong&gt;: Works with Claude Code and Cursor — &lt;code&gt;npx foglift-mcp&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;46+ free tools&lt;/strong&gt; for SEO, security, accessibility, and developer utilities&lt;/li&gt;
&lt;/ul&gt;
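&lt;p&gt;For scripting, the REST endpoint above can be called like any GET API. Here is a minimal Python sketch that just builds the request URL; the endpoint and the url query parameter come from the list above, but the response format is not documented here, so sending the request and parsing the result are left to the reader.&lt;/p&gt;

```python
# Build the scan request URL for the Foglift REST API listed above.
# Only the endpoint path and the "url" query parameter are taken from the
# article; everything else about the API is an assumption.
from urllib.parse import urlencode

API = "https://foglift.io/api/v1/scan"

def scan_url(target: str) -> str:
    """Return the GET URL for scanning `target`, with proper escaping."""
    return f"{API}?{urlencode({'url': target})}"
```

Pair it with your HTTP client of choice in CI to fail a build when a deploy regresses your scores.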

&lt;h2&gt;Try it&lt;/h2&gt;

&lt;p&gt;👉 &lt;a href="https://foglift.io" rel="noopener noreferrer"&gt;foglift.io&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Would love feedback from the dev community. What other checks would be useful? What's your experience with AI search visibility?&lt;/p&gt;

</description>
      <category>seo</category>
      <category>webdev</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
