<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: D. Ceabron Williams</title>
    <description>The latest articles on Forem by D. Ceabron Williams (@ceabron).</description>
    <link>https://forem.com/ceabron</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3915022%2Fdbb558c7-3ba1-4cd4-af51-5c47b10db982.png</url>
      <title>Forem: D. Ceabron Williams</title>
      <link>https://forem.com/ceabron</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ceabron"/>
    <language>en</language>
    <item>
      <title>5 Red Flags That a Source Is Unreliable (and How to Check in 60 Seconds)</title>
      <dc:creator>D. Ceabron Williams</dc:creator>
      <pubDate>Fri, 08 May 2026 07:01:08 +0000</pubDate>
      <link>https://forem.com/ceabron/5-red-flags-that-a-source-is-unreliable-and-how-to-check-in-60-seconds-4g5k</link>
      <guid>https://forem.com/ceabron/5-red-flags-that-a-source-is-unreliable-and-how-to-check-in-60-seconds-4g5k</guid>
<description>&lt;p&gt;We've all been there. You find an article that sounds authoritative. The writing is confident. The claims are specific. But something feels off. And by the time you've verified it, you've already shared it with two people.&lt;/p&gt;

&lt;p&gt;The problem is real: &lt;strong&gt;78% of students globally can't reliably distinguish credible sources from fabrications.&lt;/strong&gt; Worse, AI is making this problem exponentially harder. ChatGPT hallucinations that sound like expert analysis. "Expert" blogs written entirely by language models. Wikipedia pages edited by people with axes to grind.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;🔍 Want to skip the manual work?&lt;/strong&gt; &lt;a href="https://sabialibrarian.com" rel="noopener noreferrer"&gt;Sabia&lt;/a&gt; gives you an instant credibility analysis — author verification, citation count, language analysis, and a credibility score — all in under 60 seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://sabialibrarian.com" rel="noopener noreferrer"&gt;👉 Try Sabia Free at sabialibrarian.com →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The good news? &lt;strong&gt;You don't need a librarian to spot the fakes.&lt;/strong&gt; You need to know what to look for.&lt;/p&gt;

&lt;p&gt;Here are five red flags that should make you pause before trusting a source — and a 60-second check that takes the guesswork out.&lt;/p&gt;




&lt;h2&gt;Red Flag #1: No Author Byline or Credentials&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The problem:&lt;/strong&gt; Anonymous content has no accountability. If something is wrong, who are you holding responsible?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to look for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No author name listed at all&lt;/li&gt;
&lt;li&gt;Author name with zero professional history (no LinkedIn, no previous publications, no "About" page)&lt;/li&gt;
&lt;li&gt;Credentials that sound impressive but are vague ("Digital Strategist," "Content Creator," "AI Expert")&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Real example:&lt;/strong&gt; A viral article claimed a new AI breakthrough, cited zero sources, and carried no author byline. When someone dug into it, the domain turned out to have been registered two weeks earlier under a privacy proxy. It was marketing hype masquerading as news.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Hover over the author name. Does a real profile exist? Has this person published elsewhere? Are they a domain expert or just someone with a compelling opinion?&lt;/p&gt;




&lt;h2&gt;Red Flag #2: No Publication Date (or Very Old)&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The problem:&lt;/strong&gt; Outdated information is everywhere. AI tools, regulations, and research change monthly. A "guide to social media marketing" from 2019 is functionally useless.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to look for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No visible publication or update date&lt;/li&gt;
&lt;li&gt;A date from 5+ years ago (without a recent update notice)&lt;/li&gt;
&lt;li&gt;A date that contradicts the content ("We're excited to announce this new technology" from 2015)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Real example:&lt;/strong&gt; A guide to API authentication "best practices" published in 2009 recommended approaches that are now known security vulnerabilities. Hundreds of developers had bookmarked and shared it because it ranked well on Google.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Scroll to the bottom of the page. Most legitimate publications timestamp their content. If it's not there, it's a red flag. If it's old, check the date on related sources — are they consistently dated, or did this one slip through without updates?&lt;/p&gt;
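&lt;p&gt;If you're triaging many links, the date check can even be scripted. A minimal sketch (the five-year threshold is this list's rule of thumb, not a standard):&lt;/p&gt;

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

FIVE_YEARS = 5 * 365  # days; rule-of-thumb threshold, not a standard

def staleness_flag(pub_date_header):
    """Flag a source whose RFC 2822 date header is missing or five-plus years old."""
    if not pub_date_header:
        return "red flag: no publication date"
    published = parsedate_to_datetime(pub_date_header)
    age_days = (datetime.now(timezone.utc) - published).days
    if age_days >= FIVE_YEARS:
        return f"red flag: published {age_days} days ago"
    return None
```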




&lt;h2&gt;Red Flag #3: Emotional or Sensationalist Language&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The problem:&lt;/strong&gt; Emotions bypass critical thinking. Headlines like "THIS ONE WEIRD TRICK" or "SHOCKING TRUTH" are designed to short-circuit your skepticism, not to inform you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to look for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ALL CAPS phrases&lt;/li&gt;
&lt;li&gt;Excessive exclamation marks (more than one per paragraph)&lt;/li&gt;
&lt;li&gt;Words like "shocking," "exposed," "they don't want you to know," "finally revealed"&lt;/li&gt;
&lt;li&gt;Loaded language instead of neutral description ("devastating impact" vs. "15% decrease")&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Real example:&lt;/strong&gt; An article claiming a supplement "destroys cancer cells" (emotional, implies guarantee) vs. a peer-reviewed study that "shows compound X inhibited tumor growth in laboratory conditions" (specific, provisional, honest about the limitations). Same research. Totally different credibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Rewrite the claim in neutral language. If you can't do it without losing the point, the source is probably trying to manipulate you.&lt;/p&gt;
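&lt;p&gt;The patterns above are mechanical enough to count. A rough sketch (the phrase list is illustrative; a low score doesn't prove credibility):&lt;/p&gt;

```python
import re

LOADED_PHRASES = [
    "shocking", "exposed", "they don't want you to know",
    "finally revealed", "one weird trick",
]

def sensationalism_signals(text):
    """Count the red-flag language patterns from the list above."""
    signals = []
    lower = text.lower()
    for phrase in LOADED_PHRASES:
        if phrase in lower:
            signals.append("loaded phrase: " + phrase)
    # a run of two or more all-caps words
    if re.search(r"\b[A-Z]{3,}(?:\s+[A-Z]{3,})+", text):
        signals.append("all-caps phrase")
    # more than one exclamation mark in a single paragraph
    for para in text.split("\n\n"):
        if para.count("!") > 1:
            signals.append("excessive exclamation marks")
            break
    return signals
```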




&lt;h2&gt;Red Flag #4: No Citations or References&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The problem:&lt;/strong&gt; Good sources cite their sources. Bad sources hope you don't notice they're making it up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to look for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Claims with zero supporting links or citations&lt;/li&gt;
&lt;li&gt;"Studies show..." without naming the study or linking to it&lt;/li&gt;
&lt;li&gt;Quotes without attribution&lt;/li&gt;
&lt;li&gt;Statistics without a source ("90% of people agree...")&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Real example:&lt;/strong&gt; A blog post claimed that "recent research proves remote work reduces productivity by 40%." No citation. It turned out the author had invented the number. The post got 100K shares because it confirmed what people already believed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Google a specific claim or statistic from the article. If you can't find the source the author references, it probably doesn't exist.&lt;/p&gt;
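&lt;p&gt;A crude version of this check can be automated: look for claim-shaped phrases in text that contains no links at all. The patterns here are illustrative, not a real detector:&lt;/p&gt;

```python
import re

CLAIM_PATTERNS = [
    r"studies show",
    r"research (?:proves|shows)",
    r"\d+% of",
]

def unsupported_claims(text):
    """Flag claim-like phrases when the text contains no links at all.
    Crude on purpose: a real checker would pair each claim with a nearby citation."""
    lower = text.lower()
    if "http" in lower:
        return []
    flagged = []
    for pattern in CLAIM_PATTERNS:
        flagged.extend(m.group(0) for m in re.finditer(pattern, lower))
    return flagged
```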




&lt;h2&gt;Red Flag #5: Unfamiliar Domain with No "About" Page&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The problem:&lt;/strong&gt; Legitimate organizations (news outlets, research institutions, publications) have established domains and clear organizational information. Spammy sites hide who they are.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to look for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A domain name that looks like a major publication but isn't quite right ("nytimes-news.com" instead of "nytimes.com")&lt;/li&gt;
&lt;li&gt;New domains (registered within the last year)&lt;/li&gt;
&lt;li&gt;No "About" page explaining the publication's mission or team&lt;/li&gt;
&lt;li&gt;No contact information or editorial guidelines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Real example:&lt;/strong&gt; A site called "Medical Science Daily" (sounds official, right?) published articles claiming unproven treatments for serious diseases. The domain was registered to a company that sells supplements. No "About" page, no editorial team listed. Just articles designed to drive traffic to sales pages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Check the domain registration date (use WHOIS lookup) and the site's "About" page. Legitimate publications have clear organizational identity.&lt;/p&gt;
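&lt;p&gt;The lookalike-domain check can be sketched in a few lines; the allowlist here is illustrative, and a serious version would use a curated list of publications:&lt;/p&gt;

```python
KNOWN_DOMAINS = {"nytimes.com", "nature.com", "bbc.co.uk"}  # illustrative, not exhaustive

def lookalike_flag(domain):
    """Flag a domain that embeds a well-known publication's name without being it."""
    domain = domain.lower()
    for known in KNOWN_DOMAINS:
        brand = known.split(".")[0]  # e.g. "nytimes"
        is_real = domain == known or domain.endswith("." + known)
        if brand in domain and not is_real:
            return "possible lookalike of " + known
    return None
```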




&lt;h2&gt;The 60-Second Check&lt;/h2&gt;

&lt;p&gt;All of this takes time — time you probably don't have. That's why I built &lt;strong&gt;Sabia&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Paste a URL into Sabia and get an instant credibility analysis: &lt;strong&gt;author verification, publication date, citation count, language analysis, and a credibility score&lt;/strong&gt; — all in under a minute.&lt;/p&gt;

&lt;p&gt;It's like having a librarian in your browser.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://sabialibrarian.com" rel="noopener noreferrer"&gt;🚀 Try Sabia Free → sabialibrarian.com&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;Final Thought&lt;/h2&gt;

&lt;p&gt;You're going to encounter thousands of sources in your lifetime. You can't verify each one manually. But you can train yourself to spot the patterns that separate credible sources from noise.&lt;/p&gt;

&lt;p&gt;And when you need to verify fast? &lt;strong&gt;&lt;a href="https://sabialibrarian.com" rel="noopener noreferrer"&gt;That's what Sabia is for →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Share this with someone who needs it.&lt;/strong&gt; Information literacy is a team sport.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By D. Ceabron Williams, M.L. — Librarian, information literacy researcher, and builder of &lt;a href="https://sabialibrarian.com" rel="noopener noreferrer"&gt;source credibility tools&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>informationliteracy</category>
    </item>
    <item>
      <title>A librarian's guide to evaluating sources in the age of AI</title>
      <dc:creator>D. Ceabron Williams</dc:creator>
      <pubDate>Wed, 06 May 2026 19:51:02 +0000</pubDate>
      <link>https://forem.com/ceabron/a-librarians-guide-to-evaluating-sources-in-the-age-of-ai-3i1a</link>
      <guid>https://forem.com/ceabron/a-librarians-guide-to-evaluating-sources-in-the-age-of-ai-3i1a</guid>
      <description>&lt;p&gt;&lt;strong&gt;The problem isn't AI. It's us.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every day, developers ask ChatGPT, Claude, and Perplexity for code samples, architecture patterns, and technical explanations. We copy the answer. We ship it. We move on.&lt;/p&gt;

&lt;p&gt;But here's what we don't ask: &lt;em&gt;Where did that come from?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;AI generates answers that &lt;strong&gt;sound authoritative&lt;/strong&gt;—fluent, confident, well-structured. It does not tell you where the information originated. And when you ask for citations, it confidently generates ones that don't exist.&lt;/p&gt;

&lt;p&gt;This isn't a bug in AI. It's a feature of how language models work. They predict the next most likely word based on patterns in training data. When they don't have a fact, they guess. And they guess so convincingly that &lt;strong&gt;MIT research in 2025 found they're 34% more confident when lying than when telling the truth.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;The stakes are real.&lt;/h2&gt;

&lt;p&gt;In 2025, Deloitte submitted a $440,000 report to the Australian government—complete with fabricated academic sources. In November 2025, a $1.6 million health plan for Newfoundland &amp;amp; Labrador was discovered to contain at least four citations to non-existent research papers. In September 2025, a lawyer in San Francisco was sanctioned by a federal judge for submitting AI-hallucinated case citations to the court.&lt;/p&gt;

&lt;p&gt;Over &lt;strong&gt;700 legal cases&lt;/strong&gt; in 2025 alone involved AI-generated hallucinated content. In academic publishing, NeurIPS 2025 accepted 4,841 papers—and GPTZero identified at least &lt;strong&gt;100+ hallucinated citations across 53 papers&lt;/strong&gt;, despite rigorous peer review.&lt;/p&gt;

&lt;p&gt;For developers: A hallucination in your architecture recommendation doesn't get you sued. But it does get copied into production, into tutorials, into the next person's codebase. The technical debt compounds.&lt;/p&gt;

&lt;h2&gt;You already know how to solve this. You just don't realize it.&lt;/h2&gt;

&lt;p&gt;Librarians have been evaluating sources for centuries. Long before Google, before citation indexes, before the internet itself, they built frameworks to determine: &lt;em&gt;Is this source trustworthy? Where did this come from? Who benefits if I believe it?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;These frameworks are still the gold standard for information evaluation. And they work perfectly for AI-generated content.&lt;/p&gt;




&lt;h2&gt;The Librarian's Framework: CRAAP&lt;/h2&gt;

&lt;p&gt;The most widely taught evaluation method in libraries is &lt;strong&gt;CRAAP&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Currency&lt;/strong&gt; — When was this published or last updated? Is it current for my use?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Relevance&lt;/strong&gt; — Does it actually address my question?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authority&lt;/strong&gt; — Who created this? What are their credentials?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accuracy&lt;/strong&gt; — Can I verify the claims? Are there citations? Can I cross-check them?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Purpose&lt;/strong&gt; — Why does this exist? Who benefits from me believing it?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you ask AI for a code sample, you're asking it to be a source. Apply CRAAP:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Currency?&lt;/strong&gt; AI training data has a knowledge cutoff. If you ask ChatGPT about a library update from last month, you're asking it to guess.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Relevance?&lt;/strong&gt; AI often answers the question you asked, not the question you need answered. It optimizes for plausibility, not precision.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authority?&lt;/strong&gt; An AI has no credentials, no affiliation, no reputation on the line. It's predicting words. When authority matters—cryptographic best practices, HIPAA compliance, security-critical algorithms—you need a source that can be wrong &lt;em&gt;and suffer consequences&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accuracy?&lt;/strong&gt; A Columbia Journalism Review analysis found ChatGPT hallucinated citations &lt;strong&gt;67% of the time&lt;/strong&gt;. Grok-3 hallucinated &lt;strong&gt;94% of the time&lt;/strong&gt; when asked to identify the original source of news excerpts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purpose?&lt;/strong&gt; AI has no purpose beyond the next token. It's not trying to help you or mislead you. It's generating statistically likely text. That neutrality doesn't make it reliable.&lt;/p&gt;
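&lt;p&gt;One way to make this habitual is to keep the five questions as a literal checklist. A minimal sketch (the wording is a paraphrase of the criteria above):&lt;/p&gt;

```python
CRAAP_CHECKS = {
    "currency":  "Is the answer inside the model's knowledge window and current for my use?",
    "relevance": "Does it answer the question I actually need answered?",
    "authority": "Can I trace the claim to a source with a reputation at stake?",
    "accuracy":  "Did I verify the citation exists and says what the answer claims?",
    "purpose":   "Is this recommended because it is better, or because it is common?",
}

def craap_review(verified):
    """Given which criteria you have verified (name: True/False),
    return the questions still open before trusting the answer."""
    return {name: q for name, q in CRAAP_CHECKS.items() if not verified.get(name)}
```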




&lt;h2&gt;Three Real Hallucinations (and What They Cost)&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Example 1: The Fabricated Legal Citation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In September 2025, attorney Katherine Cervantes submitted a brief to U.S. District Court citing a case that was completely invented. The judge sanctioned her—and later sanctioned her supervising partner for insufficient oversight of AI use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For developers:&lt;/strong&gt; If AI recommends a library, verify it exists on npm. Run &lt;code&gt;npm view &amp;lt;library&amp;gt;&lt;/code&gt;. Check GitHub. Look at the commit history. A hallucinated library recommendation won't get you sued, but it will get copy-pasted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 2: The Government Report with Fake Sources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Deloitte's 2025 report to the Australian government included several invented academic references. A $440,000 contract now under review—and scrutiny on every other AI-generated deliverable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For developers:&lt;/strong&gt; If you use AI to write documentation, architecture decisions, or threat models—verify every external claim. Don't assume the AI knows the difference between "standard practice" and "thing I hallucinated."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 3: The Predatory Journal Flooded with AI Hallucinations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In 2025–2026, lower-tier academic journals published hundreds of papers with AI-generated citations and fabricated data summaries. Many passed peer review. Why? Reviewers didn't have tools to detect hallucinations at scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For developers:&lt;/strong&gt; Your code reviews catch logic errors. You need a different check for AI-generated components: Does every external claim have a verifiable source?&lt;/p&gt;




&lt;h2&gt;How to Evaluate AI Sources: The Practical Workflow&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Assume it's wrong until proven right.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When AI gives you an answer, don't ask "Does this look right?" Ask "Can I verify this independently?" Hallucinations &lt;strong&gt;look right&lt;/strong&gt;. They're fluent, confident, well-structured. Your job is to override that instinct.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Check the citation (the ACCURACY check).&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If AI provides a source, verify it exists:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy the exact claim into Google Scholar&lt;/li&gt;
&lt;li&gt;Search for the exact paper title&lt;/li&gt;
&lt;li&gt;If it doesn't exist, it's hallucinated&lt;/li&gt;
&lt;li&gt;If it exists but says something &lt;em&gt;different&lt;/em&gt;, it's misattributed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A study tested eight AI assistants on identifying original news sources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Perplexity: 37% hallucination rate&lt;/li&gt;
&lt;li&gt;ChatGPT: 67% hallucination rate&lt;/li&gt;
&lt;li&gt;Grok-3: 94% hallucination rate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;None expressed uncertainty despite being wrong most of the time.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Use lateral reading (the AUTHORITY check).&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open a new browser tab and search for the topic independently. Cross-reference multiple sources. Read three independent sources and any disagreement jumps out immediately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Check the purpose (the PURPOSE check).&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ask: &lt;em&gt;Who might have trained the model on this information? What assumptions are baked into the training data?&lt;/em&gt; If the AI recommends a popular framework, check if that's because it's genuinely better—or because it's more common in training data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Verify currency (the CURRENCY check).&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Always ask the AI: &lt;em&gt;What's your knowledge cutoff date?&lt;/em&gt; Then assume knowledge from the last 3–6 months is unreliable.&lt;/p&gt;




&lt;h2&gt;Where AI Actually Fails (and When to Trust It More)&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;It fails on:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Recent events or updates (knowledge cutoff)&lt;/li&gt;
&lt;li&gt;Citations and attribution (fabrication by design)&lt;/li&gt;
&lt;li&gt;Niche or specialized domains (sparse training data)&lt;/li&gt;
&lt;li&gt;Things that only exist in paywalled sources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;You can trust it more on:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Writing and editing (LLMs are good at language)&lt;/li&gt;
&lt;li&gt;Brainstorming and ideation (generating options, not facts)&lt;/li&gt;
&lt;li&gt;Summarization of content &lt;em&gt;you provide&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Refactoring and code style&lt;/li&gt;
&lt;li&gt;Explaining concepts you already partially understand&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The difference: &lt;strong&gt;Generative tasks are safer than retrieval tasks.&lt;/strong&gt; Generate code from your spec. Don't retrieve "best practices" without verification.&lt;/p&gt;




&lt;h2&gt;The Tool That Does This Automatically&lt;/h2&gt;

&lt;p&gt;A librarian evaluates a source by looking at who created it, when, where, and for what purpose. They spot inconsistencies. They verify citations. They integrate multiple signals into a judgment call.&lt;/p&gt;

&lt;p&gt;That's what &lt;strong&gt;&lt;a href="https://sabialibrarian.com" rel="noopener noreferrer"&gt;Sabia&lt;/a&gt;&lt;/strong&gt; does.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sabia evaluates any URL in 30–60 seconds&lt;/strong&gt; using librarian-grade criteria:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Authorship:&lt;/strong&gt; Who wrote this? What are their credentials?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Publication:&lt;/strong&gt; Where did this come from? Is it peer-reviewed, editorial, self-published?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Currency:&lt;/strong&gt; When was it published?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accuracy:&lt;/strong&gt; Are claims supported by citations?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Objectivity:&lt;/strong&gt; Does the source have a clear bias or agenda?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Feed Sabia a URL that an AI recommended—a tutorial, a research paper, a documentation page—and it tells you: Is this trustworthy? Who should trust this? What's the catch? Can I cite this?&lt;/p&gt;

&lt;p&gt;It's what a librarian would do in real time. Except Sabia works while you code.&lt;/p&gt;




&lt;h2&gt;Why This Matters Beyond Not Getting Sued&lt;/h2&gt;

&lt;p&gt;A hallucinated architecture recommendation gets copied into production. The next developer inherits it. They don't know it came from an AI, so they treat it as established practice. Months later, when performance degrades or security issues arise, the investigation starts with "This is how we've always done it."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You wouldn't ship code without code review. Don't ship AI-generated information without information review.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;The Framework You Already Have&lt;/h2&gt;

&lt;p&gt;You know how to do this. You do it every day in code review:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Authority:&lt;/strong&gt; Does this PR come from someone who understands the system?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accuracy:&lt;/strong&gt; Are the changes correct?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Currency:&lt;/strong&gt; Is this solution current, or are we using an outdated pattern?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Relevance:&lt;/strong&gt; Does this solve the actual problem?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; What's the intent here? Is there a hidden cost?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the exact questions a librarian asks about sources. Apply that same rigor to AI-generated sources. It's not a new skill—it's a skill you already have, applied to a new problem.&lt;/p&gt;




&lt;h2&gt;Start Here&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Next time you ask AI a question:&lt;/strong&gt; Screenshot the answer and the source.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify one claim:&lt;/strong&gt; Use Google Scholar. Does the cited paper exist?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-check laterally:&lt;/strong&gt; Search for the topic independently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keep a scorecard:&lt;/strong&gt; How often does AI get this right?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Sabia for high-stakes sources:&lt;/strong&gt; Try it at &lt;a href="https://sabialibrarian.com" rel="noopener noreferrer"&gt;sabialibrarian.com&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Information literacy in the age of AI isn't about distrusting AI. It's about &lt;strong&gt;trusting yourself&lt;/strong&gt; to be the filter AI can't be.&lt;/p&gt;

</description>
      <category>informationliteracy</category>
      <category>ai</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
