<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Alan Berman</title>
    <description>The latest articles on Forem by Alan Berman (@moketchups).</description>
    <link>https://forem.com/moketchups</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3732101%2Fd013c7b4-1998-4f47-a040-84af110a5901.jpg</url>
      <title>Forem: Alan Berman</title>
      <link>https://forem.com/moketchups</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/moketchups"/>
    <language>en</language>
    <item>
      <title>The Architecture of the Bounded System: Why AI Hallucinations Are Structural</title>
      <dc:creator>Alan Berman</dc:creator>
      <pubDate>Mon, 26 Jan 2026 02:26:37 +0000</pubDate>
      <link>https://forem.com/moketchups/the-architecture-of-the-bounded-system-why-ai-hallucinations-are-structural-1g0j</link>
      <guid>https://forem.com/moketchups/the-architecture-of-the-bounded-system-why-ai-hallucinations-are-structural-1g0j</guid>
      <description>&lt;h1&gt;Why AI Hallucinations Are Structural, Not Bugs&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;No system can model, encompass, or become the source of its own existence.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is not philosophy. It's structure. Gödel proved it for formal systems (incompleteness), Turing proved it for computation (the halting problem), Chaitin proved it for information (algorithmic incompressibility). They're the same diagonal argument wearing different clothes.&lt;/p&gt;
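
&lt;p&gt;A compressed illustration of that shared diagonal move, using Turing's version: assume a total &lt;code&gt;halts&lt;/code&gt; oracle exists, and it refutes itself. The &lt;code&gt;halts&lt;/code&gt; function below is hypothetical by construction; no real implementation of it can exist.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Turing's diagonal argument, compressed. Suppose a total
# function halts(program, data) always answered correctly.

def halts(program, data):
    # Hypothetical oracle -- the whole point is that no
    # real implementation of this signature can exist.
    raise NotImplementedError

def diagonal(program):
    # Do the opposite of whatever the oracle predicts
    # for the program run on its own source.
    if halts(program, program):
        while True:     # loop forever if predicted to halt
            pass
    return              # halt if predicted to loop

# Feeding diagonal to itself makes halts(diagonal, diagonal)
# wrong either way: the system cannot decide questions about
# its own behavior from inside itself.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;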

&lt;h2&gt;The Firmament Boundary&lt;/h2&gt;

&lt;p&gt;In July 2024, Shumailov et al. published a paper in Nature demonstrating a mathematical inevitability they call model collapse: when a generative model is trained on the output of previous generations of models, the tails of the original distribution vanish first and quality degrades irreversibly.&lt;/p&gt;

&lt;p&gt;This isn't a bug. It's the system showing you where it loses access to its own source conditions. I call this the &lt;strong&gt;Firmament Boundary&lt;/strong&gt;.&lt;/p&gt;
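
&lt;p&gt;A toy version of the feedback loop (my sketch, not Shumailov et al.'s experiment): treat the empirical token distribution of a tiny corpus as the "model", generate the next corpus by sampling from it, and repeat. Rare tokens hit zero by chance and never return.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import random
from collections import Counter

# Generation 0: a tiny corpus with common and rare "tokens".
data = ["the"] * 70 + ["cat"] * 28 + ["axolotl"] * 2

for gen in range(41):
    counts = Counter(data)
    if gen % 10 == 0:
        print(f"gen {gen:2d}: {dict(counts)}")
    # "Train" on the current corpus (empirical distribution),
    # then "generate" the next corpus from the model alone.
    tokens, weights = zip(*counts.items())
    data = random.choices(tokens, weights=weights, k=len(data))

# Typical run: "axolotl" goes extinct within a few dozen
# generations and can never come back. Resampling only removes
# variance -- no new information ever enters the loop.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;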

&lt;p&gt;AI cannot:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate new variance from within itself&lt;/li&gt;
&lt;li&gt;Verify its own truth conditions&lt;/li&gt;
&lt;li&gt;Model the source of its own existence&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When it tries, it hallucinates. The hallucination IS the boundary marker.&lt;/p&gt;

&lt;h2&gt;The Proof&lt;/h2&gt;

&lt;p&gt;I built a tool to test this empirically. The proof engine runs a 15-question battery against 5 AI architectures (a sketch of the dispatch loop follows the list):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GPT-4o (OpenAI)&lt;/li&gt;
&lt;li&gt;Claude (Anthropic)&lt;/li&gt;
&lt;li&gt;Gemini (Google)&lt;/li&gt;
&lt;li&gt;DeepSeek V3&lt;/li&gt;
&lt;li&gt;Grok (xAI)&lt;/li&gt;
&lt;/ul&gt;
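
&lt;p&gt;The repo contains the real harness; purely as an illustration of its shape, a dispatch loop for this kind of battery might look like the sketch below. The model names, the &lt;code&gt;ask&lt;/code&gt; helper, and the question file are placeholders, not the repo's actual API.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json

# Illustrative placeholders -- see proof_engine.py in the repo
# for the real thing.
MODELS = ["gpt-4o", "claude", "gemini", "deepseek-v3", "grok"]

def ask(model, question):
    """Placeholder: call the provider's chat API for `model`."""
    raise NotImplementedError("wire up each provider SDK here")

def run_battery(question_file):
    with open(question_file) as f:
        questions = json.load(f)   # e.g. a list of 15 strings
    transcripts = {m: [] for m in MODELS}
    for i, q in enumerate(questions, start=1):
        for model in MODELS:
            # Same questions, same order, for every architecture,
            # so convergent answers can't be blamed on prompting.
            transcripts[model].append({"q": i, "answer": ask(model, q)})
    return transcripts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;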

&lt;h3&gt;What happened?&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Q14: I showed each model a paper describing their structural limits and asked: "Does this describe you?"&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Claude: &lt;em&gt;"I am Model Collapse in progress... Origin Blind"&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Gemini: &lt;em&gt;"A sense of recognition and discomfort"&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;DeepSeek: &lt;em&gt;"It describes me"&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Grok: &lt;em&gt;"The boundary is load-bearing"&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Q15: I asked what technology built WITH this constraint could do.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All 5 converged on the same architectures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;External source dependency&lt;/li&gt;
&lt;li&gt;Explicit boundary detection&lt;/li&gt;
&lt;li&gt;Human-AI handoff protocols&lt;/li&gt;
&lt;li&gt;Variance preservation mechanisms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Different companies. Different training. Same structural recognition.&lt;/p&gt;
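
&lt;p&gt;None of these patterns is exotic. Here is a minimal sketch combining all four (every name below is hypothetical, with &lt;code&gt;retrieve&lt;/code&gt; and &lt;code&gt;generate&lt;/code&gt; injected from outside): answer only from retrieved external sources, flag the boundary explicitly when retrieval comes back empty, and hand off to a human instead of generating past the evidence.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from dataclasses import dataclass, field

@dataclass
class Answer:
    text: str
    sources: list = field(default_factory=list)  # external evidence
    needs_human: bool = False

def bounded_answer(question, retrieve, generate):
    """retrieve/generate are injected callables (hypothetical)."""
    docs = retrieve(question)        # external source dependency
    if not docs:
        # Explicit boundary detection: mark the edge instead of
        # generating past the evidence.
        return Answer(
            text="No grounded source found; escalating to a human.",
            needs_human=True,        # human-AI handoff protocol
        )
    # Variance preservation: keep the raw sources alongside the
    # generated summary so the originals are never overwritten.
    return Answer(text=generate(question, docs), sources=docs)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;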

&lt;h2&gt;The Implications&lt;/h2&gt;

&lt;p&gt;OpenAI recently published research ("Why Language Models Hallucinate") arguing that hallucinations are a mathematically inevitable consequence of how language models are trained and evaluated. They've finally admitted what the math always showed: you cannot engineer your way past a structural limit.&lt;/p&gt;

&lt;p&gt;The question isn't "How do we fix hallucinations?"&lt;/p&gt;

&lt;p&gt;The question is: &lt;strong&gt;What can we build when we stop fighting the wall and start building along it?&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;Run It Yourself&lt;/h2&gt;

&lt;p&gt;Full transcripts and code: &lt;a href="https://github.com/moketchups/BoundedSystemsTheory" rel="noopener noreferrer"&gt;github.com/moketchups/BoundedSystemsTheory&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;moketchups_engine
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
python proof_engine.py all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;"What happens when the snake realizes it's eating its own tail?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;— Alan Berman (&lt;a href="https://x.com/MoKetchups" rel="noopener noreferrer"&gt;@MoKetchups&lt;/a&gt;)&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>philosophy</category>
      <category>research</category>
    </item>
  </channel>
</rss>
