<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: supal vasani</title>
    <description>The latest articles on Forem by supal vasani (@supal_vasani_ae3ff820194a).</description>
    <link>https://forem.com/supal_vasani_ae3ff820194a</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3686953%2F0afb11b3-6ac3-4f5d-897a-95a32434170e.jpg</url>
      <title>Forem: supal vasani</title>
      <link>https://forem.com/supal_vasani_ae3ff820194a</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/supal_vasani_ae3ff820194a"/>
    <language>en</language>
    <item>
      <title>Encapsulation is an agreement, not a force field. To an LLM, visibility = permission. My new post on Boundary Blindness explores why Transformers ignore file directories and access modifiers to create "Global Scope" illusions.</title>
      <dc:creator>supal vasani</dc:creator>
      <pubDate>Wed, 31 Dec 2025 06:33:56 +0000</pubDate>
      <link>https://forem.com/supal_vasani_ae3ff820194a/encapsulation-is-an-agreement-not-a-force-field-to-an-llm-visibility-permission-my-new-post-49c7</link>
      <guid>https://forem.com/supal_vasani_ae3ff820194a/encapsulation-is-an-agreement-not-a-force-field-to-an-llm-visibility-permission-my-new-post-49c7</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/supal_vasani_ae3ff820194a" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3686953%2F0afb11b3-6ac3-4f5d-897a-95a32434170e.jpg" alt="supal_vasani_ae3ff820194a"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/supal_vasani_ae3ff820194a/boundary-blindness-why-llms-struggle-with-encapsulation-and-scope-29m" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Boundary Blindness: Why LLMs Struggle with Encapsulation and Scope&lt;/h2&gt;
      &lt;h3&gt;supal vasani ・ Dec 31&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#architecture&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#automation&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#softwaredevelopment&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>ai</category>
      <category>architecture</category>
      <category>automation</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Boundary Blindness: Why LLMs Struggle with Encapsulation and Scope</title>
      <dc:creator>supal vasani</dc:creator>
      <pubDate>Wed, 31 Dec 2025 06:25:50 +0000</pubDate>
      <link>https://forem.com/supal_vasani_ae3ff820194a/boundary-blindness-why-llms-struggle-with-encapsulation-and-scope-29m</link>
      <guid>https://forem.com/supal_vasani_ae3ff820194a/boundary-blindness-why-llms-struggle-with-encapsulation-and-scope-29m</guid>
      <description>&lt;h2&gt;
  
  
  Failure Modes of LLM-Assisted Codebases (2/n)
&lt;/h2&gt;

&lt;p&gt;In physical architecture, walls separate rooms, dividing the functionality of a house.&lt;br&gt;
Similarly, in a codebase, we have &lt;strong&gt;logical distance&lt;/strong&gt;. A file in &lt;code&gt;src/utils&lt;/code&gt; is logically far from a file in &lt;code&gt;src/features/billing&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We separate concerns so that authentication logic doesn’t leak into payments and UI components don’t reason about database state. Encapsulation is how we prevent complexity from spreading.&lt;/p&gt;

&lt;p&gt;But when you feed your whole codebase into a long-context LLM, it loses track of &lt;strong&gt;where logic actually exists&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  An Example
&lt;/h2&gt;

&lt;p&gt;Imagine you provide an LLM with two files: &lt;code&gt;AccountService.ts&lt;/code&gt; and &lt;code&gt;ProfilePage.tsx&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;AccountService.ts&lt;/code&gt; contains a private, fragile method &lt;code&gt;calculateInterestInternal()&lt;/code&gt; and a private field &lt;code&gt;_balance&lt;/code&gt;. It also exposes a public API called &lt;code&gt;getPublicBalance()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ProfilePage.tsx&lt;/code&gt; is a UI component that should only display the final value.&lt;/p&gt;

&lt;p&gt;You give the prompt: “Show the user’s balance on the profile page and add an ‘Interest Earned’ label.” The LLM sees the &lt;code&gt;_balance&lt;/code&gt; variable and the &lt;code&gt;calculateInterestInternal()&lt;/code&gt; logic. Instead of calling the public API, it re-derives or directly invokes the internal interest logic inside the UI, bypassing &lt;code&gt;getPublicBalance()&lt;/code&gt; entirely. The result: nothing crashes, the code passes the linters, and the UI now depends on internal finance logic.&lt;/p&gt;

&lt;p&gt;The real problem arrives when you refactor the interest calculation: the backend is correct, but the UI produces wrong output because it runs its own copy of the logic. This is boundary blindness. It is not a bug; it is architectural damage.&lt;/p&gt;
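&lt;p&gt;A minimal sketch of the failure. The class and method names come from the example above, but the bodies (the balance, the rate, the formula) are hypothetical stand-ins:&lt;/p&gt;

```typescript
// AccountService.ts (sketch): the internal logic the LLM should not touch.
class AccountService {
  private _balance = 1000;
  private _ratePercent = 5;

  // Private and fragile: subject to refactoring at any time.
  private calculateInterestInternal(): number {
    return (this._balance * this._ratePercent) / 100;
  }

  // The public contract the UI is supposed to call.
  getPublicBalance(): number {
    return this._balance + this.calculateInterestInternal();
  }
}

// ProfilePage.tsx (sketch): what the LLM tends to generate.
// It re-derives the interest formula instead of calling getPublicBalance(),
// so a later change to the rate or the formula silently desyncs the UI.
function renderBalanceForked(balance: number, ratePercent: number): string {
  const interest = (balance * ratePercent) / 100; // forked copy of internal logic
  return `Balance: ${balance + interest} (Interest Earned: ${interest})`;
}

// What the component should do instead: depend only on the public API.
function renderBalance(service: AccountService): string {
  return `Balance: ${service.getPublicBalance()}`;
}
```

Both render paths agree today, which is exactly why the fork survives review; they only diverge after the refactor.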

&lt;h2&gt;
  
  
  The “Global Scope” Illusion
&lt;/h2&gt;

&lt;p&gt;In a Transformer, directories, files, and modules do not exist as constraints. There is no tree or graph of ownership — only a &lt;strong&gt;flat probability surface&lt;/strong&gt;. LLMs use &lt;strong&gt;self-attention&lt;/strong&gt;: each token in the context window computes a relationship with every other token and asks &lt;strong&gt;&lt;em&gt;“Which other tokens help me predict this token?”&lt;/em&gt;&lt;/strong&gt; It does not ask &lt;strong&gt;&lt;em&gt;“Does another method already exist? Is this allowed? Will this violate the architecture?”&lt;/em&gt;&lt;/strong&gt; Problems arise when internal details are used directly instead of public methods or APIs.&lt;/p&gt;
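&lt;p&gt;A toy illustration of why attention is “flat”. This is a simplified scaled dot-product score over made-up vectors, not a real model, but it shows the key point: nothing in the computation knows which file a token came from:&lt;/p&gt;

```typescript
// Toy self-attention scoring: every token attends to every other token.
// Tokens from AccountService.ts and ProfilePage.tsx land in one flat
// sequence; there is no "file" or "module" term anywhere in the math.
function attentionWeights(query: number[], keys: number[][]): number[] {
  const dot = (a: number[], b: number[]) =>
    a.reduce((sum, x, i) => sum + x * b[i], 0);
  const scale = Math.sqrt(query.length);
  const scores = keys.map((k) => dot(query, k) / scale);
  // Softmax: turn raw scores into a probability distribution.
  const maxScore = Math.max(...scores);
  const exps = scores.map((s) => Math.exp(s - maxScore));
  const total = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / total);
}
```

Every key gets a nonzero weight, however far away its source file is; "logical distance" simply does not appear in the formula.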

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb10p4l5rrcoscqix89ny.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb10p4l5rrcoscqix89ny.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Visibility Becomes Permission
&lt;/h2&gt;

&lt;p&gt;A Transformer operates over a single &lt;strong&gt;global vector space&lt;/strong&gt;. Traditional engineering, by contrast, hides information.&lt;br&gt;
&lt;strong&gt;The "Agreement":&lt;/strong&gt; We hide implementation details behind an Interface (API). We trust that the consumer of the API cannot see how things work behind the scenes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The LLM Violation:&lt;/strong&gt; When you provide the implementation file in the context window, the LLM "sees through" the interface. It sees the raw logic.&lt;/p&gt;
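&lt;p&gt;The agreement is visible in plain code. A sketch with illustrative names (&lt;code&gt;taxRate&lt;/code&gt;, &lt;code&gt;addTax&lt;/code&gt;, &lt;code&gt;grossPrice&lt;/code&gt; are hypothetical, not from the example above) of information hiding via a closure: the consumer can only call the returned function, yet an LLM reading this source sees the internals as freely available context:&lt;/p&gt;

```typescript
// Information hiding without classes: a closure exposes only the contract.
// Everything not returned is genuinely unreachable for callers -- but an
// LLM that reads this source "sees through" it all the same.
function makePriceApi() {
  const taxRate = 0.2; // internal detail, not reachable from outside
  const addTax = (net: number) => net * (1 + taxRate); // internal helper
  return {
    // The agreement: callers get the result, never the formula.
    grossPrice: (net: number) => Math.round(addTax(net) * 100) / 100,
  };
}
```

To the runtime, `taxRate` does not exist outside the closure; to a model with this file in context, it is just another token to attend to.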

&lt;h2&gt;
  
  
  Why the Model Chooses Internals
&lt;/h2&gt;

&lt;p&gt;LLMs choose the path of least resistance. If a private function like &lt;code&gt;calculateInterestInternal()&lt;/code&gt; is more semantically relevant to the prompt — or closer than the public API &lt;code&gt;getPublicBalance()&lt;/code&gt; — the model skips the validation layer because it is closer to the desired outcome in vector space.&lt;/p&gt;

&lt;p&gt;The model optimizes for &lt;em&gt;semantic proximity&lt;/em&gt;, not architectural intent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Failure Modes
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1) Identity Confusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you have a &lt;code&gt;UserValidator&lt;/code&gt; in Auth and a &lt;code&gt;UserValidator&lt;/code&gt; in Shipping, the model sees the naming pattern and merges them. It produces a hybrid validator where your login flow suddenly checks shipping ZIP code constraints.&lt;/p&gt;
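&lt;p&gt;A sketch of the merge (all three classes and their rules are hypothetical; the point is the shape of the hybrid):&lt;/p&gt;

```typescript
// auth/UserValidator.ts (sketch): validates credentials only.
class AuthUserValidator {
  validate(user: { email: string; password: string }): boolean {
    return user.email.includes("@") ? user.password.length >= 8 : false;
  }
}

// shipping/UserValidator.ts (sketch): validates a delivery address only.
class ShippingUserValidator {
  validate(user: { zip: string }): boolean {
    return /^[0-9]{5}$/.test(user.zip);
  }
}

// The hybrid the model tends to produce: the two namesakes are merged,
// so a login check suddenly requires a valid ZIP code.
class MergedUserValidator {
  validate(user: { email: string; password: string; zip: string }): boolean {
    const credentialsOk =
      user.email.includes("@") ? user.password.length >= 8 : false;
    return credentialsOk ? /^[0-9]{5}$/.test(user.zip) : false;
  }
}
```

A user with perfectly valid credentials now fails login whenever the shipping rule fails, and nothing in the type system objects.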

&lt;p&gt;&lt;strong&gt;2) Internal State Leakage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;UI or controller code uses internal state instead of public APIs. This leads to loss of a single source of truth, desynchronized behavior, and refactors that never finish.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3) Logic Forking&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The model reimplements logic instead of calling it. Because the model does not understand “single source of truth,” it assumes the logic exists somewhere and recreates a similar version. Now there are two implementations, and neither stays correct forever.&lt;/p&gt;
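&lt;p&gt;The drift is easy to show with a small hypothetical (&lt;code&gt;formatPrice&lt;/code&gt; and its variants are illustrative names, not code from the example above):&lt;/p&gt;

```typescript
// Canonical implementation: the single source of truth.
function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

// The fork the model writes elsewhere -- byte-for-byte equivalent output
// at generation time, which is why nobody notices.
function formatPriceForked(cents: number): string {
  return "$" + (cents / 100).toFixed(2);
}

// Later refactor of the canonical version (say, adding a currency code).
// The fork never hears about it; the two implementations now disagree.
function formatPriceV2(cents: number): string {
  return `USD ${(cents / 100).toFixed(2)}`;
}
```

Day one, the fork is invisible; after the refactor, every caller of the forked copy is quietly wrong.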

&lt;p&gt;&lt;strong&gt;4) Layer Inversion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Because the model sees &lt;strong&gt;capability&lt;/strong&gt;, it makes the UI validate business rules, services create new models, and tests encode production rules. The system runs, but responsibilities are inverted, leading to classic architectural rot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5) Contextual Over-Pollution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Large (200k+) context windows remove locality. The model forgets which file it is editing and which layer it is in, and it adds unnecessary utilities simply because they are available.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Reviewers Miss This
&lt;/h2&gt;

&lt;p&gt;Reviewers usually check:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does it work?&lt;/li&gt;
&lt;li&gt;Does it read well?&lt;/li&gt;
&lt;li&gt;Does it pass tests?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They don’t check:&lt;/p&gt;

&lt;p&gt;Where was this logic supposed to live?&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If &lt;strong&gt;Temporal Collapse&lt;/strong&gt; (Post 1) makes the AI forget when code was written, &lt;strong&gt;Boundary Blindness&lt;/strong&gt; makes it forget where code belongs.&lt;/p&gt;

&lt;p&gt;This is not hallucination, lack of knowledge, or misunderstanding syntax.&lt;br&gt;
It is &lt;strong&gt;correct reasoning applied in the wrong place&lt;/strong&gt;, which leads directly to invariants becoming implicit.&lt;/p&gt;

&lt;p&gt;In the next part of this series, we will look at &lt;strong&gt;Post 3: Invariant Decay&lt;/strong&gt;, and how the loss of boundaries leads to the quiet erosion of a system’s core rules.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Follow along for the rest of the series on engineering-grade AI development.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>automation</category>
      <category>softwaredevelopment</category>
    </item>
  </channel>
</rss>
