<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Dr. Furqan Ullah</title>
    <description>The latest articles on Forem by Dr. Furqan Ullah (@dr_furqanullah_8819ecd9).</description>
    <link>https://forem.com/dr_furqanullah_8819ecd9</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3609005%2Faf680010-5935-44ba-8d0c-a4b2f7fcb8b7.jpg</url>
      <title>Forem: Dr. Furqan Ullah</title>
      <link>https://forem.com/dr_furqanullah_8819ecd9</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/dr_furqanullah_8819ecd9"/>
    <language>en</language>
    <item>
      <title>GitHub Copilot Model Context Sizes (Nov 2025)</title>
      <dc:creator>Dr. Furqan Ullah</dc:creator>
      <pubDate>Thu, 13 Nov 2025 03:14:55 +0000</pubDate>
      <link>https://forem.com/dr_furqanullah_8819ecd9/github-copilot-model-context-sizes-nov-2025-3nif</link>
      <guid>https://forem.com/dr_furqanullah_8819ecd9/github-copilot-model-context-sizes-nov-2025-3nif</guid>
      <description>&lt;p&gt;Main Models&lt;br&gt;
GPT-4.1 — 128,000 tokens&lt;br&gt;
GPT-5 mini — 128,000 tokens&lt;br&gt;
GPT-5 — 128,000 tokens&lt;br&gt;
GPT-4o — 128,000 tokens&lt;br&gt;
o3-mini — 200,000 tokens&lt;br&gt;
Claude Sonnet 3.5 — 90,000 tokens&lt;br&gt;
Claude Sonnet 3.7 — 200,000 tokens&lt;br&gt;
Claude Sonnet 4 — 128,000 tokens&lt;br&gt;
Claude Sonnet 4.5 — 200,000 tokens (standard) / 1,000,000 tokens (beta)&lt;br&gt;
Gemini 2.0 Flash — 1,000,000 tokens&lt;br&gt;
Gemini 2.5 Pro — 128,000 tokens&lt;br&gt;
o4-mini — 128,000 tokens (picker) / 200,000 tokens (full version)&lt;br&gt;
Grok Code Fast 1 — 128,000 tokens&lt;/p&gt;

&lt;p&gt;Smaller Models&lt;br&gt;
GPT-3.5 Turbo — 16,384 tokens&lt;br&gt;
GPT-4 — 32,768 tokens&lt;br&gt;
GPT-4 Turbo — 128,000 tokens&lt;br&gt;
GPT-4o mini — 128,000 tokens&lt;/p&gt;

&lt;p&gt;💡 How big is that, really?&lt;/p&gt;

&lt;p&gt;Let’s take Claude Sonnet 4.5 with a 200,000-token context window.&lt;br&gt;
If a typical C++ or JavaScript file has ~2,000 lines, and each line averages ≈ 15 tokens (including code, spaces, and comments), then one file comes to roughly 30,000 tokens.&lt;/p&gt;

&lt;p&gt;That means Claude Sonnet 4.5 can process around 6 full files of 2,000 lines each at once. If you’re using the 1,000,000-token extended version, that jumps to ~33 files.&lt;/p&gt;
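&lt;p&gt;The estimate above can be sketched in a few lines of Python. This is a minimal sketch: the 2,000-line file size and the 15-tokens-per-line average are the assumptions from this post, not measured values.&lt;/p&gt;

```python
# Back-of-envelope estimate of how many whole source files
# fit inside a model's context window.

def files_per_context(context_tokens: int,
                      lines_per_file: int = 2_000,
                      tokens_per_line: int = 15) -> int:
    """Number of whole files of this size that fit in the window."""
    tokens_per_file = lines_per_file * tokens_per_line  # 2,000 * 15 = 30,000
    return context_tokens // tokens_per_file

print(files_per_context(200_000))    # standard Claude Sonnet 4.5 window -> 6
print(files_per_context(1_000_000))  # extended (beta) window -> 33
```

&lt;p&gt;Real tokenizers vary by model and by language, so treat these numbers as an order-of-magnitude guide rather than an exact capacity.&lt;/p&gt;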

&lt;p&gt;🧠 Conclusion&lt;/p&gt;

&lt;p&gt;So the next time your AI assistant suddenly “forgets” what was said earlier or mixes up details halfway through your project, remember: it isn’t confused, it’s simply “lost in the middle.”&lt;br&gt;
Once the context window fills up, older information is dropped to make room for new input.&lt;/p&gt;

&lt;p&gt;👉 Now you know why the “lost in the middle” problem happens — because even AI can only remember so much at once.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>githubcopilot</category>
      <category>chatgpt</category>
      <category>claude</category>
    </item>
  </channel>
</rss>
