<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Vishal Vishwakarma</title>
    <description>The latest articles on Forem by Vishal Vishwakarma (@vishalvi).</description>
    <link>https://forem.com/vishalvi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3856438%2F5afde8bb-2f43-4652-ae4a-e8fe6d24e7d9.png</url>
      <title>Forem: Vishal Vishwakarma</title>
      <link>https://forem.com/vishalvi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/vishalvi"/>
    <language>en</language>
    <item>
      <title>How Much GPU Memory Does Your LLM Actually Need?</title>
      <dc:creator>Vishal Vishwakarma</dc:creator>
      <pubDate>Thu, 02 Apr 2026 07:13:11 +0000</pubDate>
      <link>https://forem.com/vishalvi/how-much-gpu-memory-does-your-llm-actually-need-40le</link>
      <guid>https://forem.com/vishalvi/how-much-gpu-memory-does-your-llm-actually-need-40le</guid>
      <description>&lt;p&gt;GPU memory is the binding constraint for LLM deployment. The model's parameters must reside in VRAM alongside everything the runtime needs: the key-value cache, intermediate activations, and the serving framework's own buffers. Getting this budget wrong in either direction has real consequences. Underprovisioning leads to OOM crashes under load.&lt;/p&gt;

&lt;p&gt;Overprovisioning means paying for VRAM that sits idle, and the difference between a two-GPU and four-GPU configuration is $2,000-4,000 per month.&lt;/p&gt;

&lt;h2&gt;The weight formula&lt;/h2&gt;

&lt;p&gt;Memory (GB) = Parameters (in billions) × Bytes per Parameter&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z3c8zz064cg9abrazek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z3c8zz064cg9abrazek.png" alt="Weight formula table" width="800" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These numbers cover weights only. In practice, you need an additional 20-40% for the KV cache, activations, and framework overhead.&lt;/p&gt;
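
&lt;p&gt;A quick sketch of that arithmetic in Python. The 1.3x overhead factor is just the midpoint of the 20-40% range above, not a measured constant:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Weight memory: parameters (in billions) x bytes per parameter
BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_memory_gb(params_billions, precision="fp16"):
    return params_billions * BYTES_PER_PARAM[precision]

def serving_estimate_gb(params_billions, precision="fp16", overhead=1.3):
    # overhead=1.3 assumes ~30% extra for KV cache, activations, and framework buffers
    return weight_memory_gb(params_billions, precision) * overhead

# A 70B model in FP16: 140 GB of weights, ~182 GB once overhead is included
print(weight_memory_gb(70, "fp16"), serving_estimate_gb(70, "fp16"))
&lt;/code&gt;&lt;/pre&gt;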

&lt;h2&gt;The KV cache is where teams underestimate&lt;/h2&gt;

&lt;p&gt;Model weights are the predictable part. What makes GPU sizing deceptive is the key-value cache: for each concurrent request, the model stores key and value vectors for every token in the sequence, and this cache grows linearly with context length and batch size.&lt;/p&gt;

&lt;p&gt;A 70B model's weights might fit comfortably on two A100 80GB GPUs. But add KV cache for 32K context across 8 concurrent requests and you need another 40+ GB on top of that.&lt;/p&gt;
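
&lt;p&gt;A minimal sketch of that growth, using the standard per-token formula (2 x layers x KV heads x head dimension x bytes per element). The 80-layer, 8-KV-head, 128-dimension configuration below is an assumed Llama-2-70B-style layout, used purely for illustration:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def kv_cache_gb(layers, kv_heads, head_dim, context_len, batch_size, bytes_per_elem=2):
    # The leading 2 accounts for storing both a key and a value vector per token
    per_token_bytes = 2 * layers * kv_heads * head_dim * bytes_per_elem
    return per_token_bytes * context_len * batch_size / 1e9

# Assumed 70B-class config: 80 layers, 8 KV heads (GQA), head_dim 128, FP16 cache
print(kv_cache_gb(80, 8, 128, context_len=32_768, batch_size=8))  # ~86 GB
print(kv_cache_gb(80, 8, 128, context_len=4_096, batch_size=8))   # ~11 GB
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With these assumed dimensions, the 32K, 8-request case needs roughly 86 GB of cache on top of the weights, which is exactly why a weights-only estimate is misleading.&lt;/p&gt;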

&lt;h2&gt;Quick sizing reference&lt;/h2&gt;

&lt;p&gt;Approximate requirements in FP16 with vLLM, batch size 8, 4K context:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo30jzwakzn7mm387giib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo30jzwakzn7mm387giib.png" alt="Quick sizing reference" width="765" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Quantization is the single most impactful lever. INT4 cuts memory by 75% compared to FP16, and for most production inference tasks the quality difference is negligible.&lt;/p&gt;
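
&lt;p&gt;The arithmetic behind that claim, using the same weight formula at each precision (a sketch that ignores the small dequantization buffers real INT4 kernels keep around):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Bytes per parameter at each precision, applied to a 70B-parameter model
for precision, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    weights_gb = 70 * bytes_per_param
    saving = 1 - bytes_per_param / 2.0
    print(f"{precision}: {weights_gb:.0f} GB weights ({saving:.0%} smaller than FP16)")
# fp16: 140 GB, int8: 70 GB (50% smaller), int4: 35 GB (75% smaller)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;At INT4, a 70B model's weights fit on a single 80GB card with room left for the KV cache, where FP16 needed at least two.&lt;/p&gt;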

&lt;h2&gt;Calculate it for your workload&lt;/h2&gt;

&lt;p&gt;The formulas above work for back-of-envelope estimates, but real workloads involve specific batch sizes, context distributions, and throughput targets. We built a &lt;a href="https://inferbase.ai/infrastructure/sizing" rel="noopener noreferrer"&gt;GPU sizing calculator&lt;/a&gt; that estimates VRAM, throughput, and latency using a roofline model validated against vLLM benchmarks.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://inferbase.ai/methodology" rel="noopener noreferrer"&gt;methodology&lt;/a&gt; is public if you want to verify the assumptions. The model catalog lets you filter and compare across providers on pricing, benchmarks, and capabilities.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>gpu</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
