# vram

## Posts

- "Just Add More VRAM" Is Physically Wrong — What HBM, CXL, and Unified Memory Failed to Deliver (4 min read)
- Q4 KV Cache Fit 32K Context into 8GB VRAM — Only Math Broke (8 min read)
- I built a duty-cycle throttler for my RTX 4060 (because undervolting wasn't enough) (4 min read)
- I Couldn't Build a Local LLM PC for $1,300 — Budget Tiers and the VRAM Cliffs Between Them (6 min read)
- Unleash Large AI Models: Extend GPU VRAM with System RAM (Nvidia Greenboost) (17 min read)
- Cloud LLMs vs Local Models: Can 32GB of VRAM Actually Compete with Claude Opus? (4 min read)