#vram

"Just Add More VRAM" Is Physically Wrong: What HBM, CXL, and Unified Memory Couldn't Deliver
plasmon · Apr 14 · 4 min read
#llm #gpu #vram

Q4 KV Cache Fit 32K Context into 8GB VRAM — Only Math Broke
plasmon · Apr 8 · 8 min read
#llm #quantization #vram #localllm

I built a duty-cycle throttler for my RTX 4060 (because undervolting wasn't enough)
Yaroslav Pristupa · Apr 6 · 4 min read
#softwaredevelopment #gpu #vram #hardware

I Couldn't Build a Local LLM PC for $1,300 — Budget Tiers and the VRAM Cliffs Between Them
plasmon · Apr 4 · 6 min read
#llm #gpu #localllm #vram

Unleash Large AI Models: Extend GPU VRAM with System RAM (Nvidia Greenboost)
Umair Bilal · Mar 19 · 17 min read
#nvidia #gpu #vram #ai

Cloud LLMs vs Local Models: Can 32GB of VRAM Actually Compete with Claude Opus?
Alan West · Mar 25 · 4 min read
#localllm #claudeopus #ollama #vram