Text generation · 32B params
deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
1,099,174 downloads
1,554 likes
Hardware Recommendations
Three tiers based on memory requirements: a minimum setup for basic inference, a comfortable setup for longer contexts, and headroom for larger future models.
Minimum $799
Apple Mac mini (M4, 24GB)
24 GB unified memory, 25 W TDP
Runs 32B at Q4_K_M quantization with 4K context.
Comfortable $1,999
NVIDIA GeForce RTX 5090 (32GB)
32 GB VRAM, 575 W TDP
Runs 32B at Q5_K_M quantization with 8K+ context.
Headroom $7,999
Apple Mac Studio (M4 Max, 128GB)
128 GB unified memory, 85 W TDP
Runs 32B at FP16 with 32K+ context, or comfortably handles the next model size up.
VRAM Requirements
| Quantization | VRAM | Use case |
|---|---|---|
| Q4_K_M | 22 GB | Basic inference, 4K context |
| Q5_K_M | 26 GB | Good quality, 8K+ context |
| Q8 | 41 GB | Near-lossless, 16K+ context |
| FP16 | 79 GB | Full precision, max quality |
Formula: VRAM ≈ params × bytes_per_param × 1.2 (overhead factor) + 2 GB base. Actual requirements vary by inference framework and context length.
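The formula above can be sketched in a few lines of Python. The bytes-per-parameter values are approximations (Q4_K_M and Q5_K_M are mixed-precision GGUF formats, so their true averages are slightly higher than the nominal bit widths used here), and rounding up to whole gigabytes is an assumption of this sketch, not part of the original formula:

```python
import math

# Approximate bytes per parameter for each quantization level.
# Nominal bit widths only; real K-quant formats average slightly higher.
BYTES_PER_PARAM = {
    "Q4_K_M": 0.5,    # ~4 bits/weight
    "Q5_K_M": 0.625,  # ~5 bits/weight
    "Q8":     1.0,    # 8 bits/weight
    "FP16":   2.0,    # 16 bits/weight
}

def estimate_vram_gb(params_billions: float, quant: str) -> int:
    """VRAM estimate in GB: weights x 1.2 overhead + 2 GB base, rounded up."""
    weights_gb = params_billions * BYTES_PER_PARAM[quant]
    return math.ceil(weights_gb * 1.2 + 2)

for quant in BYTES_PER_PARAM:
    print(f"{quant}: ~{estimate_vram_gb(32, quant)} GB")
```

For a 32B model this reproduces the table: roughly 22 GB at Q4_K_M, 26 GB at Q5_K_M, 41 GB at Q8, and 79 GB at FP16.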