RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8
Text generation · 8B parameters
Hardware Recommendations
Recommendations are tiered by VRAM: a minimum tier for basic inference and a comfortable tier for longer contexts and higher-quality quantizations, with any extra VRAM serving as headroom for future model sizes.
Minimum $299
NVIDIA GeForce RTX 4060 (8GB)
8GB VRAM 115W TDP
Runs the 8B model at Q4_K_M quantization with 4K context.
Comfortable $599
NVIDIA GeForce RTX 4070 (12GB)
12GB VRAM 200W TDP
Runs the 8B model at Q5_K_M quantization with 8K+ context.
VRAM Requirements
| Quantization | VRAM | Use case |
|---|---|---|
| Q4_K_M | 7 GB | Basic inference, 4K context |
| Q5_K_M | 8 GB | Good quality, 8K+ context |
| Q8 | 12 GB | Near-lossless, 16K+ context |
| FP16 | 22 GB | Full precision, max quality |
Rule of thumb: VRAM (GB) ≈ params (billions) × bytes_per_param × 1.2 (runtime overhead) + 2 GB base (KV cache and buffers). Actual requirements vary by framework and context length.
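As a quick sanity check, the rule of thumb can be coded directly. The bytes-per-parameter figures for the quantized formats below are my approximations (roughly 4.5–8 effective bits per weight), not values from the table.

```python
def estimate_vram_gb(params_b: float, bytes_per_param: float) -> float:
    """Rule of thumb: weights x 1.2 runtime overhead + 2 GB base."""
    return params_b * bytes_per_param * 1.2 + 2

# FP16 stores 2 bytes per parameter:
print(round(estimate_vram_gb(8, 2.0), 1))  # -> 21.2, which the table rounds to 22 GB

# Assumed effective bytes per parameter for quantized formats (approximate):
for name, bpp in [("Q4_K_M", 0.56), ("Q5_K_M", 0.64), ("Q8", 1.0)]:
    print(name, round(estimate_vram_gb(8, bpp), 1))
```

With these assumed sizes the estimates land within about half a gigabyte of the table's figures, which is as close as a rule of thumb gets.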