LiquidAI/LFM2.5-1.2B-Instruct
Text-generation model, 1.2B parameters.
Hardware Recommendations
Recommendations are tiered by VRAM: a minimum tier for basic inference and a comfortable tier for longer contexts, with headroom for larger future models.
| Tier | Price | GPU | VRAM | TDP | Capability |
|---|---|---|---|---|---|
| Minimum | $299 | NVIDIA GeForce RTX 4060 | 8 GB | 115 W | 1.2B at Q4_K_M quantization, 4K context |
| Comfortable | $599 | NVIDIA GeForce RTX 4070 | 12 GB | 200 W | 1.2B at Q5_K_M quantization, 8K+ context |
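The tier choice reduces to "pick the highest-quality quantization that fits in available VRAM." A minimal sketch of that selection, using the per-quantization VRAM figures from the table below (the quality ranking is an assumption, not from the source):

```python
# Minimum VRAM (GB) per quantization for the 1.2B model,
# taken from the VRAM requirements table.
QUANT_MIN_VRAM_GB = {"Q4_K_M": 3, "Q5_K_M": 3, "Q8": 4, "FP16": 5}

# Quality ranking, best first (assumed ordering: higher precision = better).
QUALITY_ORDER = ["FP16", "Q8", "Q5_K_M", "Q4_K_M"]

def best_quant(vram_gb):
    """Return the highest-quality quantization that fits in vram_gb, or None."""
    for quant in QUALITY_ORDER:
        if QUANT_MIN_VRAM_GB[quant] <= vram_gb:
            return quant
    return None

print(best_quant(8))    # RTX 4060 tier -> FP16
print(best_quant(3.5))  # -> Q5_K_M
```

Note the tier table recommends Q4_K_M on the 8 GB card even though FP16 fits: extra VRAM is spent on longer context and KV cache rather than weight precision, which this simple fit-check does not model.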
VRAM Requirements
| Quantization | VRAM | Use case |
|---|---|---|
| Q4_K_M | 3 GB | Basic inference, 4K context |
| Q5_K_M | 3 GB | Good quality, 8K+ context |
| Q8 | 4 GB | Near-lossless, 16K+ context |
| FP16 | 5 GB | Full precision, max quality |
Rule of thumb: VRAM ≈ params × bytes_per_param × 1.2 (overhead) + 2 GB (runtime base). Actual requirements vary by framework and context length.