lmstudio-community/LFM2-24B-A2B-MLX-4bit
Text generation · 24B params
428,374 downloads · 4 likes
Hardware Recommendations
Recommendations fall into tiers by VRAM requirement: minimum for basic inference, comfortable for longer contexts, and headroom for future model sizes.
Minimum — $799
Apple Mac mini (M4, 24GB)
24 GB unified memory, 25 W TDP
Runs 24B at Q4_K_M quantization with 4K context.
Comfortable — $1,399
Apple Mac mini (M4 Pro, 24GB)
24 GB unified memory, 35 W TDP
Runs 24B at Q5_K_M quantization with 8K+ context.
VRAM Requirements
| Quantization | VRAM | Use case |
|---|---|---|
| Q4_K_M | 17 GB | Basic inference, 4K context |
| Q5_K_M | 20 GB | Good quality, 8K+ context |
| Q8 | 31 GB | Near-lossless, 16K+ context |
| FP16 | 60 GB | Full precision, max quality |
Formula: VRAM ≈ params × bytes_per_param × 1.2 + 2 GB base, where the 1.2 factor covers runtime overhead. Actual requirements vary by framework and context length.
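The formula above can be sketched in a few lines of Python. The bytes-per-parameter values for the K-quants are assumptions based on typical GGUF bit-widths (roughly 4.5 bits for Q4_K_M and 5.5 bits for Q5_K_M); the function itself is just the stated estimate, not an exact accounting.

```python
def estimate_vram_gb(params_b: float, bytes_per_param: float) -> float:
    """Rough VRAM estimate: weight bytes x 1.2 overhead + 2 GB base."""
    return params_b * bytes_per_param * 1.2 + 2.0

# Assumed effective bytes per parameter for each precision.
QUANT_BYTES = {
    "Q4_K_M": 4.5 / 8,  # ~0.56 bytes/param (assumed K-quant bit-width)
    "Q5_K_M": 5.5 / 8,  # ~0.69 bytes/param (assumed K-quant bit-width)
    "Q8": 1.0,
    "FP16": 2.0,
}

for quant, bpp in QUANT_BYTES.items():
    print(f"{quant}: ~{estimate_vram_gb(24, bpp):.0f} GB")
```

For a 24B model this reproduces the Q8 (~31 GB) and FP16 (~60 GB) rows of the table, and lands within a gigabyte or two of the K-quant rows, which is about the precision such an estimate can offer.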