Code generation · 7B parameters

Qwen/Qwen2.5-Coder-7B-Instruct-GPTQ-Int4

720,606 downloads · 13 likes

Hardware Recommendations

Three tiers based on VRAM requirements: minimum for basic inference, comfortable for longer contexts, and headroom for larger future models.

Minimum $299

NVIDIA GeForce RTX 4060 (8GB)

8GB VRAM 115W TDP

Runs 7B at Q4_K_M quantization with 4K context.

Comfortable $599

NVIDIA GeForce RTX 4070 (12GB)

12GB VRAM 200W TDP

Runs 7B at Q5_K_M quantization with 8K+ context.

Headroom $799

Apple Mac mini (M4, 24GB)

24GB unified memory 25W TDP

Runs 7B at FP16 with 32K+ context, or comfortably handles the next model size up.

VRAM Requirements

Quantization   VRAM    Use case
Q4_K_M         7 GB    Basic inference, 4K context
Q5_K_M         8 GB    Good quality, 8K+ context
Q8_0           11 GB   Near-lossless, 16K+ context
FP16           19 GB   Full precision, max quality

Formula: params × bytes_per_param × 1.2 (overhead) + 2 GB base. Actual requirements vary by inference framework and context length.
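The formula above can be sketched as a small calculator. This is a rough estimate only; the bytes-per-parameter values below are assumed averages for each quantization format (not part of the original table), and real memory use depends on the runtime and context length.

```python
# Rough VRAM estimator for the rule of thumb above:
#   params * bytes_per_param * 1.2 overhead + 2 GB base.
# The bytes-per-parameter averages below are assumptions for illustration.

BYTES_PER_PARAM = {
    "Q4_K_M": 0.60,  # ~4.85 bits/weight on average (assumed)
    "Q5_K_M": 0.69,  # ~5.5 bits/weight (assumed)
    "Q8_0":   1.06,  # ~8.5 bits/weight (assumed)
    "FP16":   2.0,   # 16 bits/weight
}

def estimate_vram_gb(params_billions: float, quant: str) -> float:
    """Estimate VRAM in GB for a model of the given size and quantization."""
    weights_gb = params_billions * BYTES_PER_PARAM[quant]
    return weights_gb * 1.2 + 2.0  # 20% overhead plus ~2 GB base allocation

if __name__ == "__main__":
    for quant in BYTES_PER_PARAM:
        print(f"{quant:7s} ~{estimate_vram_gb(7, quant):.1f} GB")
```

For a 7B model this reproduces the table within rounding: ~7 GB at Q4_K_M up to ~19 GB at FP16.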