NVFP4 (4-bit floating point) quantization of Qwen/Qwen3-Coder-30B-A3B-Instruct, optimized for NVIDIA Blackwell GPUs.
| Property | Value |
|---|---|
| Base Model | Qwen/Qwen3-Coder-30B-A3B-Instruct |
| Architecture | Qwen3MoeForCausalLM (Mixture-of-Experts) |
| Total Parameters | 30B (3B active per token) |
| Experts | 128 per layer |
| Quantization | NVFP4 (4-bit NV floating point) |
| KV Cache | FP8 (8-bit float) |
| Original Precision | BF16 |
| Quantized Size | ~57 GB |
| Quantization Tool | NVIDIA ModelOpt 0.41.0 |
| Calibration | 512 samples (synthetic) |
| Hardware | NVIDIA DGX Spark GB10 (Blackwell) |
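To make the NVFP4 scheme in the table above concrete, here is a minimal, illustrative sketch of block-scaled 4-bit float quantization. This is not ModelOpt's implementation: NVFP4 stores E2M1 (4-bit) values with one FP8 scale per 16-element block, while this sketch keeps the block scale as an exact Python float for simplicity.

```python
# Representable E2M1 (4-bit float) magnitudes: 1 sign bit, 2 exponent
# bits, 1 mantissa bit.
E2M1 = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def nvfp4_quantize(values, block=16):
    """Quantize a flat list of floats with one scale per block.

    Simplification: the per-block scale is kept as a full-precision
    float; real NVFP4 stores it as FP8 (E4M3) plus a tensor-level scale.
    """
    out = []
    for i in range(0, len(values), block):
        chunk = values[i:i + block]
        amax = max(abs(v) for v in chunk)
        scale = (amax / 6.0) if amax else 1.0  # map block max onto E2M1 max (6.0)
        for v in chunk:
            # Round the scaled magnitude to the nearest representable value.
            mag = min(E2M1, key=lambda m: abs(abs(v) / scale - m))
            out.append(mag * scale if v >= 0 else -mag * scale)
    return out
```

With only 8 magnitude levels per block, values land on a coarse grid anchored to each block's maximum, which is why sensitive layers such as `lm_head` and the MoE routers are typically left in higher precision.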
Quantized with `nvidia-modelopt` using `NVFP4_DEFAULT_CFG`. The `lm_head` and all MoE router/gate layers (48 total) are excluded and remain in original precision to preserve routing quality. The checkpoint was exported via `save_pretrained` with manual `quantization_config` injection, because ModelOpt 0.41.0's native export does not yet support `Qwen3MoeExperts`.

Serve with vLLM:

```bash
vllm serve kleinpanic93/Qwen3-Coder-30B-A3B-Instruct-NVFP4 \
  --quantization modelopt \
  --trust-remote-code \
  --max-model-len 32768
```
Or load directly with Transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "kleinpanic93/Qwen3-Coder-30B-A3B-Instruct-NVFP4",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "kleinpanic93/Qwen3-Coder-30B-A3B-Instruct-NVFP4"
)
```
Quantization metadata recorded during export:

```json
{
  "source_model": "Qwen/Qwen3-Coder-30B-A3B-Instruct",
  "quantization": "NVFP4",
  "tool": "nvidia-modelopt 0.41.0",
  "export_method": "save_pretrained_manual",
  "calib_size": 512,
  "calib_dataset": "synthetic-random",
  "hardware": "NVIDIA GB10 (Blackwell)",
  "elapsed_sec": 472
}
```
The checkpoint uses the `save_pretrained` fallback rather than ModelOpt's native HF checkpoint exporter, since `Qwen3MoeExperts` is not yet in ModelOpt 0.41.0's export allowlist. The quantization math is identical; only the serialization path differs.

This model inherits the Apache 2.0 license from the base Qwen3-Coder-30B-A3B-Instruct model.