GLM-4.6-REAP-268B-A32B-GPTQMODEL-W4A16

⚡ Quantization & Hardware Support

This is a 4-bit quantized (GPTQ) version of the GLM-4.6-REAP-268B-A32B model. It runs on 8× RTX 3090 (24 GB) GPUs with -tp 8 and a 200K-token context window using an fp8 KV cache.

Quantization Details

  • Framework: Quantized using GPTQModel.
  • Experimental Optimization: This model uses an experimental modification in which the entire calibration dataset was fed to each expert during quantization, improving output quality and expert stability.
  • Calibration Dataset: 1808 samples mixed from c4/en (1024), arc (300), gsm8k (300), humaneval (164), and alpaca (20).
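As a sanity check, the sample counts above can be laid out as a mixture plan; only the dataset names and counts come from this card, the helper itself is illustrative.

```python
# Calibration mixture from the card: dataset name -> number of samples.
calibration_plan = {
    "c4/en": 1024,
    "arc": 300,
    "gsm8k": 300,
    "humaneval": 164,
    "alpaca": 20,
}

def total_samples(plan):
    """Total number of calibration samples in the mixture."""
    return sum(plan.values())

print(total_samples(calibration_plan))  # 1808
```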

🚀 Quick Start / Run Command

export VLLM_ATTENTION_BACKEND="FLASHINFER"
export TORCH_CUDA_ARCH_LIST="8.6"
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export VLLM_MARLIN_USE_ATOMIC_ADD=1
export SAFETENSORS_FAST_GPU=1

vllm serve avtc/GLM-4.6-REAP-268B-A32B-GPTQMODEL-W4A16 \
    -tp 8 \
    --port 8000 \
    --host 0.0.0.0 \
    --uvicorn-log-level info \
    --trust-remote-code \
    --gpu-memory-utilization 0.92 \
    --max-num-seqs 1 \
    --dtype=float16 \
    --seed 1234 \
    --max-model-len 202752 \
    --tool-call-parser glm45 \
    --reasoning-parser glm45 \
    --enable-auto-tool-choice \
    --enable-expert-parallel \
    --enable-sleep-mode \
    --compilation-config '{"level": 3, "cudagraph_capture_sizes": [1]}' \
    --kv-cache-dtype fp8_e5m2

Recommended Sampling Parameters:

{
    "top_p": 0.95,
    "temperature": 1.0,
    "repetition_penalty": 1.0,
    "top_k": 40,
    "min_p": 0.0
}
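A minimal client sketch using these sampling parameters against the server started above, assuming the standard OpenAI-compatible /v1/chat/completions endpoint that vLLM exposes; the prompt and helper names are illustrative.

```python
import json
import urllib.request

# Recommended sampling parameters from this card.
SAMPLING = {
    "top_p": 0.95,
    "temperature": 1.0,
    "repetition_penalty": 1.0,
    "top_k": 40,
    "min_p": 0.0,
}

def build_request(prompt, model="avtc/GLM-4.6-REAP-268B-A32B-GPTQMODEL-W4A16"):
    """Assemble a chat-completions payload with the recommended sampling."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        **SAMPLING,
    }

def send(payload, url="http://localhost:8000/v1/chat/completions"):
    """POST the payload to the running vLLM server (requires the server above)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```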

Example Output

Prompt:

Make an html animation of fishes in an aquarium. The aquarium is pretty, the fishes vary in colors and sizes and swim realistically. You can left click to place a piece of fish food in aquarium. Each fish chases a food piece closest to it, trying to eat it. Once there are no more food pieces, fishes resume swimming as usual.

Result: The model generated a working artifact using Kilo Code in Code mode with temperature = 0.6. View the result on JSFiddle.

Acknowledgments

Special thanks to the GPTQModel team for the quantization tools and support.


✨ Original Model Highlights

𓌳 REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression

GLM-4.6-REAP-268B-A32B is a memory-efficient compressed variant of GLM-4.6 that maintains near-identical performance while being 25% lighter.

This model was created using REAP (Router-weighted Expert Activation Pruning), a novel expert pruning method that selectively removes redundant experts while preserving the router's independent control over remaining experts. Key features include:

  • Near-Lossless Performance: Maintains almost identical accuracy on code generation, agentic coding, and function calling tasks compared to the full 355B model
  • 25% Memory Reduction: Compressed from 355B to 268B parameters.
  • Preserved Capabilities: Retains all core functionalities including code generation, agentic workflows, repository-scale understanding, and function calling
  • Optimized for Real-World Use: Particularly effective for resource-constrained environments, local deployments, and academic research

📋 Model Overview

GLM-4.6-REAP-268B-A32B has the following specifications:

  • Base Model: GLM-4.6
  • Compression Method: REAP (Router-weighted Expert Activation Pruning)
  • Quantization: 4-bit (W4A16) via GPTQModel
  • Compression Ratio: 25% expert pruning
  • Type: Sparse Mixture-of-Experts (SMoE) Causal Language Model
  • Number of Parameters: 268B total, 32B activated per token
  • Number of Layers: 92
  • Number of Attention Heads (GQA): 96 for Q and 8 for KV
  • Number of Experts: 120 (uniformly pruned from 160)
  • Number of Activated Experts: 8 per token
  • Context Length: 202,752 tokens
  • License: MIT
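A rough back-of-the-envelope check, assuming 4-bit weights and ignoring scale/zero-point overhead, that the W4A16 checkpoint fits the 8× 24 GB setup described above:

```python
def w4_weight_gib(total_params: float, bits: int = 4) -> float:
    """Approximate weight memory in GiB for a given bit width."""
    return total_params * bits / 8 / 2**30

per_model = w4_weight_gib(268e9)   # all weights at 4 bits
per_gpu = per_model / 8            # tensor parallel over 8 GPUs

print(round(per_model, 1), round(per_gpu, 1))  # 124.8 15.6
```

At roughly 15.6 GiB of weights per 24 GB card, the remaining headroom goes to activations and the fp8 KV cache, which is why the long context fits.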

📊 Evaluations

Published evaluations refer to the original unquantized model.

For more details on the evaluation setup, refer to the REAP arXiv preprint.


🧩 Model Creation (REAP Method)

The base checkpoint was created by applying the REAP (Router-weighted Expert Activation Pruning) method uniformly across all Mixture-of-Experts (MoE) blocks of GLM-4.6, with a 25% pruning rate.

How REAP Works

REAP selects experts to prune based on a novel saliency criterion that considers both:

  • Router gate values: How frequently and strongly the router activates each expert
  • Expert activation norms: The magnitude of each expert's output contributions

This dual consideration ensures that experts contributing minimally to the layer's output are pruned, while preserving those that play critical roles in the model's computations.
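An illustrative sketch of the saliency idea under stated assumptions (this is not the paper's exact formula): score each expert by the average of router gate value times expert output norm, then prune the lowest-scoring 25%.

```python
import numpy as np

def reap_saliency(gates, expert_outs):
    """gates: (tokens, experts) router weights;
    expert_outs: (tokens, experts, hidden) expert outputs.
    Returns a per-expert saliency score."""
    norms = np.linalg.norm(expert_outs, axis=-1)   # (tokens, experts)
    return (gates * norms).mean(axis=0)            # (experts,)

def experts_to_prune(saliency, prune_ratio=0.25):
    """Indices of the lowest-saliency experts at the given ratio."""
    k = int(len(saliency) * prune_ratio)
    return np.argsort(saliency)[:k]

# Toy example: 64 tokens, 8 experts, hidden size 16.
rng = np.random.default_rng(0)
gates = rng.random((64, 8))
outs = rng.standard_normal((64, 8, 16))
print(experts_to_prune(reap_saliency(gates, outs)))  # 2 of 8 experts dropped
```

An expert that the router rarely selects, or whose outputs are near zero, gets a low score under both factors and is pruned first.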

Key Advantages

  • One-Shot Compression: No fine-tuning is required after pruning; the model is immediately ready for deployment
  • Preserved Router Control: Unlike expert merging methods, REAP maintains the router's independent, input-dependent control over remaining experts, avoiding "functional subspace collapse"
  • Generative Task Superiority: REAP significantly outperforms expert merging approaches on generative benchmarks (code generation, creative writing, mathematical reasoning) while maintaining competitive performance on discriminative tasks

Calibration

The model was calibrated using a diverse mixture of domain-specific datasets.

📚 For more details, refer to the following resources:


βš–οΈ License

This model is derived from zai-org/GLM-4.6 and distributed under the MIT license.


🧾 Citation

If you use this checkpoint, please cite the REAP paper:

@article{lasby-reap,
  title={REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression},
  author={Lasby, Mike and Lazarevich, Ivan and Sinnadurai, Nish and Lie, Sean and Ioannou, Yani and Thangarasa, Vithursan},
  journal={arXiv preprint arXiv:2510.13999},
  year={2025}
}