phaedawg committed · verified · commit 77682a9 · 1 parent: af8a928

Create README.md

Files changed (1): README.md (+123 -0)
---
language:
- en
library_name: vllm
pipeline_tag: text-generation
tags:
- text-generation
- conversational
- moe
- awq
- w8a16
- group-size-32
- compressed-tensors
- quantized
base_model: zai-org/GLM-4.7-Flash
base_model_relation: quantized
quantized_by: TheHouseOfTheDude
license: other
---

# GLM-4.7-Flash_AWQ — **Quantized** (AWQ · W8A16_GS32 · vLLM nightly + Transformers 5.0)

This repository provides an **AWQ-quantized** build of **GLM-4.7-Flash**, repackaged for **vLLM** using the **compressed-tensors** runtime layout.

> **Why this quant is different (MoE-aware calibration)**
>
> - During calibration we **activate all experts** inside each MoE block (not just the top-k chosen by the router).
> - This captures **worst-case activations** across the entire mixture, producing **more robust scales** with lower drift when rarely used experts fire at inference time.
> - The quant script explicitly **does not ignore shared experts**, which fixes the smoothing issues AWQ typically hits on MoE models.
>
> **Runtime requirements:**
> • **vLLM nightly** build (MoE + GLM-Flash path) **and** **Transformers 5.0**.
> • `trust_remote_code` must be enabled.

---

## Revisions & Branches

> The **`main`** branch is a landing page (model card + links). The runnable quant lives under:

- **W8A16_GS32** — **Weight INT8**, **Activation 16-bit**, **Group Size 32** (highest fidelity among W8A16 variants)

**Quick link:**
- https://huggingface.co/TheHouseOfTheDude/GLM-4.7-Flash_AWQ/tree/W8A16_GS32

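To fetch that branch programmatically rather than through the web UI, `huggingface_hub` can download by revision. A minimal sketch; the local directory name is illustrative:

```python
from huggingface_hub import snapshot_download

# Pull the W8A16_GS32 branch (revision) of the quantized repo.
local_path = snapshot_download(
    repo_id="TheHouseOfTheDude/GLM-4.7-Flash_AWQ",
    revision="W8A16_GS32",                      # the runnable quant lives on this branch
    local_dir="GLM-4.7-Flash_AWQ-W8A16_GS32",   # illustrative destination
)
print(local_path)
```
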
---

## What’s inside

- Sharded **quantized** weights (`*.safetensors`) + index (`model.safetensors.index.json`)
- `config.json` with **compressed-tensors** metadata (`quantization_config`, `weight_format`, etc.)
- Tokenizer artifacts (`tokenizer.json`, `tokenizer.model`, merges/vocab as applicable)

> This package targets **vLLM** (compressed-tensors). Loading directly with vanilla 🤗 `from_pretrained` is not supported.

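For offline (non-server) use, the weights are meant to go through vLLM's Python API instead of plain Transformers. A minimal sketch, assuming the W8A16_GS32 branch was downloaded to a local directory as above; the engine arguments mirror the serve command further below and may shift between nightly builds:

```python
from vllm import LLM, SamplingParams

# Point vLLM at the downloaded snapshot; the compressed-tensors quantization
# is detected from config.json automatically.
llm = LLM(
    model="GLM-4.7-Flash_AWQ-W8A16_GS32",   # local path from the download sketch above
    trust_remote_code=True,
    tensor_parallel_size=2,
    enable_expert_parallel=True,
    max_model_len=80896,
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.chat([{"role": "user", "content": "Give me a one-line summary of AWQ."}], params)
print(outputs[0].outputs[0].text)
```
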
---

## Quantization & calibration details (from the provided script)

**Method / scheme**
- **AWQ** (weight-only) via `llmcompressor.oneshot` with an **AWQModifier** targeting **Linear** layers (a recipe sketch follows below).
- **W8A16_GS32:** INT8 weights (`num_bits=8`, `symmetric=True`), **group strategy** with **`group_size=32`**; activations remain **FP16/BF16** at runtime.
- **Ignored layers:** a short, **script-defined ignore list**; importantly, **shared experts are _not_ ignored**, to avoid smoothing errors.

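The sketch below shows what such an llm-compressor recipe can look like. It is not the author's script: the ignore list is a placeholder, the calibration set is reduced to a single source for brevity (the real 60/40 mix is described next), and the exact `AWQModifier` arguments may differ between llm-compressor releases.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.awq import AWQModifier

MODEL_ID = "zai-org/GLM-4.7-Flash"
NUM_SAMPLES, MAX_LEN = 512, 2048

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)

# Single-source calibration set for brevity; the card above describes a 60/40 mix.
ds = load_dataset("neuralmagic/LLM_compression_calibration", split="train")
ds = ds.shuffle(seed=42).select(range(NUM_SAMPLES))
ds = ds.map(lambda ex: {"text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)})
ds = ds.map(
    lambda ex: tokenizer(ex["text"], max_length=MAX_LEN, truncation=True,
                         padding=False, add_special_tokens=False),
    remove_columns=ds.column_names,
)

# INT8 weights, symmetric, one scale per 32 consecutive weights; activations stay 16-bit.
recipe = AWQModifier(
    targets=["Linear"],
    ignore=["lm_head"],                 # placeholder; the author's script defines its own list
    config_groups={
        "group_0": {
            "targets": ["Linear"],
            "weights": {
                "num_bits": 8,
                "type": "int",
                "symmetric": True,
                "strategy": "group",
                "group_size": 32,
            },
        }
    },
)

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_LEN,
    num_calibration_samples=NUM_SAMPLES,
)
```
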
**MoE handling**
- Each `Glm4MoeLiteMoE` module is swapped at calibration time for a **Calibration** wrapper that sets `calibrate_all_experts=True`, ensuring **every expert is exercised** while activation statistics are collected (an illustrative wrapper is sketched below).

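The wrapper below only illustrates that idea and is not the author's class: it assumes the MoE block exposes an `.experts` list, runs every expert on the full hidden states during calibration so the quantization observers see their activations, and still returns the block's ordinary routed output.

```python
import torch
import torch.nn as nn

class CalibrateAllExpertsWrapper(nn.Module):
    """Illustrative calibration shim for an MoE block (module layout is assumed)."""

    def __init__(self, moe_block: nn.Module, calibrate_all_experts: bool = True):
        super().__init__()
        self.moe_block = moe_block
        self.calibrate_all_experts = calibrate_all_experts

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        if self.calibrate_all_experts:
            # Push the full batch through every expert so observers collect
            # statistics even for experts the router rarely selects.
            for expert in self.moe_block.experts:      # assumes `.experts` exists
                _ = expert(hidden_states)
        # The wrapper's actual output is still the normal routed result.
        return self.moe_block(hidden_states)

def wrap_moe_blocks(model: nn.Module) -> None:
    """Replace every GLM MoE block with the calibration wrapper before oneshot."""
    targets = [
        (name, module)
        for name, module in model.named_modules()
        if type(module).__name__ == "Glm4MoeLiteMoE"
    ]
    for name, module in targets:
        parent_name, _, child_name = name.rpartition(".")
        parent = model.get_submodule(parent_name) if parent_name else model
        setattr(parent, child_name, CalibrateAllExpertsWrapper(module))
```
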
**Datasets & sampling**
- **Total calibration samples:** **512**
- **Max sequence length:** **2048** tokens
- **Data mix (60/40):**
  - **Neural Magic**: `neuralmagic/LLM_compression_calibration` (chat-style `messages` rendered with `apply_chat_template`)
  - **Rombo**: `Rombo-Org/Optimized_Reasoning` (instructions + optional inputs/outputs stitched into plain text)
- Both sources are tokenized **without padding**, **truncated to 2048** tokens, with `add_special_tokens=False` (a preprocessing sketch follows below).

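A minimal sketch of that preprocessing is below. The 60/40 split and tokenizer settings come from the card above; the Rombo column names (`instruction`, `input`, `output`) and the sampling order are assumptions.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

NUM_SAMPLES, MAX_LEN = 512, 2048
tokenizer = AutoTokenizer.from_pretrained("zai-org/GLM-4.7-Flash", trust_remote_code=True)

nm = load_dataset("neuralmagic/LLM_compression_calibration", split="train").shuffle(seed=42)
rombo = load_dataset("Rombo-Org/Optimized_Reasoning", split="train").shuffle(seed=42)

def render_chat(example):
    # Chat-style rows: render the `messages` list with the model's chat template.
    return tokenizer.apply_chat_template(example["messages"], tokenize=False)

def render_instruction(example):
    # Instruction rows: stitch instruction / optional input / output into plain text.
    parts = [example.get("instruction", ""), example.get("input", ""), example.get("output", "")]
    return "\n\n".join(p for p in parts if p)

n_chat = int(NUM_SAMPLES * 0.6)                                               # 60% Neural Magic
texts = [render_chat(nm[i]) for i in range(n_chat)]
texts += [render_instruction(rombo[i]) for i in range(NUM_SAMPLES - n_chat)]  # 40% Rombo

# No padding, truncate to 2048 tokens, no special tokens added.
calibration_samples = [
    tokenizer(t, truncation=True, max_length=MAX_LEN,
              padding=False, add_special_tokens=False)
    for t in texts
]
```
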
**Export**
- Saved with `save_compressed=True` to embed compressed-tensors metadata for vLLM.
- Minor post-save cleanup (e.g., remove `auto_map` from `config.json`) to avoid loader issues.

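A sketch of that export step, continuing from the quantization sketch above (same `model` / `tokenizer` objects; the output path is illustrative):

```python
import json
import os

SAVE_DIR = "GLM-4.7-Flash_AWQ-W8A16_GS32"   # illustrative output directory

# Write sharded safetensors plus the compressed-tensors metadata that vLLM reads.
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)

# Post-save cleanup: drop `auto_map` from config.json to avoid loader issues.
config_path = os.path.join(SAVE_DIR, "config.json")
with open(config_path) as f:
    config = json.load(f)
config.pop("auto_map", None)
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```
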
---

## Why **Group Size 32** (W8A16_GS32)

- **Group size** controls how many consecutive weights share one set of quantization scales.
- **GS32** (this branch) provides **finer-grained scaling** than GS64/128 → typically **better fidelity** (perplexity / task metrics) at a small cost in metadata and bandwidth.
- This is especially helpful for **MoE**, where experts can exhibit diverse activation statistics: smaller groups better preserve expert-specific nuances. The toy comparison below makes the effect concrete.

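A self-contained toy illustration (not the production kernel): symmetric INT8 fake-quantization with one scale per group, comparing reconstruction error at group size 32 versus 128.

```python
import torch

def fake_quant_int8_grouped(w: torch.Tensor, group_size: int) -> torch.Tensor:
    """Symmetric INT8 fake-quantization with one scale per `group_size` weights."""
    rows, cols = w.shape
    groups = w.reshape(rows, cols // group_size, group_size)
    scales = groups.abs().amax(dim=-1, keepdim=True).clamp_min(1e-8) / 127.0
    q = torch.clamp((groups / scales).round(), min=-127, max=127)
    return (q * scales).reshape(rows, cols)          # dequantized approximation

torch.manual_seed(0)
w = torch.randn(64, 256)                             # toy weight matrix
for gs in (32, 128):
    err = (w - fake_quant_int8_grouped(w, gs)).abs().mean().item()
    print(f"group_size={gs:<3d} mean |error| = {err:.6f}")   # smaller groups, lower error
```
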
---

## Quickstart — vLLM (nightly) + Transformers 5.0

**Environment requirements**
- vLLM **nightly** build
- Transformers **5.0**
- `trust_remote_code=True`

**Recommended runtime flags (GLM-4.7-Flash MoE path):**
- `--enable-expert-parallel` to distribute experts across devices
- `--tool-call-parser glm47` / `--reasoning-parser glm45` for GLM-style tool calling and reasoning output
- FlashInfer toggles as shown below (per script guidance)

**Example command (provided by author):**
```bash
export VLLM_USE_DEEP_GEMM=0
export VLLM_USE_FLASHINFER_MOE_FP16=1
export VLLM_USE_FLASHINFER_SAMPLER=0
export OMP_NUM_THREADS=4

CUDA_VISIBLE_DEVICES=4,5 vllm serve \
  /media/fmodels/TheHouseOfTheDude/GLM-4.7-Flash_AWQ/W8A16_GS32 \
  --served-model-name GLM-4.7-Flash_AWQ-W8A16_GS32 \
  --swap-space 4 \
  --max-model-len 80896 \
  --gpu-memory-utilization 0.9 \
  --tensor-parallel-size 2 \
  --enable-expert-parallel \
  --enable-auto-tool-choice \
  --tool-call-parser glm47 \
  --reasoning-parser glm45 \
  --trust-remote-code \
  --host 0.0.0.0 \
  --port 8000 \
  --api-key REDACTED
```
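
Once the server is running it exposes the OpenAI-compatible API. A minimal client sketch; host, port, API key, and model name mirror the command above:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="REDACTED")

response = client.chat.completions.create(
    model="GLM-4.7-Flash_AWQ-W8A16_GS32",   # matches --served-model-name above
    messages=[{"role": "user", "content": "Explain what W8A16 with group size 32 means."}],
    max_tokens=256,
    temperature=0.7,
)
print(response.choices[0].message.content)
```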