---
license: apache-2.0
base_model:
- unsloth/Qwen3-VL-8B-Instruct
tags:
- gguf
- text-generation
- quantized
- cpu
- gpu
- mxfp4
- mxfp4_moe
- qwen3
- mxfp4_hybrid
---

# DEPRECATED!

This model has been surpassed by the newer MagicQuant hybrids. The collection can be found here:

https://huggingface.co/collections/magiccodingman/magic-quant

Use the new version. The MXFP4 hybrid shown here is no longer viable in comparison, nor really usable. The data collected is still good for understanding and research, but not for real use.


# Unsloth - Qwen3 VL 8B Instruct MXFP4 Hybrid GGUF

**A dense model quantized with the MXFP4_MOE pipeline plus hybrid per-tensor overrides, achieving smaller file sizes, higher TPS, and near-lossless precision.**

## **Use one of the two magic models!**

Stats are compared against the standard Q8_0 (precision loss is still measured against F16):

* **MXFP4_MOE-Q6_K**

  5.2% smaller than Q8 • 264.49 TPS • 0.0992% precision loss

  _(TLDR: The best overall)_
  
---
  

* **MXFP4_MOE-output_q6_K-router_gate_emb_q6_K**
  
  10.1% smaller than Q8 • 247.84 TPS • 0.1078% precision loss 
  
  _(TLDR: The perfect balance)_
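
For context, the "% smaller than Q8" figures follow directly from the file sizes listed in the tables further down, e.g.:

```bash
# "% smaller than Q8_0" derived from the file sizes below (GB)
awk 'BEGIN { printf "%.1f%%\n", (8.11 - 7.69) / 8.11 * 100 }'  # MXFP4_MOE-Q6_K        -> 5.2%
awk 'BEGIN { printf "%.1f%%\n", (8.11 - 7.29) / 8.11 * 100 }'  # router_gate_emb_q6_K  -> 10.1%
```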


---

This repository contains a set of hybrid MXFP4 quantized GGUF models designed to explore a surprising discovery:

> A carefully targeted combination of MXFP4 + high-precision embeddings/output weights can deliver near-Q8 accuracy with Q4–Q6 level throughput and smaller file sizes than Q8.

Unlike pure MXFP4, which heavily degrades dense models, this hybrid method selectively protects the tensors that matter most for semantic stability while allowing MXFP4 to accelerate everything else.

> **This is experimental** and should be treated as such. I strongly encourage people to use these models and leave feedback! Though the measured precision loss seemed near lossless, did the hybrid models act strange in certain situations? Were they worse or better on some topics compared to the original model? Better or worse overall? I'd love to hear back from others!

---

# The Magic Models - Use One Of These Two Models!

Each of these models achieved:

> **Smaller file size than the Q8_0**
> 
> **Better precision-loss scores than the pure Q6_K**
> 
> **TPS comparable to or better than a Q4_K_M**

_I personally place these in the category of "Q7.5" quantization._

The following are the standout models from this run. Each model shown below is compared against the Q8_0 model.

#### MXFP4_MOE-Q6_K

> **(5.2% smaller than Q8 • 264.49 TPS • 0.0992% precision loss)**

Honestly, this one is hands down the best: highest TPS and lowest precision loss of the recommended hybrids. This is the one you want.

The following was the conversion script:
```bash
llama-quantize \
  --tensor-type token_embd.weight=Q6_K \
  --tensor-type output.weight=Q6_K \
  "Path_To_F16_GGUF.gguf" \
  "Path_To_GGUF.gguf" \
  mxfp4_moe
```
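
If you want to reproduce this yourself, the F16 input GGUF is produced from the original checkpoint with llama.cpp's converter first (a minimal sketch; paths here are placeholders):

```bash
# Convert the HF checkpoint to an F16 GGUF (run from a llama.cpp checkout)
python convert_hf_to_gguf.py \
  --outtype f16 \
  --outfile Path_To_F16_GGUF.gguf \
  path/to/Qwen3-VL-8B-Instruct
```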

#### MXFP4_MOE-output_q6_K-router_gate_emb_q6_K

> **(10.1% smaller than Q8 • 247.84 TPS • 0.1078% precision loss)**

Still a great version, but you really only want this if you truly can't spare the extra 400 MB to use the MXFP4_MOE-Q6_K instead.

The following was the conversion script:
```bash
llama-quantize \
  --tensor-type token_embd.weight=Q6_K \
  --tensor-type output.weight=Q6_K \
  --tensor-type 'router.*'=Q6_K \
  --tensor-type 'gate.*'=Q6_K \
  "Path_To_F16_GGUF.gguf" \
  "Path_To_GGUF.gguf" \
  mxfp4_moe
```
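
Once quantized (or downloaded), the result runs like any other GGUF; for example, with llama-server (the file name and flags here are illustrative):

```bash
# Serve the hybrid GGUF; -ngl 35 matches the offload used in the benchmarks below
llama-server -m MXFP4_MOE-output_q6_K-router_gate_emb_q6_K.gguf -ngl 35 -c 8192
```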

---

# MXFP4_MOE Hybrid Naming Scheme & Synopsis

Many different combinations of converted models were created, and the results were interesting to say the least. The following table explains the naming scheme and what was done to create each model.

| Suffix Example                      | Meaning                                |
| ----------------------------------- | -------------------------------------- |
| `MXFP4_MOE`                         | Pure MXFP4 pipeline                    |
| `MXFP4_MOE-Q8`                      | Embedding/output in Q8_0               |
| `MXFP4_MOE-F16`                     | Embedding/output in F16                |
| `output_mxfp4-embd_q8`              | Output → MXFP4, Embedding → Q8         |
| `output_mxfp4-router_gate_emb_q5_K` | Output → MXFP4, Emb/Router/Gate → Q5_K |
| `MXFP4_MOE-Q6_K`                    | Both embedding + output in Q6_K        |
| `Q8_0`, `Q6_K`, `Q4_K_M`            | Pure model-wide quantizations          |
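
As an illustration, a suffix like `output_mxfp4-embd_q8` maps to overrides along these lines (same pattern as the scripts above; this assumes your `llama-quantize` build accepts `MXFP4` as a per-tensor override type):

```bash
# Sketch: output head in MXFP4, token embeddings in Q8_0, MXFP4_MOE elsewhere
llama-quantize \
  --tensor-type output.weight=MXFP4 \
  --tensor-type token_embd.weight=Q8_0 \
  "Path_To_F16_GGUF.gguf" \
  "Path_To_GGUF.gguf" \
  mxfp4_moe
```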

Finding the winners was a brute-force game of mass-creating hybrid models until I found combinations that didn't introduce too much noise and paired well with MXFP4.

This repo showcases all of the converted models that were created, good or bad. I have been testing other models with different combinations as well. **The winning hybrid combinations shown in this repo DO NOT always produce the same results on other models.**

Some models do better or worse with different kinds of combinations; it depends on whether the model is dense or MoE, and much more. The results often surprise me. Some models will not play nice with MXFP4 no matter the combination, at least with the methods shown here.

---

## Benchmark Methodology

All models were tested with a unified automated harness using `llama.cpp` tools.

**Included tests:**

- **Throughput:**  
    `llama-bench` with descending GPU offload (`-ngl 35 → 0`) and automatic OOM retry.  
    Highest successful TPS is recorded.
    
- **Perplexity:**  
    Three domains: **general**, **code**, **math**.  
    Each uses an auto-generated corpus of ~**32k tokens**.  
    Perplexity is computed with `llama-perplexity` at **2048-token** context.  
    Same GPU retry logic as above.
    
- **Precision loss:**  
    Each model is compared to its **family F16 baseline**.  
    Precision-loss % is computed for each PPL domain, plus an averaged score (see the sketch after this list).  
    Models are ranked by this metric.
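
For reference, here are representative invocations and the loss formula (model paths and corpus files are illustrative; the actual harness automates the descending `-ngl` retry described above):

```bash
# Throughput: record the highest TPS that completes without OOM
llama-bench -m model.gguf -ngl 35

# Perplexity per domain at a 2048-token context (one ~32k-token corpus each)
llama-perplexity -m model.gguf -f general_corpus.txt -c 2048 -ngl 35

# Precision loss vs the family F16 baseline, as a percentage:
#   loss_pct = (PPL_quant - PPL_f16) / PPL_f16 * 100
# e.g. Q8_0 general: (7.4515 - 7.4343) / 7.4343 * 100 = 0.2314 (matches the tables)
awk 'BEGIN { printf "%.4f\n", (7.4515 - 7.4343) / 7.4343 * 100 }'
```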
    

---

### Table - Overview of Results

All numbers are relative to the F16 baseline: `size_reduction` is the percent reduction in file size, and `tps_change` is the percent increase in throughput.

| model_name | size_reduction | tps_change |
| ---------- | -------------- | ---------- |
| MXFP4_MOE-F16 | 36.24% | 11.15% |
| MXFP4_MOE-Q6_K | 49.61% | 68.13% |
| MXFP4_MOE-output_q6_K-router_gate_emb_q6_K | 52.23% | 57.55% |
| Q8_0 | 46.85% | 53.86% |
| MXFP4_MOE-Q8 | 46.85% | 42.92% |
| MXFP4_MOE-output_q8-embd_mxfp4 | 48.89% | 50.88% |
| Q6_K | 58.98% | 63.61% |
| MXFP4_MOE-Q5_K | 51.05% | 70.71% |
| Q5_K_M | 64.29% | 60.3% |
| MXFP4_MOE-Q4_K | 52.49% | 75.39% |
| Q4_K_M | 69.33% | 63.58% |
| MXFP4_MOE-output_mxfp4-embd_q5_K | 52.23% | 82.18% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q6_K | 54.39% | 90.76% |
| MXFP4_MOE-output_mxfp4-embd_q8 | 50.85% | 80.95% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q8 | 50.85% | 82.88% |
| MXFP4_MOE-output_mxfp4-embd_q4_K | 52.75% | 74.42% |
| MXFP4_MOE-output_mxfp4-embd_q6_K | 51.77% | 83.91% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q5_K | 56.42% | 79.35% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q4_K | 58.26% | 82.96% |
| MXFP4_MOE | 73.39% | 108.18% |

* All percentages compared against the selected family F16 baseline.

---

### Table - File Size + TPS + Avg Precision Loss

| model_name                                  | file_size_gb | bench_tps | avg_prec_loss |
| ------------------------------------------- | ------------ | --------- | ------------- |
| F16                                         | 15.26        | 157.31    | 0             |
| MXFP4_MOE-F16                               | 9.73         | 174.85    | 0.0876        |
| MXFP4_MOE-Q6_K                              | 7.69         | 264.49    | 0.0992        |
| MXFP4_MOE-output_q6_K-router_gate_emb_q6_K  | 7.29         | 247.84    | 0.1078        |
| Q8_0                                        | 8.11         | 242.04    | 0.1286        |
| MXFP4_MOE-Q8                                | 8.11         | 224.83    | 0.1299        |
| MXFP4_MOE-output_q8-embd_mxfp4              | 7.8          | 237.35    | 0.1764        |
| Q6_K                                        | 6.26         | 257.37    | 0.2061        |
| MXFP4_MOE-Q5_K                              | 7.47         | 268.54    | 0.4262        |
| Q5_K_M                                      | 5.45         | 252.17    | 0.966         |
| MXFP4_MOE-Q4_K                              | 7.25         | 275.9     | 1.2426        |
| Q4_K_M                                      | 4.68         | 257.33    | 1.2518        |
| MXFP4_MOE-output_mxfp4-embd_q5_K            | 7.29         | 286.59    | 6.1681        |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q6_K | 6.96         | 300.09    | 6.189         |
| MXFP4_MOE-output_mxfp4-embd_q8              | 7.5          | 284.65    | 6.1893        |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q8   | 7.5          | 287.69    | 6.1893        |
| MXFP4_MOE-output_mxfp4-embd_q4_K            | 7.21         | 274.38    | 6.2107        |
| MXFP4_MOE-output_mxfp4-embd_q6_K            | 7.36         | 289.31    | 6.2136        |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q5_K | 6.65         | 282.13    | 6.4579        |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q4_K | 6.37         | 287.82    | 6.5541        |
| MXFP4_MOE                                   | 4.06         | 327.49    | 12.3801       |

* Benchmarks ran with `-ngl 35`
* CUDA backend

---

### Table - PPL Columns

| model_name | gen | gen_er | code | code_er | math | math_er |
| ---------- | ---- | ------- | ----- | -------- | ------ | -------- |
| F16 | 7.4343 | 0.1566 | 1.4053 | 0.0087 | 5.8563 | 0.1081 |
| MXFP4_MOE-F16 | 7.4452 | 0.1569 | 1.4053 | 0.0087 | 5.8631 | 0.1083 |
| MXFP4_MOE-Q6_K | 7.4477 | 0.1567 | 1.4057 | 0.0087 | 5.8615 | 0.1082 |
| MXFP4_MOE-output_q6_K-router_gate_emb_q6_K | 7.446 | 0.1567 | 1.406 | 0.0087 | 5.8631 | 0.1082 |
| Q8_0 | 7.4515 | 0.157 | 1.4056 | 0.0087 | 5.8641 | 0.1083 |
| MXFP4_MOE-Q8 | 7.4515 | 0.157 | 1.4057 | 0.0087 | 5.8639 | 0.1082 |
| MXFP4_MOE-output_q8-embd_mxfp4 | 7.456 | 0.1571 | 1.4059 | 0.0087 | 5.8677 | 0.1083 |
| Q6_K | 7.452 | 0.1569 | 1.4087 | 0.0088 | 5.8644 | 0.1084 |
| MXFP4_MOE-Q5_K | 7.4899 | 0.1578 | 1.4058 | 0.0087 | 5.8853 | 0.1087 |
| Q5_K_M | 7.5473 | 0.1597 | 1.4125 | 0.0089 | 5.907 | 0.1099 |
| MXFP4_MOE-Q4_K | 7.5898 | 0.1606 | 1.4119 | 0.0089 | 5.9246 | 0.1096 |
| Q4_K_M | 7.5635 | 0.1584 | 1.4211 | 0.0089 | 5.9086 | 0.1086 |
| MXFP4_MOE-output_mxfp4-embd_q5_K | 7.9882 | 0.1678 | 1.4246 | 0.0089 | 6.4232 | 0.1197 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q6_K | 7.9946 | 0.1681 | 1.4248 | 0.0089 | 6.421 | 0.1197 |
| MXFP4_MOE-output_mxfp4-embd_q8 | 7.9925 | 0.168 | 1.4243 | 0.0089 | 6.4248 | 0.1198 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q8 | 7.9925 | 0.168 | 1.4243 | 0.0089 | 6.4248 | 0.1198 |
| MXFP4_MOE-output_mxfp4-embd_q4_K | 7.9963 | 0.1681 | 1.4241 | 0.0089 | 6.4264 | 0.1198 |
| MXFP4_MOE-output_mxfp4-embd_q6_K | 7.9964 | 0.168 | 1.4243 | 0.0089 | 6.426 | 0.1198 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q5_K | 8.0245 | 0.1689 | 1.426 | 0.0089 | 6.4397 | 0.1203 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q4_K | 8.0288 | 0.1689 | 1.429 | 0.0089 | 6.4407 | 0.1202 |
| MXFP4_MOE | 8.5631 | 0.1809 | 1.4779 | 0.0096 | 6.8396 | 0.1299 |

* gen = ppl_general
* gen_er = ppl_general_error
* code = ppl_code
* code_er = ppl_code_error
* math = ppl_math
* math_er = ppl_math_error

---

### Table - Precision Loss Columns

| model_name | loss_general | loss_code | loss_math |
| ---------- | ------------ | ---------- | ---------- |
| F16 | 0 | 0 | 0 |
| MXFP4_MOE-F16 | 0.1466 | 0 | 0.1161 |
| MXFP4_MOE-Q6_K | 0.1802 | 0.0285 | 0.0888 |
| MXFP4_MOE-output_q6_K-router_gate_emb_q6_K | 0.1574 | 0.0498 | 0.1161 |
| Q8_0 | 0.2314 | 0.0213 | 0.1332 |
| MXFP4_MOE-Q8 | 0.2314 | 0.0285 | 0.1298 |
| MXFP4_MOE-output_q8-embd_mxfp4 | 0.2919 | 0.0427 | 0.1947 |
| Q6_K | 0.2381 | 0.2419 | 0.1383 |
| MXFP4_MOE-Q5_K | 0.7479 | 0.0356 | 0.4952 |
| Q5_K_M | 1.52 | 0.5123 | 0.8657 |
| MXFP4_MOE-Q4_K | 2.0917 | 0.4697 | 1.1663 |
| Q4_K_M | 1.7379 | 1.1243 | 0.8931 |
| MXFP4_MOE-output_mxfp4-embd_q5_K | 7.4506 | 1.3734 | 9.6802 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q6_K | 7.5367 | 1.3876 | 9.6426 |
| MXFP4_MOE-output_mxfp4-embd_q8 | 7.5084 | 1.352 | 9.7075 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q8 | 7.5084 | 1.352 | 9.7075 |
| MXFP4_MOE-output_mxfp4-embd_q4_K | 7.5596 | 1.3378 | 9.7348 |
| MXFP4_MOE-output_mxfp4-embd_q6_K | 7.5609 | 1.352 | 9.728 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q5_K | 7.9389 | 1.473 | 9.9619 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q4_K | 7.9967 | 1.6865 | 9.979 |
| MXFP4_MOE | 15.1837 | 5.1662 | 16.7905 |

* loss_general = precision_loss_general_pct
* loss_code = precision_loss_code_pct
* loss_math = precision_loss_math_pct