Mamy Ratsimbazafy committed · Commit 4082131 · Parent(s): 269b198
Add README.md + measurement files
Browse files:
- README.md +161 -0
- glm-4.6-measurements-3vs4vs5.json +0 -0
- glm-4.6-measurements-3vs4vs5.md +0 -0
- glm-4.6-measurements-4vs5vs6.json +0 -0
- glm-4.6-measurements-4vs5vs6.md +0 -0
- glm-4.6-measurements-6vs8.json +0 -0
README.md
ADDED
---
license: mit
base_model: zai-org/GLM-4.6
base_model_relation: quantized
quantization: exl3
pipeline_tag: text-generation
tags:
- exl3
library_name: exllamav3
---

# GLM 4.6 (EXL3 Quants)

- Original Model:
  - [zai-org/GLM-4.6](https://huggingface.co/zai-org/GLM-4.6)

This repo contains:
- base quants (3, 4, 5, 6, 8 bits) for Exllamav3 (using SOTA random Hadamard transforms and Trellis quantization for high-quality reconstruction)
- layer- and tensor-level KL-divergence measurements for bit-allocation optimization given a target size
- theoretical research related to quantization, in particular MoE quantization

## Motivation

The goals are:
- to provide the best possible quants for what is arguably the top general model of 2025
- to serve as a reference for quantization strategies (as of 2025 knowledge)

The base model is 355B parameters, which when 4-bit quantized should take about 177GB, leaving almost 20GB for context, a perfect situation when you have 196GiB of VRAM (i.e. 8x 3090/4090, 6x 5090, 4x RTX A6000, 4x RTX 6000 Ada or 2x RTX Pro 6000 Blackwell). Unfortunately, all the 4-bit quants for my usual framework of choice, vLLM, start at 191~200GiB of VRAM.

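As a quick sanity check of that figure (a back-of-the-envelope estimate assuming roughly 355B weights at 4 bits per weight, ignoring unquantized tensors and metadata):

$$
355 \times 10^9 \;\text{params} \times \frac{4\;\text{bits}}{8\;\text{bits/byte}} \approx 177.5\;\text{GB} \approx 165\;\text{GiB}
$$
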
So while looking for a new backend that could leverage tensor parallelism, I landed on Exllamav3. Even better, it already had the proper tooling in place to fully quantize Mixture-of-Experts (MoE) models, unlike vllm/llmcompressor, which require extra code to ensure all experts are activated (otherwise their activations might be quantized away as unimportant if the calibration dataset is not comprehensive).

## Artifacts

### Base Quants

The base quants use the new "MCG" multiplier from https://github.com/turboderp-org/exllamav3/pull/26#issuecomment-3395345415

- Size measured through: https://github.com/turboderp-org/exllamav3/pull/103
- Kullback-Leibler divergence (KL-div) and Top-K agreement measured through: https://github.com/turboderp-org/exllamav3/blob/v0.0.14/eval/model_diff.py
- Perplexity measured through: https://github.com/turboderp-org/exllamav3/blob/v0.0.14/eval/model_diff.py
- Caveat: in EXL3, both quantization calibration and perplexity use the same dataset, so the perplexity numbers are overfit.\
  The most appropriate quality measure is KL-divergence, i.e. how well the quant reproduces the original model's output token probability distribution, before samplers (a minimal sketch of this computation is shown after the table below).\
  For example, the 3-bit quant has lower perplexity than the original FP16.

| Quant | Size | KL-div (quant, FP16) | KL-div (FP16, quant) | Perplexity | Top-1 | Top-2 | Top-3 | Top-4 | Top-5 |
| ---------------------------------------------------------------- | ------- | -------------------- | -------------------- | ---------- | ------ | ------ | ------ | ------ | ------ |
| [3bpw](https://huggingface.co/mratsim/glm-4.6-exl3/tree/3bpw_H6) | 124 GiB | 0.32625636 | 0.30842110 | 4.36145115 | 0.8409 | 0.5497 | 0.3022 | 0.1527 | 0.0695 |
| [4bpw](https://huggingface.co/mratsim/glm-4.6-exl3/tree/4bpw_H6) | 165 GiB | 0.15579397 | 0.15313307 | 4.64835933 | 0.8969 | 0.6892 | 0.4609 | 0.2840 | 0.1611 |
| [5bpw](https://huggingface.co/mratsim/glm-4.6-exl3/tree/5bpw_H6) | 206 GiB | 0.11346048 | 0.10777174 | 4.46847223 | 0.9172 | 0.7553 | 0.5610 | 0.3868 | 0.2486 |
| [6bpw](https://huggingface.co/mratsim/glm-4.6-exl3/tree/6bpw_H6) | 247 GiB | 0.08243355 | 0.07828716 | 4.46603787 | 0.9336 | 0.7970 | 0.6218 | 0.4600 | 0.3226 |
| [8bpw](https://huggingface.co/mratsim/glm-4.6-exl3/tree/8bpw_H8) | 328 GiB | 0.06771311 | 0.06660905 | 4.61223994 | 0.9441 | 0.8221 | 0.6663 | 0.5155 | 0.3780 |
| FP16 | 656 GiB | | | 4.62864232 | | | | | |

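For readers who want to reproduce the KL-div and Top-K columns conceptually, here is a minimal sketch of how such numbers can be computed from the logits of the quantized and FP16 models on the same token stream. This is an illustration only, not the code of `eval/model_diff.py`; the function names are made up for this example.

```python
# Minimal sketch (illustration, not eval/model_diff.py): KL divergence and
# exact Top-K agreement between a quantized model and the FP16 reference,
# computed from raw logits for the same token positions.
import torch
import torch.nn.functional as F

def kl_divergence(logits_q: torch.Tensor, logits_ref: torch.Tensor) -> float:
    """Mean KL(quant || reference) over [num_tokens, vocab_size] logits."""
    logp_q = F.log_softmax(logits_q.float(), dim=-1)
    logp_ref = F.log_softmax(logits_ref.float(), dim=-1)
    # KL(q || ref) = sum_x q(x) * (log q(x) - log ref(x))
    kl = (logp_q.exp() * (logp_q - logp_ref)).sum(dim=-1)
    return kl.mean().item()

def topk_agreement(logits_q: torch.Tensor, logits_ref: torch.Tensor, k: int) -> float:
    """Fraction of positions where the two models' top-k token sets match exactly."""
    top_q = logits_q.topk(k, dim=-1).indices.sort(dim=-1).values
    top_ref = logits_ref.topk(k, dim=-1).indices.sort(dim=-1).values
    return (top_q == top_ref).all(dim=-1).float().mean().item()
```
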
### Optimized Quants

Coming soon.

- "opt🂡" for automatically optimized quants
- "tuned🂱" for hand-tuned quants

### Detailed measurements of KL-div improvements

Exllamav3 offers tools to measure per-layer (with `-l2`) or even per-tensor (with `-l3`) contributions to KL-div improvements.
These measurements can take 2 to 5 hours when comparing 2 quants, around 12 hours when comparing 3 quants, and over a day of compute when comparing all quants.

Currently available are:
- 3vs4vs5 `-l3`: [json](glm-4.6-measurements-3vs4vs5.json), [markdown](glm-4.6-measurements-3vs4vs5.md)
- 4vs5vs6 `-l3`: [json](glm-4.6-measurements-4vs5vs6.json), [markdown](glm-4.6-measurements-4vs5vs6.md)
- 6vs8 `-l3`: [json](glm-4.6-measurements-6vs8.json)

The JSON files can be fed to https://github.com/turboderp-org/exllamav3/blob/v0.0.14/util/optimize.py with a target `bpw` to output an optimized quant.

Please note that, from experimentation, manual tuning using the heuristics below can achieve better KL-divergence than optimizing over a mix of only 3 quants, and it is less likely to overfit the calibration set. Having `shared experts` or `self_attn` layers use 6 or even 8 bits provides a very large improvement in KL-divergence. An optimization over all available quants might beat manual tuning, though overfitting then becomes a bigger risk.

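To make the idea concrete, here is a hypothetical sketch of greedy bit allocation under a size budget, in the spirit of `util/optimize.py` but not its actual code or input format: given per-tensor measurements at several bit-widths, keep upgrading whichever tensor buys the largest KL-div reduction per extra GiB until the budget is spent.

```python
# Hypothetical illustration of budgeted bit allocation (not util/optimize.py).
# For each tensor we assume a list of candidate (bits, size_gib, kl_div) options.
from dataclasses import dataclass

@dataclass
class Option:
    bits: int
    size_gib: float
    kl_div: float   # measured KL-div contribution when this tensor uses `bits`

def allocate(measurements: dict[str, list[Option]], budget_gib: float) -> dict[str, int]:
    """Start every tensor at its smallest option, then greedily upgrade the
    tensor with the best KL-div reduction per GiB until the budget is reached."""
    for opts in measurements.values():
        opts.sort(key=lambda o: o.size_gib)
    choice = {name: 0 for name in measurements}
    used = sum(opts[0].size_gib for opts in measurements.values())

    while True:
        best_name, best_gain = None, 0.0
        for name, opts in measurements.items():
            i = choice[name]
            if i + 1 >= len(opts):
                continue
            extra = opts[i + 1].size_gib - opts[i].size_gib
            if extra <= 0 or used + extra > budget_gib:
                continue
            gain = (opts[i].kl_div - opts[i + 1].kl_div) / extra
            if gain > best_gain:
                best_name, best_gain = name, gain
        if best_name is None:
            break
        i = choice[best_name]
        used += measurements[best_name][i + 1].size_gib - measurements[best_name][i].size_gib
        choice[best_name] += 1

    return {name: measurements[name][i].bits for name, i in choice.items()}
```

The real optimizer works from the measurement JSONs above and knows EXL3's actual per-tensor storage costs; this sketch only shows the shape of the trade-off.
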
## Quantization theory and heuristics for manual tuning

### Layers to quantize

Quantization should be focused on Linear layers (also called Dense or Fully-Connected layers, i.e. MatMul+Bias).
In particular, quantizing LayerNorm/RMSNorm layers is strongly discouraged, see [1]:
> LayerNorm in Quantization. Kovaleva et al. (2021); Wei et al. (2022) find that outliers in the
> LayerNorm parameters of BERT (Devlin et al., 2019) cause difficulties in model compression.
> Given the importance of LayerNorm, all the quantization methods we discuss above leave LayerNorm unquantized.

This is also reported in Intel and Nvidia repositories:
- https://github.com/intel/neural-compressor/issues/1963#issuecomment-2274873441
- https://github.com/NVIDIA/TensorRT/issues/4084#issuecomment-2294513950

EXL3 only quantizes linear layers.

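For illustration, a minimal sketch (plain PyTorch module introspection, not exllamav3 internals) of splitting a model into modules a linear-only quantizer would touch versus modules left in their original precision:

```python
# Minimal sketch, not exllamav3 code: separate nn.Linear modules (quantization
# candidates) from everything else (norms, embeddings, ...), which stays as-is.
import torch.nn as nn

def split_quantizable(model: nn.Module) -> tuple[list[str], list[str]]:
    quantize, keep = [], []
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            quantize.append(name)   # weight matrices: candidates for low-bit storage
        elif len(list(module.children())) == 0:
            keep.append(name)       # leaf modules such as LayerNorm/RMSNorm, embeddings
    return quantize, keep
```
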
### Tensors to up-quantize

If there are enough bits, down projections should be prioritized.

According to [4]:
> Fig. 3: Maximum absolute value over layers for a LLaMA3-8B.
> Each color represent a different projection and we clearly see that down_proj has the biggest
> spikes in input and output. We also observe that RMSNorm propagate spikes through the entire model

According to [5]:
> Figure 5(a) illustrates the extremal ratio across layers and modules in LLaMA2-7B, highlighting
> that weight outliers are concentrated in the down-projection matrices W^down_ℓ of the second layer and
> the last two layers. Figures 5(b) and 5(c) provide detailed visualizations of these outliers in the last
> two layers.

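A hypothetical sketch of how one could reproduce this kind of observation on a checkpoint shard: scan the 2-D weight tensors and score each projection type by how spiky its values are (maximum over typical magnitude), which tends to single out `down_proj`.

```python
# Illustrative outlier scan (hypothetical, not from the cited papers): rank
# projection types by a simple spike score = max(|w|) / median(|w|).
from collections import defaultdict
import torch
from safetensors.torch import load_file

def outlier_report(shard_path: str) -> dict[str, float]:
    scores: dict[str, float] = defaultdict(float)
    for name, w in load_file(shard_path).items():
        if w.ndim != 2:
            continue                       # only linear-layer weight matrices
        w = w.float().abs()
        score = (w.max() / w.median().clamp_min(1e-8)).item()
        kind = name.split(".")[-2]         # e.g. "down_proj", "q_proj"
        scores[kind] = max(scores[kind], score)
    return dict(scores)
```
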
### Mixture-of-Experts (MoE) quantization

Mixture-of-Experts models require specific quantization techniques.

#### Mixed-precision quantization

Some layers have a higher impact on LLM performance than others.
According to [2], spending more bits on attention layers results in a large gain compared to spending them on FFN layers.
According to [3], on 2-bit quantization:
- quantizing expert FFN layers does not seriously impact model quality
- quantizing cross-attention has some impact
- quantizing self-attention has a large impact
- quantizing dense FFN has a very significant impact

Hence, to preserve model quality, we should choose not to quantize dense FFN layers and self-attention layers (a sketch of applying these heuristics is shown after the examples below).

We notice that:
- official MXFP4 weights of gpt-oss-120b from OpenAI keep self-attention in BF16:
  - https://huggingface.co/openai/gpt-oss-120b/blob/main/model.safetensors.index.json
- NVFP4 weights of DeepSeek-R1 quantized by Nvidia also keep self-attention in BF16:
  - https://huggingface.co/nvidia/DeepSeek-R1-0528-FP4/blob/main/model.safetensors.index.json

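Below is a hypothetical sketch of applying these heuristics programmatically (the name patterns and the output mapping are illustrative only; this is not exllamav3's recipe format):

```python
# Hypothetical heuristic bit assignment, following [2]/[3]: give self-attention,
# dense FFN blocks and shared experts more bits than routed expert FFNs.
def assign_bits(module_names: list[str], base_bits: int = 4) -> dict[str, int]:
    bits = {}
    for name in module_names:
        if "self_attn" in name or "shared_experts" in name:
            bits[name] = base_bits + 2          # high impact: e.g. 6 bpw
        elif ".mlp." in name and ".experts." not in name:
            bits[name] = base_bits + 2          # dense (non-expert) FFN blocks
        else:
            bits[name] = base_bits              # routed expert FFNs tolerate fewer bits
    return bits
```
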
#### Layers with high impact

According to [2], giving more bits to the first `k` blocks has a significantly higher impact on model quality than giving them to the last `k` blocks.

#### Expert quantization

When quantizing MoE models, quantizing activations is tricky as only a subset of experts is activated per request.

EXL3 has the tooling in place to ensure all experts are activated during quantization, though it is unclear whether the calibration dataset should be expanded to be diverse enough that every expert has a high likelihood of seeing the full range of values it can exhibit, to avoid clipping.

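As a concrete, hypothetical example of such a coverage check, one could count how often each routed expert is selected while streaming the calibration set through the model. The `output_router_logits` flag and output field names below are assumptions modeled on common Hugging Face MoE implementations, not a guaranteed GLM-4.6 API.

```python
# Hypothetical expert-coverage check: count how often each expert is routed to
# across the calibration set, to spot experts that are barely exercised.
import torch

@torch.no_grad()
def expert_coverage(model, calib_batches, num_experts: int, top_k: int) -> torch.Tensor:
    counts = torch.zeros(num_experts, dtype=torch.long)
    for input_ids in calib_batches:
        out = model(input_ids, output_router_logits=True)   # assumed flag
        for router_logits in out.router_logits:             # one tensor per MoE layer
            topk = router_logits.topk(top_k, dim=-1).indices
            counts += torch.bincount(topk.reshape(-1), minlength=num_experts)
    return counts   # low counts: expert may be poorly represented in calibration
```
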
## References

1. Why Do Some Inputs Break Low-Bit LLM Quantization? (2025)\
   Ting-Yun Chang, Muru Zhang, Jesse Thomason, Robin Jia\
   https://arxiv.org/pdf/2506.12044

2. Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark (2024)\
   Pingzhi Li, Xiaolong Jin, Yu Cheng, Tianlong Chen\
   https://arxiv.org/pdf/2406.08155v1

3. Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness (2023)\
   Young Jin Kim, Raffy Fahim, Hany Hassan Awadalla\
   https://arxiv.org/pdf/2310.02410

4. Precision Where It Matters: A Novel Spike Aware Mixed-Precision Quantization Strategy for LLaMA-based Language Models (2025)\
   Lucas Maisonnave, Cyril Moineau, Olivier Bichler, and Fabrice Rastello\
   https://arxiv.org/pdf/2504.21553

5. Systematic Outliers in Large Language Models (2025)\
   Yongqi An, Xu Zhao, Tao Yu, Ming Tang, Jinqiao Wang\
   https://arxiv.org/pdf/2502.06415v2
glm-4.6-measurements-3vs4vs5.json
ADDED (diff too large to render; see raw diff)

glm-4.6-measurements-3vs4vs5.md
ADDED (diff too large to render; see raw diff)

glm-4.6-measurements-4vs5vs6.json
ADDED (diff too large to render; see raw diff)

glm-4.6-measurements-4vs5vs6.md
ADDED (diff too large to render; see raw diff)

glm-4.6-measurements-6vs8.json
ADDED (diff too large to render; see raw diff)