Update README.md
README.md CHANGED
@@ -31,7 +31,7 @@ This optimization reduces the number of bits per parameter from 16 to 8, reducin
Only the weights of the linear operators within transformer blocks are quantized. Symmetric per-channel quantization is applied, in which a linear scale per output dimension maps between the INT8 and floating-point representations of the quantized weights.
The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library.
-GPTQ used a 1% damping factor and 256 sequences
+GPTQ used a 1% damping factor and 256 sequences taken from Neural Magic's [LLM compression calibration dataset](https://huggingface.co/datasets/neuralmagic/LLM_compression_calibration).
## Deployment
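
Stepping back from the diff itself: to make the symmetric per-channel scheme described in the changed section concrete, here is a minimal sketch of the mapping it refers to, in plain PyTorch. One scale per output channel (row) of a Linear weight matrix, no zero point. All names are illustrative; nothing here is taken from the repository.

```python
import torch

def quantize_per_channel_symmetric(w: torch.Tensor):
    """Symmetric per-channel INT8 quantization of a Linear weight matrix.

    w has shape (out_features, in_features); one scale per output dimension,
    and no zero point (symmetric around zero). Illustrative sketch only.
    """
    # The largest magnitude in each output channel sets that channel's scale.
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # The linear scale per output dimension maps INT8 back to floating point.
    return q.to(torch.float32) * scale

w = torch.randn(4096, 11008)  # e.g. an MLP projection weight
q, scale = quantize_per_channel_symmetric(w)
w_hat = dequantize(q, scale)
print((w - w_hat).abs().max().item())  # small per-channel round-off error
```

Per-channel scales let each output dimension use the full INT8 range, which matters when weight magnitudes vary widely across channels.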
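Likewise, here is a minimal sketch, not this repository's actual recipe, of how the GPTQ quantization described in the diff might be reproduced with llm-compressor. Known from the README text: GPTQ, the 1% damping factor, the 256 calibration sequences, and the linked calibration dataset. Assumptions: the model ID, the weight-only `W8A16` scheme, the `lm_head` exclusion, the sequence length, and the dataset preprocessing.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

MODEL_ID = "my-org/my-model"   # placeholder for the model this card describes
NUM_CALIBRATION_SAMPLES = 256  # the "256 sequences" from the README
MAX_SEQUENCE_LENGTH = 2048     # assumption; not stated in the diff

# 256 calibration sequences from the dataset linked in the README. Depending
# on the dataset schema, a map() to a single "text" column may be needed here.
ds = load_dataset("neuralmagic/LLM_compression_calibration", split="train")
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))

# GPTQ over the Linear layers inside the transformer blocks, weight-only INT8;
# dampening_frac=0.01 encodes the 1% damping factor.
recipe = GPTQModifier(
    targets="Linear",
    scheme="W8A16",
    ignore=["lm_head"],  # assumption: leave the output head unquantized
    dampening_frac=0.01,
)

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

model.save_pretrained("model-quantized.w8a16", save_compressed=True)
```

The resulting compressed checkpoint is the kind of artifact the README's Deployment section, whose heading appears as context above, goes on to describe serving.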