7B AWQ Collection
These models are selected for their compatibility with small GPUs that have 12 GB of memory.
Darcy-7b is a merge of the following models using LazyMergekit.
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
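As a rough illustration of how such a 4-bit AWQ quantization is produced, the sketch below uses the AutoAWQ library; the source model, output path, and quantization settings are assumptions chosen for the example, not values taken from this card.

```python
# Sketch: producing a 4-bit AWQ quantization of a 7B model with AutoAWQ.
# Assumes the autoawq and transformers packages are installed and a CUDA GPU is available.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mistral-7B-v0.1"   # example source model (assumption)
quant_path = "Mistral-7B-v0.1-AWQ"         # output directory for the quantized weights
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the full-precision model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run activation-aware quantization down to 4-bit weights
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantized model and tokenizer
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```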
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- Text Generation Webui (using the AutoAWQ loader)
- vLLM
- Hugging Face Text Generation Inference (TGI)
- Transformers (version 4.35.0 and later)
- AutoAWQ, for use from Python code
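For example, an AWQ model from this collection can be loaded directly with Transformers. The sketch below assumes transformers >= 4.35.0 and the autoawq package are installed, an NVIDIA GPU is available, and uses a placeholder repository id that should be replaced with the actual AWQ repo or a local path.

```python
# Sketch: running inference on an AWQ-quantized model with Transformers.
# Assumes transformers>=4.35.0, autoawq, and an NVIDIA GPU (a 12 GB card is enough for a 7B AWQ model).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/Darcy-7b-AWQ"  # placeholder repo id; replace with the real one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda:0",       # AWQ kernels run on NVIDIA GPUs only
    low_cpu_mem_usage=True,
)

prompt = "Explain activation-aware weight quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```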