Alcoft/DeepSeek-R1-Distill-Qwen-1.5B-GGUF
Text Generation · GGUF · conversational
License: MIT
README.md exists but content is empty.
Downloads last month: 208
Model size: 2B params
Architecture: qwen2
Available quantizations:

2-bit:  Q2_K    753 MB
3-bit:  Q3_K_S  861 MB · Q3_K_M  924 MB · Q3_K_L  980 MB
4-bit:  Q4_K_S  1.07 GB · Q4_K_M  1.12 GB
5-bit:  Q5_K_S  1.26 GB · Q5_K_M  1.29 GB
6-bit:  Q6_K    1.46 GB
8-bit:  Q8_0    1.89 GB
16-bit: BF16    3.56 GB · F16     3.56 GB
Inference Providers (Text Generation): this model isn't deployed by any Inference Provider.
Model tree for Alcoft/DeepSeek-R1-Distill-Qwen-1.5B-GGUF:
Base model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
Quantized versions of the base model: 236, including this model
Included in collection: TAO71-AI Quants: Other (8 items · updated Jul 26)