Volko (Volko76)
AI & ML interests
Quantization, Fine-tuning, Agentic Frameworks
Recent Activity
liked a model 1 day ago: Volko76/DeepSeek-V4-Flash-GGUF
updated a model 1 day ago: Volko76/DeepSeek-V4-Flash-GGUF
published a model 1 day ago: Volko76/DeepSeek-V4-Flash-GGUF
Organizations
I would love to see the IQ2_XXS quant
👍 1
1
#1 opened 2 days ago by Volko76
128GB seems suspiciously low
1
#1 opened 8 days ago by Volko76
Thanks a lot for releasing it under the open-source MIT license!
❤️ 4
#4 opened 9 days ago by Volko76
Interested
#1 opened about 1 month ago by Volko76
Issues when loading the model with Ollama
#1 opened 11 months ago by Volko76
Thanks a lot
#7 opened 11 months ago by Volko76
Thaaaaaanks !!!!
🤗 1
#1 opened about 1 year ago by Volko76
Thanks a lot for this release
🔥 3
#19 opened about 1 year ago by Volko76
Improve language tag
#1 opened about 1 year ago by lbourdois
Improve language tag
#1 opened about 1 year ago by lbourdois
Improve language tag
#1 opened about 1 year ago by lbourdois
Improve language tag
#1 opened about 1 year ago by lbourdois
Improve language tag
#1 opened about 1 year ago by lbourdois
Improve language tag
#1 opened about 1 year ago by lbourdois
Improve language tag
#1 opened about 1 year ago by lbourdois
Improve language tag
1
#1 opened about 1 year ago by lbourdois
A milestone
👍 2
1
#4 opened over 1 year ago by jiangxg
Performance
3
#2 opened over 1 year ago by pierrealex
Does not work, and the model size is wrong too: 1.5B × 5.5 BPW should be ~1.6 GB
4
#1 opened over 1 year ago by imoc
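The size claim in that thread rests on simple arithmetic: a quantized model file is at least parameters × bits-per-weight, converted to bytes. A minimal sketch of that back-of-the-envelope check, assuming no metadata overhead and no tensors kept at higher precision (real GGUF files are somewhat larger than this floor):

```python
def quantized_size_gb(params: float, bpw: float) -> float:
    """Rough lower bound for a quantized model file:
    parameters * bits-per-weight, converted to gigabytes.
    Ignores metadata and any tensors stored at higher precision."""
    return params * bpw / 8 / 1e9  # bits -> bytes -> GB

# 1.5B parameters at 5.5 bits per weight
print(round(quantized_size_gb(1.5e9, 5.5), 2))  # → 1.03
```

The bare formula gives about 1.03 GB; unquantized embedding/output tensors and file metadata push real quants above that figure.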
I will soon create a quantized version of Lucie for people who don't know how to do it
5
#1 opened over 1 year ago by Volko76