- #6: Define "do_sample" explicitly in generation_config.json (opened 8 days ago by Corellios; see the sketch below)
- #5: Update config.json (opened 8 days ago by Corellios)
- #4: Update inference examples to use the correct chat template (opened 9 days ago by mario-sanz)
- #2: Endless reasoning loop when serving the model with vLLM (3 comments, opened 12 days ago by sliuau)
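For context on #6, a minimal sketch of what declaring do_sample explicitly in generation_config.json might look like; the specific values and companion keys here are illustrative assumptions, not the actual contents of that PR:

```json
{
  "do_sample": true,
  "temperature": 0.7,
  "top_p": 0.9
}
```

In transformers, do_sample defaults to false (greedy decoding), and sampling parameters such as temperature and top_p are ignored with a warning when it is left unset, which is presumably why the PR makes the setting explicit.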