Recent Activity

sergiopaniego posted an update about 20 hours ago
ICYMI, you can fine-tune open LLMs using Claude Code

just tell it:
“Fine-tune Qwen3-0.6B on open-r1/codeforces-cots”

and Claude submits a real training job on HF GPUs using TRL.

it handles everything:
> dataset validation
> GPU selection
> training + Trackio monitoring
> job submission + cost estimation

when it’s done, your model is on the Hub, ready to use

read more about the process: https://huggingface.co/blog/hf-skills-training
sergiopaniego posted an update 1 day ago
We just released TRL v0.26.0!

It comes packed with updates:
> Agent training with tools in GRPO
> New CISPO & SAPO losses + reasoning rewards
> vLLM quantization in colocate mode
> Dataset shuffling in SFT
> Lots of NEW examples
> Tons of fixes and documentation improvements
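Reward signals like the new reasoning rewards plug into GRPO as plain callables that score a batch of completions. As a rough sketch of that interface, assuming plain-text completions (the function name, regex, and scores below are illustrative, not TRL's built-in rewards):

```python
import re

# Hypothetical reasoning-format reward in the shape TRL's GRPOTrainer
# expects: a callable that takes a batch of completions and returns
# one float per completion.
THINK_BLOCK = re.compile(r"<think>.+?</think>", re.DOTALL)

def reasoning_format_reward(completions, **kwargs):
    """Score 1.0 for completions that wrap their reasoning in
    <think>...</think> tags, 0.0 otherwise."""
    return [1.0 if THINK_BLOCK.search(c) else 0.0 for c in completions]
```

A function like this would be passed to the trainer via `reward_funcs=[reasoning_format_reward]`; TRL's actual built-in rewards may score things differently.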

sergiopaniego posted an update 6 days ago
Want to get started with fine-tuning but don’t know where to begin? 🤓☝️

We’re expanding our collection of beginner-friendly free Colab notebooks so you can learn and fine-tune models using TRL at no cost

🔬 Check out the full list of free notebooks: https://huggingface.co/docs/trl/main/en/example_overview#notebooks

🔬 If you want more advanced content, we also have a lot to cover in the community tutorials: https://huggingface.co/docs/trl/community_tutorials

And now the obvious question: what would you like us to add next?
sergiopaniego posted an update 8 days ago
NEW: @mistralai released a fantastic family of multimodal models, Ministral 3.

You can fine-tune them for free on Colab using TRL ⚡️, supporting both SFT and GRPO

Link to the notebooks:
- SFT: https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/sft_ministral3_vl.ipynb
- GRPO: https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/grpo_ministral3_vl.ipynb
- TRL and more examples: https://huggingface.co/docs/trl/index
sergiopaniego posted an update 10 days ago
want to use open models easily through an API?

Inference Providers might be exactly what you’re looking for sooo here’s a complete beginner-friendly walkthrough 🧐

https://www.youtube.com/watch?v=oxwsizy1Spw
sergiopaniego posted an update 14 days ago
nanochat is now in transformers!

The LLM by @karpathy is officially in the library, and we wrote a blog covering how we ported the model, what differs from the original, and how to run or train it.

go read it 🤓

nanochat-students/transformers
view post
Post
2581
we've just added several example scripts to TRL showing how to train models with GRPO using some of the new OpenEnv environments

train a model to interact with a browser (🎮 BrowserGym Env), play Wordle (🎮 Wordle Env) and moooore!

TRL (GRPO + vLLM) + OpenEnv! ⚡️

📝 go play with them: https://github.com/huggingface/trl/tree/main/examples/scripts/openenv

📝 examples list: https://huggingface.co/docs/trl/main/en/example_overview#scripts
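To make the Wordle case concrete, here is a toy, self-contained sketch of the kind of scalar reward a Wordle-style environment can hand back to GRPO. The feedback logic follows standard Wordle rules, but the function names and reward weights are made up for illustration; this is not the OpenEnv Wordle API.

```python
def wordle_feedback(guess: str, target: str) -> list[str]:
    """Return per-letter feedback: 'green' (right letter, right spot),
    'yellow' (in word, wrong spot), or 'gray' (absent)."""
    feedback = ["gray"] * len(guess)
    remaining = list(target)
    # First pass: exact matches consume their target letter.
    for i, (g, t) in enumerate(zip(guess, target)):
        if g == t:
            feedback[i] = "green"
            remaining.remove(g)
    # Second pass: misplaced letters, respecting letter counts.
    for i, g in enumerate(guess):
        if feedback[i] == "gray" and g in remaining:
            feedback[i] = "yellow"
            remaining.remove(g)
    return feedback

def wordle_reward(guess: str, target: str) -> float:
    """Collapse feedback into one scalar for the policy update:
    1.0 for a solve, otherwise a weighted mix of greens and yellows
    (weights here are arbitrary illustration values)."""
    fb = wordle_feedback(guess, target)
    if all(f == "green" for f in fb):
        return 1.0
    return 0.15 * fb.count("green") + 0.05 * fb.count("yellow")
```

In the actual scripts, the environment produces this kind of feedback server-side and GRPO only sees the scalar, which is what lets the same training loop drive Wordle, BrowserGym, and the other environments.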
lunarflu posted an update about 1 month ago
💸🤑 You don’t need 100 GPUs to train something amazing!

Our Smol Training Playbook teaches you a better path to world-class LLMs, for free!

Check out the #1 trending space on 🤗:
HuggingFaceTB/smol-training-playbook
AdinaY posted an update about 1 month ago
Kimi K2 Thinking is now live on the hub 🔥

moonshotai/Kimi-K2-Thinking

✨ 1T MoE for deep reasoning & tool use
✨ Native INT4 quantization = 2× faster inference
✨ 256K context window
✨ Modified MIT license