Massimo Roberto Scamarcia
PRO
mrs83
62 followers · 78 following
mrs83
massimoscamarcia
ethicalabs.bsky.social
AI & ML interests
Natural Language Processing, Text Generation, Question Answering, Data Augmentation, Knowledge Transfer, Chain-of-Thought, ResearchOps, MLOps
Recent Activity
updated a model about 16 hours ago: ethicalabs/Echo-DSRN-114M-v0.1.2-Base
updated a model about 17 hours ago: ethicalabs/Echo-DSRN-114M-v0.1.2
reacted to qgallouedec's post 1 day ago
TRL v1.3 ships day-one training support for Qwen 3.6

The new Qwen 3.6 family (`Qwen/Qwen3.6-27B`, `Qwen/Qwen3.6-35B-A3B`) reuses the Qwen3.5-MoE architecture but ships a slightly different chat template, so we updated the stack end-to-end: a new training template with `{% generation %}` markers, tool-call response schema routing, and tiny test models for the VLM matrix.

SFT with assistant-only loss works out of the box:

```python
from trl import SFTConfig, SFTTrainer

trainer = SFTTrainer(
    model="Qwen/Qwen3.6-27B",
    args=SFTConfig(assistant_only_loss=True),
    train_dataset=dataset,
)
trainer.train()
```

So does GRPO tool-calling: just hand `tools=[...]` to `GRPOTrainer`.

v1.3 also brings a new experimental TPO trainer (Triple Preference Optimization), speculative decoding in `trl vllm-serve` (Qwen3 MTP / Eagle3 drafts), 12 more KTO→DPO alignment PRs (KTO promotion to stable is now in reach), three more `{% generation %}` chat templates (Gemma/Gemma 2, Phi-3, GLM-4-MoE), and a chunky SFT entropy bug fix.

Full release notes: https://github.com/huggingface/trl/releases/tag/v1.3.0
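To make the post's `assistant_only_loss=True` flag concrete, here is a minimal, self-contained sketch of what assistant-only loss masking means: only tokens inside assistant turns contribute to the loss, while user and system tokens are masked out. The `Turn` dataclass and `build_loss_mask` helper are illustrative toys, not TRL internals.

```python
# Toy illustration of assistant-only loss masking: tokens from assistant
# turns get mask value 1 (contribute to the loss); all other tokens get 0.
# Names here (Turn, build_loss_mask) are hypothetical, not TRL's API.
from dataclasses import dataclass


@dataclass
class Turn:
    role: str              # "user" or "assistant"
    token_ids: list        # token ids belonging to this turn


def build_loss_mask(turns):
    """Return a flat 0/1 mask aligned with the concatenated token ids."""
    mask = []
    for turn in turns:
        keep = 1 if turn.role == "assistant" else 0
        mask.extend([keep] * len(turn.token_ids))
    return mask


conversation = [
    Turn("user", [101, 102, 103]),
    Turn("assistant", [201, 202]),
    Turn("user", [104]),
    Turn("assistant", [203, 204, 205]),
]

mask = build_loss_mask(conversation)
# user tokens are masked (0), assistant tokens kept (1):
# [0, 0, 0, 1, 1, 0, 1, 1, 1]
```

In real training the `{% generation %}` markers in the chat template play the role of the `role == "assistant"` check here: they delimit which rendered tokens belong to the assistant so the trainer can derive this mask automatically.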
Organizations
mrs83's Spaces (3)
- ml-intern sandbox (Sleeping)
- Huggingface Static 4d2f8c (Running): Explore data with the interactive Trackio dashboard
- Echo-DSRN 114M Telemetry 3D (Running): A 3D dashboard for the Echo-DSRN architecture