TenAI
PRO
honey90
13 followers · 42 following
AI & ML interests
None yet
Recent Activity
upvoted a paper about 7 hours ago:
Darwin Family: MRI-Trust-Weighted Evolutionary Merging for Training-Free Scaling of Language-Model Reasoning
upvoted an article about 10 hours ago:
Training-Free Reasoning at 88.89% on GPQA Diamond: How Darwin Family Hit Frontier Scores Without a Single Gradient Step
reacted to SeaWolf-AI's post about 10 hours ago:
Darwin Family: Zero Gradient Steps, GPQA Diamond 88.89%

How far can we push LLM reasoning *without* training? Our team at VIDRAFT submitted this paper to Daily Papers yesterday, and it's currently #3. Huge thanks to everyone who upvoted; sharing the core ideas below.

Paper: https://huggingface.co/papers/2605.14386
arXiv: https://arxiv.org/abs/2605.14386
Model: https://huggingface.co/FINAL-Bench/Darwin-28B-Opus

---

TL;DR

Darwin Family is a training-free evolutionary merging framework. By recombining the weight spaces of existing LLM checkpoints, with zero gradient-based training, it reaches frontier-level reasoning.

- Darwin-28B-Opus: GPQA Diamond 88.89%
- Zero gradient steps: not a single B200 or H200 hour needed
- Consistent gains across the 4B–35B scale
- Cross-architecture breeding between Transformer and Mamba families
- Stable recursive multi-generation evolution

Three Core Mechanisms

① 14-dim Adaptive Merge Genome: fine-grained recombination at both the component level (Attention / FFN / MLP / LayerNorm / Embedding) and the block level, expanding the prior evolutionary-merge search space.

② MRI-Trust Fusion: we diagnose each layer's reasoning contribution via an **MRI (Model Reasoning Importance)** signal and fuse it with evolutionary search through a **learnable trust parameter**. Trust the diagnostic too much and search collapses; ignore it and search becomes inefficient. Darwin learns the balance from data.

③ Architecture Mapper: weight-space breeding across heterogeneous families. Attention × SSM crossover actually works.

Why It Matters

> Diagnose latent capabilities already encoded in open checkpoints,
> and recombine them, no gradients required.

Replies and critiques welcome
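The trust-weighted fusion idea above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: `mri_trust_merge`, the per-tensor importance scores, and the `trust` parameter are stand-ins for whatever the authors actually use, and scalar weights stand in for full tensors. The point it shows is how a `trust` knob interpolates between a uniform 50/50 merge (ignore the diagnostic) and a purely diagnostic-driven merge (trust it fully).

```python
# Hypothetical sketch of trust-weighted checkpoint merging, in the spirit of
# the post's "MRI-Trust Fusion". All names here are illustrative assumptions.
def mri_trust_merge(sd_a, sd_b, mri_a, mri_b, trust=0.5):
    """Merge two checkpoints parameter-by-parameter.

    sd_a, sd_b   : state dicts with identical keys (scalars stand in for tensors)
    mri_a, mri_b : per-parameter "reasoning importance" scores, >= 0
    trust        : 0.0 = uniform 50/50 mix, 1.0 = fully diagnostic-driven
    """
    merged = {}
    for k in sd_a:
        # How strongly the diagnostic favours checkpoint A on this parameter.
        diag = mri_a[k] / (mri_a[k] + mri_b[k] + 1e-8)
        # Blend the diagnostic with a uniform prior, controlled by `trust`.
        alpha = trust * diag + (1.0 - trust) * 0.5
        merged[k] = alpha * sd_a[k] + (1.0 - alpha) * sd_b[k]
    return merged
```

In an evolutionary loop, `trust` (and the per-component merge coefficients) would be genome entries tuned by the search rather than fixed by hand.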
Organizations
None yet
spaces (26)
- Remove Video Background (pinned, Sleeping, Agents): Easily remove your video's background!
- DALLE 3 XL v2 (pinned, Runtime error, Agents)
- WAN 2.1 Fast & security (Sleeping, Agents)
- tenspce (Running)
- FLUX LOGO Generator (Runtime error, Agents)
- gradio_workflowbuilder (Sleeping, Agents): workflow builder
models (1)
honey90/TenOS-Ko-28B
Text Generation · 27B · Updated 19 days ago · 278
datasets (0)
None public yet