Hugging Face
Open to Collab
Troy Schultz (TroyDoesAI)
201 followers · 87 following
AI & ML interests
Contact me ~ open for work ~ contract or W2
Recent Activity
reacted to SeaWolf-AI's post, 27 days ago:
🧬 Darwin-35B-A3B-Opus: The Child That Surpassed Both Parents

What if a merged model could beat both its parents? We proved it can. Darwin-35B-A3B-Opus is a 35B MoE model (3B active) built with our Darwin V5 engine, the first evolution system that CT-scans parent models before merging them.

Model: https://huggingface.co/FINAL-Bench/Darwin-35B-A3B-Opus

The result speaks for itself: GPQA Diamond 90.0%, versus Father (Qwen3.5-35B-A3B) at 84.2% and Mother (Claude 4.6 Opus Distilled) at 85.0%. That's +6.9% over Father and +5.9% over Mother. Not a tradeoff, a genuine leap. Meanwhile, MMMLU sits at 85.0% (Father: 85.2%), multimodal is fully intact, and all 201 languages are preserved.

How? Model MRI changed everything. Traditional merging is guesswork. Darwin V4 added evolution. Darwin V5 added X-ray vision. Model MRI scans each parent layer by layer and discovers: Mother's L34–L38 is the reasoning engine (peak cosine distance), 50–65% of Mother's experts are dead (killed by text-only distillation), and Father is a healthy generalist with every expert alive.

The prescription: transplant Mother's reasoning brain at L38 (90% weight), replace her dead experts with Father's living ones, and let Father's router handle the output layer. Reasoning went up. Versatility stayed intact. No tradeoff, just evolution.

35B total, 3B active (MoE) · GPQA Diamond 90.0% · MMMLU 85.0% (201 languages) · Multimodal Image & Video · 262K native context · 147.8 tok/s on H100 · Runs on a single RTX 4090 (Q4) · Apache 2.0

Darwin V5's full algorithm and technical details will be released alongside an upcoming paper.

Live Demo: https://huggingface.co/spaces/FINAL-Bench/Darwin-35B-A3B-Opus
FINAL Bench Leaderboard: https://huggingface.co/spaces/FINAL-Bench/Leaderboard
ALL Bench Leaderboard: https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard

Built by VIDRAFT · Supported by the Korean Government GPU Support Program
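The Darwin V5 algorithm itself is unreleased, but the two diagnostics the post names (per-layer cosine distance between parent weights, and detecting "dead" MoE experts) can be sketched in a few lines. This is a minimal illustration only: the function names, the dict-of-arrays model representation, and the norm threshold are assumptions, not the actual Model MRI implementation.

```python
import numpy as np

def layer_cosine_distance(parent_a, parent_b):
    """Per-layer cosine distance between two models.

    parent_a / parent_b: dict mapping layer name -> weight array.
    Higher distance means the layer diverged more between parents,
    e.g. a "reasoning engine" region would show a peak.
    """
    distances = {}
    for name in parent_a:
        a = parent_a[name].ravel()
        b = parent_b[name].ravel()
        cos = float(np.dot(a, b) /
                    (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        distances[name] = 1.0 - cos  # 0 = identical direction, 2 = opposite
    return distances

def dead_experts(expert_weights, threshold=1e-3):
    """Flag experts whose weights collapsed toward zero (hypothetical criterion).

    expert_weights: list of per-expert weight arrays for one MoE layer.
    Returns indices of experts whose norm falls below the threshold.
    """
    return [i for i, w in enumerate(expert_weights)
            if np.linalg.norm(w) < threshold]
```

In practice a dead-expert test would more likely look at router activation statistics over a corpus rather than raw weight norms; the weight-norm check above is just the simplest stand-in.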
new activity on blascotobasco/Mistral-NeMoE-12B-16E, 27 days ago: "Wow what a cool experiment"
liked a model, 27 days ago: blascotobasco/Mistral-NeMoE-12B-16E
TroyDoesAI's datasets (3)
TroyDoesAI/Cobol_Unfiltered · Preview · Updated Dec 8, 2025 · 20
TroyDoesAI/deep-ml-alpaca-instruct · Viewer · Updated Nov 1, 2024 · 66 · 10 · 1
TroyDoesAI/Cobol · Viewer · Updated Jun 13, 2024 · 2.98k · 8