We prepared the 2025 version of the HF AI Timeline Grid, highlighting open vs API-based model releases, and allowing you to browse and filter by access, modality, and release type!
1️⃣ Q1: Learning to Reason. DeepSeek not only releases a top-notch reasoning model, but also shows how to train one and compete with closed frontier models. OpenAI debuts Deep Research.
Significant milestones: DeepSeek R1 & R1-Zero, Qwen 2.5 VL, OpenAI Deep Research, Gemini 2.5 Pro (experimental)
2️⃣ Q2: Multimodality and Coding. More LLMs embrace multimodality by default, and there's a surge in coding agents. Strong vision, audio, and generative models emerge.
Significant milestones: Llama 4, Qwen 3, Imagen 4, OpenAI Codex, Google Jules, Claude 4
3️⃣ Q3: "Gold" rush, OpenAI opens up, the community goes bananas. Flagship models win gold at the Math Olympiad and on hard benchmarks. OpenAI releases strong open-source models, and Google releases the much-anticipated nano-banana for image generation and editing. Agentic workflows become commonplace.
Significant milestones: Gemini and OpenAI IMO Gold, gpt-oss, Gemini 2.5 Flash Image, Grok 4, Claude Sonnet 4.5
4️⃣ Q4: Mistral returns, leaderboard hill-climbing. Mistral is back with updated model families, and all labs release impressive models to wrap up the year!
Significant milestones: Claude Opus 4.5, DeepSeek Math V2, FLUX 2, GPT 5.1, Kimi K2 Thinking, Nano Banana Pro, GLM 4.7, Gemini 3, Mistral 3, MiniMax M2.1 🤯
deepseek-ai/DeepSeek-OCR is out! 🔥 My take ⤵️
> pretty insane that it can parse and re-render charts in HTML
> it concatenates CLIP and SAM features, giving better grounding
> very efficient vision-token-to-performance ratio
> covers 100 languages
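As a rough illustration of that second point (purely conceptual, not DeepSeek-OCR's actual code): run two vision towers over the same image, align their token grids, and concatenate along the channel dimension before projecting into the language model's embedding space. All shapes and dimensions below are invented for the sketch.

```python
# Conceptual sketch of concatenating features from two vision towers
# (a CLIP-style encoder and a SAM-style encoder) before the LLM projector.
# NOT DeepSeek-OCR's actual code; shapes and dims are made up.
import torch
import torch.nn as nn

batch, grid = 1, 16                                       # assumed 16x16 token grid
clip_dim, sam_dim, llm_dim = 1024, 256, 4096              # assumed feature sizes

# Stand-ins for the two encoders' outputs on the same image.
clip_feats = torch.randn(batch, grid * grid, clip_dim)    # semantic features
sam_feats = torch.randn(batch, grid * grid, sam_dim)      # fine-grained features

# Concatenate per-token along channels, then project to the LLM embedding size.
fused = torch.cat([clip_feats, sam_feats], dim=-1)        # (B, tokens, clip_dim + sam_dim)
projector = nn.Linear(clip_dim + sam_dim, llm_dim)
vision_tokens = projector(fused)                          # (B, tokens, llm_dim), fed to the decoder
print(vision_tokens.shape)
```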
Finally, our new paper is out: "FineVision: Open Data Is All You Need" (2510.17269)! 🥳
If you've ever trained a VLM, you know this problem: nobody shares their data mixtures. It's a black box, making it nearly impossible to replicate SOTA work. We wanted to change that.
FineVision unifies 200 sources into 24 million samples. With 17.3 million images and 9.5 billion answer tokens, it's the largest open resource of its kind.
In the paper, we share how we built it:
🔍 finding and cleaning data at scale
🧹 removing excessive duplicates across sources
🤖 decontaminating against 66 public benchmarks
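To give a flavor of what benchmark decontamination can look like, here is a minimal n-gram-overlap sketch. This is an illustration only, not the exact FineVision pipeline; the n-gram size and overlap threshold are assumptions.

```python
# Minimal sketch of n-gram-based decontamination (illustrative only;
# not the exact FineVision pipeline). Flags a training sample if it
# shares too many word n-grams with any benchmark question.
from typing import Iterable, Set, Tuple

def ngrams(text: str, n: int = 8) -> Set[Tuple[str, ...]]:
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(sample: str, benchmark_texts: Iterable[str],
                    n: int = 8, threshold: float = 0.5) -> bool:
    sample_grams = ngrams(sample, n)
    if not sample_grams:
        return False
    for bench in benchmark_texts:
        overlap = len(sample_grams & ngrams(bench, n))
        if overlap / len(sample_grams) >= threshold:  # assumed threshold
            return True
    return False

# Usage: drop training samples whose text overlaps a benchmark question.
benchmarks = ["What is the capital of France?"]
print(is_contaminated("what is the capital of france it is paris", benchmarks, n=4))
```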
My favorite part is Figure 6 (in the video!). It's our visual diversity analysis. It shows that FineVision isn't just bigger; it's more balanced and conceptually richer than other open datasets. NVIDIA's Eagle 2 paper highlighted just how critical this visual diversity is, and our results confirm it: models trained on FineVision consistently outperform those trained on any other open dataset on 11 benchmarks!
🎉 To celebrate the paper, I'm also releasing a concatenated and shuffled version of the full dataset! 👉 HuggingFaceM4/FineVision_full_shuffled
It's ready to stream, so you can start training your own models right away:
from datasets import load_dataset

d = load_dataset("HuggingFaceM4/FineVision_full_shuffled", split="train", streaming=True)
print(next(iter(d)))
A big shoutout to the first authors: Luis Wiedmann and Orr Zohar. They are rockstars!
Qwen3-VL-4B is incredibly easy to fine-tune! We've trained the first DSE model built on it, and it's already performing at the same level as Jina v4!
While Jina Embeddings v4 is built on Qwen2.5-VL-3B (which has a non-commercial license), our model is based on Qwen3-VL-4B and released under Apache 2.0, making it fully commercially permissive.
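For context, a DSE-style retriever embeds queries and document screenshots with the VLM and compares them by cosine similarity. Below is a hypothetical sketch of pulling such an embedding out of Qwen3-VL-4B with transformers by mean-pooling the last hidden state; the checkpoint name, prompt format, and pooling choice are assumptions, not our released model's actual recipe.

```python
# Hypothetical DSE-style embedding sketch: mean-pool a VLM's last hidden state.
# Checkpoint name and pooling are assumptions, not our released model's recipe.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "Qwen/Qwen3-VL-4B-Instruct"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id)

def embed(text: str) -> torch.Tensor:
    inputs = processor(text=text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    hidden = out.hidden_states[-1]           # (1, seq_len, dim)
    emb = hidden.mean(dim=1).squeeze(0)      # mean pooling over the sequence
    return torch.nn.functional.normalize(emb, dim=-1)

q = embed("query: retrieval-augmented generation")
d = embed("passage: RAG combines retrieval with generation.")
print(torch.dot(q, d))  # cosine similarity, since both are L2-normalized
```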
At Robonine, we've released an open-source, 3D-printed parallel gripper designed for robotics applications, compatible with popular budget servos such as the Feetech STS3215 and Waveshare ST3215.
This precision gripper offers parallel jaw movement, real-time monitoring, and positioning accuracy of ±0.1°, making it ideal for both robotics enthusiasts and professionals. Complete build cost: just $69.45 to $74.45, with all components available for purchase on Amazon. Direct links are provided in the Bill of Materials on GitHub.
We encourage you to Watch, Fork, and Star the repository to support our open-source initiative and stay updated on future developments. Your feedback is also welcome!
🤗 Sentence Transformers is joining Hugging Face! 🤗 This formalizes the existing maintenance structure, as I've personally led the project for the past two years on behalf of Hugging Face! Details:
Today, the Ubiquitous Knowledge Processing (UKP) Lab is transferring the project to Hugging Face. Sentence Transformers will remain a community-driven, open-source project, with the same open-source license (Apache 2.0) as before. Contributions from researchers, developers, and enthusiasts are welcome and encouraged. The project will continue to prioritize transparency, collaboration, and broad accessibility.
We see a growing desire from companies to move from large LLM APIs to local models for better control and privacy, and this is reflected in the library's growth: in just the last 30 days, Sentence Transformers models were downloaded more than 270 million times, second only to transformers.
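For anyone new to the library, this is what running an embedding model fully locally looks like; the model name below is just a popular default, not a specific recommendation.

```python
# Minimal local embedding example with Sentence Transformers.
# The model name is only a common default; any Sentence Transformers model works.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # runs locally, no API calls
sentences = [
    "Local models give you control and privacy.",
    "Embeddings power search and retrieval.",
]
embeddings = model.encode(sentences, normalize_embeddings=True)
print(embeddings.shape)                          # (2, 384) for this model
print(model.similarity(embeddings, embeddings))  # cosine similarity matrix
```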
I would like to thank the UKP Lab, and especially Nils Reimers and Iryna Gurevych, for their dedication to the project and for their trust in me, both now and two years ago. Back then, neither of you knew me well, yet you trusted me to take the project to new heights. That choice proved very valuable for the embedding and information retrieval community, and I think the decision to grant Hugging Face stewardship will be similarly successful.
I'm very excited about the future of the project, and for the world of embeddings and retrieval at large!
New blog: Maintain the Unmaintainable - 1M+ Python LOC, 400+ models
How do you stop a million-line library built by thousands of contributors from collapsing under its own weight? At 🤗 Transformers, we do it with explicit software-engineering tenets: principles that make the codebase hackable at scale.
Inside the post:
- One Model, One File: readability first. You can still open a modeling file and see the full logic, top to bottom.
- Modular Transformers: visible inheritance that cuts maintenance cost by ~15x while keeping models readable.
- Config-Driven Performance: FlashAttention, tensor parallelism, and attention scheduling are config-level features, not rewrites.
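To illustrate the config-driven idea with a generic example (not taken from the post): performance features are toggled at load time rather than by editing modeling code. The checkpoint name is just an example, and FlashAttention requires a compatible GPU install.

```python
# Generic illustration of config-driven performance in transformers:
# the attention backend and device placement are load-time options, not code edits.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # swap to "sdpa" or "eager" without touching modeling code
    device_map="auto",                        # automatic device placement
)
```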
Written with @lysandre, @pcuenq, and @yonigozlan, this is a deep dive into how Transformers stays fast, open, and maintainable.
So DeepSeek hits the mainstream media. But it has been a star in our little cult for at least six months. Its meteoric success was not overnight; it was two years in the making.
To learn their history, just look at their 🤗 repo:
* End of 2023: they launched their first model (pretrained in-house), following the Llama 2 architecture
* June 2024: v2 (MoE architecture) surpassed Gemini 1.5, but was still behind Mistral
* September 2024: v2.5 surpassed GPT-4o mini
* December 2024: v3 surpassed GPT-4o
* Now: R1 surpasses o1
Most importantly, if you think DeepSeek's success is singular and unrivaled, that's WRONG. The following models are also near or at the o1 bar.
* Minimax-01 * Kimi k1.5 * Doubao 1.5 pro
Real-Time On-Device AI Agent with Polaris-4B: Run It Yourself, No Cloud, No Cost
We just deployed a real-time on-device AI agent using the Polaris-4B-Preview model, one of the top-performing <6B open LLMs on Hugging Face.
📱 What's remarkable? This model runs entirely on a mobile device, with no cloud and no manual optimization. It was built using ZETIC.MLange, and the best part?
⚡️ It's totally automated, free to use, and anyone can do it. You don't need to write deployment code, tweak backends, or touch device-specific SDKs. Just upload your model, and ZETIC.MLange handles the rest.
🧠 About the Model
- Model: Polaris-4B-Preview
- Size: ~4B parameters
- Ranking: Top 3 on the Hugging Face LLM Leaderboard (<6B)
- Tokenizer: token-incremental inference supported
- Modifications: none; stock weights, just optimized for mobile
⚙️ What ZETIC.MLange Does
ZETIC.MLange is a fully automated deployment framework for on-device AI, built for AI engineers who want to focus on models, not infrastructure.
Here's what it does in minutes:
- 🔍 Analyzes model structure
- ⚙️ Converts to a mobile-optimized format (e.g., GGUF, ONNX)
- 📦 Generates a runnable runtime environment with pre/post-processing
- 📱 Targets real mobile hardware (CPU, GPU, NPU, including Qualcomm, MediaTek, and Apple)
- 🎯 Gives you a downloadable SDK or mobile app component, ready to run
And yes, this is available now, for free, at https://mlange.zetic.ai
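For a sense of what running a GGUF-converted model locally looks like outside any particular framework, here is a minimal llama-cpp-python sketch. This is a generic illustration, not ZETIC.MLange's generated runtime, and the model path is a placeholder for whatever file your conversion step produces.

```python
# Generic illustration of running a GGUF-converted model locally with
# llama-cpp-python; NOT ZETIC.MLange's generated runtime.
from llama_cpp import Llama

llm = Llama(
    model_path="polaris-4b-preview.Q4_K_M.gguf",  # placeholder filename
    n_ctx=2048,
)

out = llm(
    "Q: What runs on-device with no cloud calls?\nA:",
    max_tokens=64,
    stop=["\n"],
)
print(out["choices"][0]["text"])
```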
🧪 For AI Engineers Like You
If you want to:
- Test LLMs directly on-device
- Run models offline with no network latency
- Avoid cloud GPU costs
- Deploy to mobile without writing app-side inference code
Then this is your moment. You can do exactly what we did, using your own models, all in a few clicks.
Hugging Face just wrapped 4 months of deep work with AMD to push kernel-level optimization on their MI300X GPUs. Now, it's time to share everything we learned.
Join us in Paris at STATION F for a hands-on weekend of workshops and a hackathon focused on making open-source LLMs faster and more efficient on AMD.
Prizes, amazing guest speakers, and more... for details, head to https://lu.ma/fmvdjmur!
I've been running small language models (SLLMs) directly on smartphones, completely offline, with no cloud backend or server API calls.
I wanted to share:
1. ⚡ Tokens/sec performance across several SLLMs
2. 🤖 Observations on hardware utilization (where the workload actually runs)
3. 📊 Trade-offs between model size, latency, and feasibility for mobile apps
There are reports for the models below (a minimal tokens/sec timing sketch follows the list):
- Qwen3 0.6B
- NVIDIA Nemotron Qwen 1.5B
- SimpleScaling S1
- TinyLlama
- Unsloth-tuned Llama 3.2 1B
- Naver HyperCLOVA 0.5B
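For reference, the tokens/sec numbers boil down to counting generated tokens over wall-clock time. Here is a generic transformers sketch of that measurement (not our exact mobile tooling; the checkpoint is just one of the small models listed above, used as an example).

```python
# Generic tokens/sec measurement sketch (not our exact mobile tooling):
# count generated tokens over wall-clock time for a small model on CPU.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-0.6B"  # example small model from the list above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

inputs = tokenizer("Explain on-device inference in one sentence.", return_tensors="pt")
start = time.perf_counter()
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
elapsed = time.perf_counter() - start

new_tokens = out.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens} tokens in {elapsed:.2f}s -> {new_tokens / elapsed:.1f} tok/s")
```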
📊 Comparable benchmark reports (no cloud, all on-device). I'd really value your thoughts on:
- Creative ideas to further optimize inference under these hardware constraints
- Other compact LLMs worth testing on-device
- Experiences you've had trying to deploy LLMs at the edge
If there's interest, I'm happy to share more details on the test setup, hardware specs, or the tooling we used for these comparisons.
Thanks for taking a look, and you can build your own at https://mlange.zetic.ai!