LFM2 is a new generation of hybrid models, designed for on-device deployment.
AI & ML interests
A new generation of foundation models from first principles.
Recent Activity
- LiquidAI/LFM2.5-VL-1.6B: Image-Text-to-Text • 2B • Updated • 124k downloads • 250 likes
- LFM2.5-VL-1.6B WebGPU (Space): In-browser vision-language inference with LFM2.5-VL-1.6B • 73 likes
- LiquidAI/LFM2.5-VL-1.6B-GGUF: Image-Text-to-Text • 1B • Updated • 157k downloads • 67 likes
- LiquidAI/LFM2.5-VL-1.6B-ONNX: Image-Text-to-Text • Updated • 981 downloads • 25 likes
Library of task-specific models: https://www.liquid.ai/blog/introducing-liquid-nanos-frontier-grade-performance-on-everyday-devices

- LiquidAI/LFM2-1.2B-Extract: Text Generation • 1B • Updated • 16.8k downloads • 106 likes
- LiquidAI/LFM2-350M-Extract: Text Generation • 0.4B • Updated • 361 downloads • 77 likes
- LiquidAI/LFM2-350M-ENJP-MT: Translation • 0.4B • Updated • 310 downloads • 87 likes
- LiquidAI/LFM2-1.2B-RAG: Text Generation • Updated • 752 downloads • 112 likes
End-to-end audio foundation model, designed for low-latency, real-time conversations.
Collection of Instruct, Base, and Japanese LFM2.5-1.2B models.

- LiquidAI/LFM2.5-1.2B-Thinking: Text Generation • Updated • 32.8k downloads • 311 likes
- LiquidAI/LFM2.5-1.2B-Instruct: Text Generation • 1B • Updated • 206k downloads • 527 likes
- LiquidAI/LFM2.5-1.2B-JP: Text Generation • 1B • Updated • 3.18k downloads • 140 likes
- LiquidAI/LFM2.5-1.2B-Base: Text Generation • Updated • 34.7k downloads • 116 likes
LFM2-VL is our first series of vision-language models, designed for on-device deployment.

- LiquidAI/LFM2-VL-3B: Image-Text-to-Text • 3B • Updated • 11.6k downloads • 132 likes
- LiquidAI/LFM2-VL-1.6B: Image-Text-to-Text • 2B • Updated • 4.07k downloads • 224 likes
- LiquidAI/LFM2-VL-450M: Image-Text-to-Text • 0.5B • Updated • 17.9k downloads • 145 likes
- LiquidAI/LFM2-VL-3B-GGUF: Image-Text-to-Text • 3B • Updated • 79.9k downloads • 34 likes