RynnVLA-002: A Unified Vision-Language-Action and World Model Paper • 2511.17502 • Published 18 days ago • 24
Scaling Language-Centric Omnimodal Representation Learning Paper • 2510.11693 • Published Oct 13 • 100
High-Fidelity Simulated Data Generation for Real-World Zero-Shot Robotic Manipulation Learning with Gaussian Splatting Paper • 2510.10637 • Published Oct 12 • 12
MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources Paper • 2509.21268 • Published Sep 25 • 103
RynnVLA-001 Collection Using Human Demonstrations to Improve Robot Manipulation • 3 items • Updated Sep 19 • 2
RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation Paper • 2509.15212 • Published Sep 18 • 21
Reply: Hi, we have released the tech report: https://arxiv.org/pdf/2509.15212 Thanks for your interest in our work!
Towards Affordance-Aware Robotic Dexterous Grasping with Human-like Priors Paper • 2508.08896 • Published Aug 12 • 10
Article: RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation • Published Aug 11 • 28
LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization Paper • 2502.13922 • Published Feb 19 • 28
VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding Paper • 2501.13106 • Published Jan 22 • 90