NaviDriveVLM: Decoupling High-Level Reasoning and Motion Planning for Autonomous Driving
Abstract
NaviDriveVLM is a decoupled vision-language model framework for autonomous driving that separates high-level reasoning from motion planning, outperforming large VLM baselines on end-to-end driving while reducing training cost.
Vision-language models (VLMs) have emerged as a promising direction for end-to-end autonomous driving (AD) by jointly modeling visual observations, driving context, and language-based reasoning. However, existing VLM-based systems face a trade-off between high-level reasoning and motion planning: large models offer strong semantic understanding but are costly to adapt for precise control, whereas small VLMs can be fine-tuned efficiently but often exhibit weaker reasoning. We propose NaviDriveVLM, a decoupled framework that separates reasoning from action generation using a large-scale Navigator and a lightweight trainable Driver. This design preserves reasoning ability, reduces training cost, and provides an explicit, interpretable intermediate representation for downstream planning. Experiments on the nuScenes benchmark show that NaviDriveVLM outperforms large VLM baselines in end-to-end motion planning.
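The decoupled design can be pictured as a two-stage pipeline: a frozen large VLM emits an explicit intermediate plan, and only a small planner is trained to turn that plan into a trajectory. The sketch below is a minimal illustration of this split under assumptions, not the authors' implementation; every name in it (Navigator, Driver, MetaPlan, reason, plan) is hypothetical, and the Navigator is stubbed rather than backed by a real VLM.

```python
# Hypothetical sketch of a decoupled Navigator/Driver pipeline as described
# in the abstract. Names and interfaces are assumptions, not the paper's API.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class MetaPlan:
    """Explicit, interpretable intermediate representation (assumed fields)."""
    maneuver: str          # e.g. "keep_lane", "yield_to_pedestrian"
    rationale: str         # natural-language justification, useful for auditing
    speed_hint_mps: float  # coarse target speed suggested by the Navigator


class Navigator:
    """Large frozen VLM: high-level reasoning only, never fine-tuned."""

    def reason(self, camera_frames, route_instruction: str) -> MetaPlan:
        # In practice this would prompt a large VLM and parse its output;
        # stubbed here so the sketch stays self-contained and runnable.
        return MetaPlan(
            maneuver="keep_lane",
            rationale="Road ahead is clear; follow the planned route.",
            speed_hint_mps=8.0,
        )


class Driver:
    """Small trainable planner: maps a MetaPlan plus ego state to waypoints."""

    def plan(self, meta: MetaPlan, ego_speed_mps: float,
             horizon: int = 6, dt: float = 0.5) -> List[Tuple[float, float]]:
        # Toy kinematics: track the Navigator's speed hint and roll out
        # straight-line waypoints. A real Driver would be a learned network
        # trained on driving trajectories (e.g. nuScenes).
        waypoints, x, v = [], 0.0, ego_speed_mps
        for _ in range(horizon):
            v += 0.2 * (meta.speed_hint_mps - v)  # first-order speed tracking
            x += v * dt
            waypoints.append((x, 0.0))  # (longitudinal, lateral) in meters
        return waypoints


# Usage: only the Driver is trained; the MetaPlan is both the interface
# between the two stages and a human-readable record of the reasoning.
meta = Navigator().reason(camera_frames=None, route_instruction="go straight")
trajectory = Driver().plan(meta, ego_speed_mps=5.0)
print(meta.rationale, trajectory[:2])
```

One point the sketch makes concrete: because the intermediate representation is explicit, the expensive Navigator can stay frozen while the lightweight Driver is fine-tuned, which is where the claimed training-cost reduction comes from.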
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API.
- MindDriver: Introducing Progressive Multimodal Reasoning for Autonomous Driving (2026)
- HiST-VLA: A Hierarchical Spatio-Temporal Vision-Language-Action Model for End-to-End Autonomous Driving (2026)
- Efficient and Explainable End-to-End Autonomous Driving via Masked Vision-Language-Action Diffusion (2026)
- SteerVLA: Steering Vision-Language-Action Models in Long-Tail Driving Scenarios (2026)
- UniMotion: A Unified Motion Framework for Simulation, Prediction and Planning (2026)
- HERMES: A Holistic End-to-End Risk-Aware Multimodal Embodied System with Vision-Language Models for Long-Tail Autonomous Driving (2026)
- K-Gen: A Multimodal Language-Conditioned Approach for Interpretable Keypoint-Guided Trajectory Generation (2026)

