Diffusion-Based Planning for Autonomous Driving with Flexible Guidance

Abstract

Achieving human-like driving behaviors in complex open-world environments is a critical challenge in autonomous driving. Contemporary learning-based planning approaches, such as imitation learning methods, often struggle to balance competing objectives and lack safety assurance, owing to limited adaptability and an inadequacy in learning the complex multi-modal behaviors commonly exhibited in human planning, not to mention their strong reliance on fallback strategies with predefined rules. We propose a novel transformer-based Diffusion Planner for closed-loop planning, which can effectively model multi-modal driving behavior and ensure trajectory quality without any rule-based refinement. Our model supports joint modeling of both prediction and planning tasks under the same architecture, enabling cooperative behaviors between vehicles. Moreover, by learning the gradient of the trajectory score function and employing a flexible classifier guidance mechanism, Diffusion Planner effectively achieves safe and adaptable planning behaviors. Evaluations on the large-scale real-world autonomous planning benchmark nuPlan and our newly collected 200-hour delivery-vehicle driving dataset demonstrate that Diffusion Planner achieves state-of-the-art closed-loop performance with robust transferability across diverse driving styles.
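The classifier guidance mechanism mentioned above can be illustrated with a minimal sketch: at each sampling step, the update follows the learned score (the gradient of the log-density over trajectories) plus the gradient of a differentiable guidance objective (e.g., a safety or comfort cost). The function names `score_fn` and `guide_grad_fn` below are placeholders, not the paper's actual interfaces; this is a generic Langevin-style guided-sampling step under those assumptions, not the authors' implementation.

```python
import numpy as np

def guided_sample_step(x, score_fn, guide_grad_fn, step=0.1, guide_scale=1.0, rng=None):
    """One guided sampling step: follow the learned score plus a guidance gradient.

    x            : current trajectory sample (array)
    score_fn     : approximates the score, i.e. the gradient of log p(x)
    guide_grad_fn: gradient of a differentiable guidance objective (e.g., safety cost)
    """
    rng = rng or np.random.default_rng()
    # Combined drift: model score plus scaled guidance gradient.
    drift = score_fn(x) + guide_scale * guide_grad_fn(x)
    # Langevin-style update with injected noise.
    noise = np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x + step * drift + noise

# Toy usage: base density N(0, 1) (score = -x), guidance pushing samples upward.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
for _ in range(500):
    x = guided_sample_step(x, lambda v: -v, lambda v: np.ones_like(v), rng=rng)
# Samples concentrate where the score and guidance gradients balance (around 1 here).
```

The design point is that the guidance term is added only at inference time, so the same trained score model can be steered toward different objectives without retraining.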

Publication
The Thirteenth International Conference on Learning Representations (ICLR 2025)
Yinan Zheng
PhD Student
Jinliang Zheng
PhD Student
Liyuan Mao
Undergraduate student at Shanghai Jiao Tong University


Jianxiong Li
PhD Student
Xianyuan Zhan
Faculty Member