Seizing Serendipity: Exploiting the Value of Past Success in Off-Policy Actor-Critic

Abstract

Learning high-quality Q-value functions plays a key role in the success of many modern off-policy deep reinforcement learning (RL) algorithms. Previous works focus on addressing the value overestimation issue, an outcome of adopting function approximators and off-policy learning. Deviating from the common viewpoint, we observe that Q-values are in fact underestimated in the latter stage of the RL training process, primarily because Bellman updates use inferior actions from the current policy rather than the better-performing action samples already in the replay buffer. We hypothesize that this long-neglected phenomenon potentially hinders policy learning and reduces sample efficiency. Our insight to address this issue is to incorporate sufficient exploitation of past successes while maintaining exploration optimism. We propose the Blended Exploitation and Exploration (BEE) operator, a simple yet effective approach that updates Q-values using both historical best-performing actions and the current policy. The instantiations of our method in both model-free and model-based settings outperform state-of-the-art methods in various continuous control tasks and achieve strong performance in failure-prone scenarios and real-world robot tasks.
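To make the idea of blending exploitation and exploration concrete, below is a minimal sketch of a mixed Bellman target in a SAC-style setup. The mixture weight `lam`, the helper names, and the use of the buffer's stored next actions as the "historical best-performing" exploitation term are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import torch

def blended_bellman_target(q_target_net, policy, batch,
                           gamma=0.99, lam=0.5, alpha=0.2):
    """Sketch of a blended exploitation/exploration Bellman target.

    batch: (obs, actions, rewards, next_obs, next_actions, dones) tensors,
    where `next_actions` are the actions actually taken at `next_obs`
    in the stored trajectories (illustrative stand-in for the buffer's
    best-performing actions).
    """
    obs, actions, rewards, next_obs, next_actions, dones = batch

    with torch.no_grad():
        # Exploitation term: bootstrap with actions already in the replay
        # buffer, so the target reflects the value of past successes.
        q_exploit = q_target_net(next_obs, next_actions)

        # Exploration term: SAC-style soft value under the current policy,
        # assuming policy.sample returns (action, log_prob).
        pi_actions, log_probs = policy.sample(next_obs)
        q_explore = q_target_net(next_obs, pi_actions) - alpha * log_probs

        # Blend the two bootstrap values; lam trades off exploitation
        # of past successes against exploration optimism.
        blended = lam * q_exploit + (1.0 - lam) * q_explore
        return rewards + gamma * (1.0 - dones) * blended
```

The resulting target would replace the standard soft Bellman target when regressing the online Q-network, with `lam` chosen (or scheduled) to control how strongly past successes are exploited.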

Publication
In the 41st International Conference on Machine Learning (ICML 2024)
Tianying Ji
Research Intern

Tianying Ji is a Ph.D. student at Tsinghua University. She is broadly interested in reinforcement learning and optimization theory, especially model-based reinforcement learning and offline reinforcement learning.

Yu Luo
Research Intern
Xianyuan Zhan
Faculty Member