Tools & Libraries


We provide open-source code implementations for most of our research; please check our papers for the related code. In addition, we aim to develop easy-to-use, comprehensive algorithm libraries and tools to accelerate the real-world deployment of advanced data-driven decision-making methods.

Data-Driven Decision-Making Libraries / Tools


Data-Driven Control Lib (D2C) is a library for data-driven decision-making & control based on state-of-the-art offline reinforcement learning (RL), offline imitation learning (IL), and offline planning algorithms. It is a platform for solving various decision-making & control problems in real-world scenarios. D2C is designed to offer fast and convenient algorithm performance development and testing, as well as providing easy-to-use toolchains to accelerate the real-world deployment of SOTA data-driven decision-making methods.

D2C's key features include (with more algorithms to come):


  • D2C includes a large collection of offline RL and IL algorithms: model-free and model-based offline RL/IL algorithms, as well as planning methods.
  • D2C is highly modular and extensible. You can easily build custom algorithms and conduct experiments with it.
  • D2C automates the development process in real-world control applications. It simplifies the steps of problem definition/mathematical formulation, policy training, policy evaluation and model deployment.
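The pipeline D2C automates — fixed dataset in, trained and evaluated policy out — can be sketched in a few lines. This is an illustrative example, not the D2C API: the function names are hypothetical, and "training" here is the simplest offline method, behavior cloning of a linear policy by least squares.

```python
# Hypothetical sketch of the offline pipeline D2C automates:
# dataset -> policy training -> policy evaluation -> deployment.
# "Training" is behavior cloning: fit a linear policy a = w * s
# by closed-form least squares on a fixed (state, action) dataset.
# All names are illustrative; this is not the D2C API.

def fit_linear_policy(dataset):
    """Closed-form least squares for a = w * s (no intercept)."""
    num = sum(s * a for s, a in dataset)
    den = sum(s * s for s, _ in dataset)
    return num / den

def evaluate(policy_w, dataset):
    """Mean squared action error on the dataset."""
    errs = [(policy_w * s - a) ** 2 for s, a in dataset]
    return sum(errs) / len(errs)

# Offline dataset logged by some behavior policy a = 2 * s.
data = [(s, 2.0 * s) for s in (-2.0, -1.0, 0.5, 1.0, 3.0)]
w = fit_linear_policy(data)
mse = evaluate(w, data)
print(w, mse)  # w = 2.0, mse = 0.0
```

The point of the sketch is the shape of the workflow, not the learner: a real offline RL algorithm would replace the least-squares fit with value-constrained or model-based training, while the dataset-in, policy-out structure stays the same.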

Library Information:

Online RL Library

OneRL: an event-driven, fully distributed reinforcement learning framework proposed in “A Versatile and Efficient Reinforcement Learning Approach for Autonomous Driving”, which facilitates highly efficient policy learning in RL-based tasks.

  • Super fast RL training! (15–30 min for MuJoCo & Atari on a single machine)
  • State-of-the-art performance
  • Scheduled and pipelined sample collection
  • Completely lock-free execution
  • Fully distributed architecture
  • Full profiling & overhead identification tools
  • Online visualization & rendering
  • Supports multi-GPU parallel training
  • Supports exporting trained policies to ONNX for faster inference & deployment
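The idea behind scheduled, pipelined sample collection is that environment stepping and learning overlap instead of alternating. A minimal sketch of that pattern — illustrative only, not OneRL's architecture or API — uses actor threads feeding a shared queue while a learner drains it concurrently:

```python
# Illustrative sketch (not OneRL's API) of pipelined sample
# collection: actor threads push transitions into a shared queue
# while the learner consumes them concurrently, so environment
# stepping and learning overlap instead of running in lockstep.
import queue
import threading

def actor(env_id, n_steps, buf):
    """Stand-in for an environment worker producing transitions."""
    state = 0
    for _ in range(n_steps):
        action = state % 2                       # stand-in for a policy
        next_state, reward = state + 1, 1.0
        buf.put((env_id, state, action, reward, next_state))
        state = next_state
    buf.put((env_id, None, None, None, None))    # end-of-stream marker

def learner(buf, n_actors):
    """Consumes transitions as they arrive until all actors finish."""
    done, transitions = 0, []
    while done < n_actors:
        item = buf.get()
        if item[1] is None:
            done += 1
        else:
            transitions.append(item)  # a real learner would update here
    return transitions

buf = queue.Queue(maxsize=256)
threads = [threading.Thread(target=actor, args=(i, 100, buf))
           for i in range(4)]
for th in threads:
    th.start()
data = learner(buf, n_actors=4)      # runs while actors keep producing
for th in threads:
    th.join()
print(len(data))  # 400 transitions collected concurrently
```

OneRL goes much further — fully distributed processes, lock-free execution, and GPU-batched inference rather than Python threads — but the producer/consumer decoupling shown here is the core scheduling idea.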
Xianyuan Zhan
Faculty Member