D2C (Data-driven Control Library) is a library for data-driven decision-making and control, built on state-of-the-art offline reinforcement learning (RL), offline imitation learning (IL), and offline planning algorithms. It is a platform for solving decision-making and control problems in real-world scenarios. D2C is designed for fast, convenient algorithm development and performance testing, and provides easy-to-use toolchains to accelerate the real-world deployment of SOTA data-driven decision-making methods.
The currently supported offline RL/IL algorithms include (more to come):
- Twin Delayed DDPG with Behavior Cloning (TD3+BC)
- Distance-Sensitive Offline Reinforcement Learning (DOGE)
- Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning (H2O)
- Sparse Q-learning (SQL)
- Policy-guided Offline RL (POR)
- Offline Reinforcement Learning with Implicit Q-Learning (IQL)
- Discriminator-Guided Model-Based Offline Imitation Learning (DMIL)
- Behavior Cloning (BC)
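To give a concrete sense of the objectives these algorithms optimize, the TD3+BC actor loss augments the standard TD3 policy objective with a behavior-cloning term that keeps the policy close to the dataset. Below is a minimal NumPy sketch of that published objective; the function name and signature are illustrative assumptions, not D2C's internal implementation:

```python
import numpy as np

def td3_bc_actor_loss(q_values, policy_actions, dataset_actions, alpha=2.5):
    """TD3+BC actor loss: maximize lambda * Q while staying close to the data.

    q_values:        Q(s, pi(s)) for a batch of states, shape (batch,)
    policy_actions:  actions proposed by the policy, shape (batch, act_dim)
    dataset_actions: actions recorded in the offline dataset, same shape
    """
    # lambda normalizes the Q term by its average magnitude (alpha = 2.5 in the TD3+BC paper)
    lam = alpha / (np.abs(q_values).mean() + 1e-8)
    # behavior-cloning regularizer: mean squared distance to the dataset actions
    bc_term = np.mean((policy_actions - dataset_actions) ** 2)
    # the loss is minimized by gradient descent, hence the negated Q term
    return -lam * q_values.mean() + bc_term
```

Moving further from the dataset actions raises the loss, which is exactly the conservatism that makes TD3+BC suitable for offline data.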
D2C includes a large collection of offline RL and IL algorithms: model-free and model-based offline RL/IL algorithms, as well as planning methods.
D2C is highly modular and extensible, making it easy to build custom algorithms and run experiments.
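Modular offline-RL codebases typically expose algorithms through a small common interface so that new methods plug into the same training and evaluation loops. The sketch below illustrates that pattern; every class and method name here is a hypothetical assumption for illustration, not D2C's actual API:

```python
from abc import ABC, abstractmethod

class OfflineAlgorithm(ABC):
    """Hypothetical base class for a pluggable offline-RL algorithm
    (illustrative only; not D2C's real interface)."""

    @abstractmethod
    def train_step(self, batch):
        """Consume one batch of offline transitions; return a dict of losses."""

    @abstractmethod
    def select_action(self, observation):
        """Return the policy's action for a single observation."""

class ConstantPolicy(OfflineAlgorithm):
    """Trivial example algorithm: ignores the data and always outputs 0."""

    def train_step(self, batch):
        return {"loss": 0.0}

    def select_action(self, observation):
        return 0.0
```

With such an interface, a custom algorithm only needs to implement its own loss and action selection; the surrounding data loading, logging, and evaluation machinery is reused unchanged.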
D2C streamlines the development process for real-world control applications. It simplifies problem definition and mathematical formulation, policy training, policy evaluation, and model deployment.