Open RL Benchmark 0.6.0

Open source code, open progress, and open reproducibility
Created on December 18 | Last edited on May 27

Deprecation Notice:

We are deprecating this page in favor of a refreshed version: https://wandb.ai/openrlbenchmark/openrlbenchmark/reportlist. See additional context here: https://github.com/openrlbenchmark/openrlbenchmark

Existing content:

Open RL Benchmark by CleanRL is a comprehensive, interactive, and reproducible benchmark of deep Reinforcement Learning (RL) algorithms. It uses Weights & Biases to track the experiment data of popular deep RL algorithms (e.g. DQN, PPO, DDPG, TD3) across a variety of games (e.g. Atari, MuJoCo, PyBullet, Procgen, Griddly, MicroRTS). The tracked experiment data includes hyperparameters, training metrics, videos of the agents playing, system metrics, and logs.
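As a rough illustration, the sketch below shows how a single run's experiment data might be tracked with the Weights & Biases Python client. This is not CleanRL's actual training script; the project name, metric keys, and hyperparameters are placeholder assumptions.

```python
# Minimal sketch of W&B experiment tracking, in the spirit of Open RL Benchmark.
# Project name, config values, and metric keys below are illustrative assumptions.
import wandb

config = {"env_id": "CartPole-v1", "algorithm": "PPO", "total_timesteps": 10_000, "seed": 1}

run = wandb.init(
    project="cleanrl",   # hypothetical project name
    config=config,       # hyperparameters are stored alongside the run
    monitor_gym=True,    # uploads agent gameplay videos if gym video recording is enabled
    save_code=True,      # snapshots the training script for reproducibility
)

for global_step in range(0, config["total_timesteps"], 1_000):
    # In a real training loop these values would come from the agent's rollouts and updates.
    wandb.log({"charts/episodic_return": 0.0, "losses/value_loss": 0.0}, step=global_step)

run.finish()
```

Logged this way, every run's metrics, configuration, and media are viewable and comparable in the W&B dashboard, which is what makes the benchmark interactive and reproducible.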

Showcased Environments




Contribution

We want to provide this experience for as many deep RL algorithms and games as possible. If you share this vision, consider checking out our contribution guide.