Open RL Benchmark 0.6.0
Open source code, open progress, and open reproducibility
Deprecation Notice:
We are deprecating this page in favor of a refreshed version: https://wandb.ai/openrlbenchmark/openrlbenchmark/reportlist. See additional context here: https://github.com/openrlbenchmark/openrlbenchmark
Existing content:
Open RL Benchmark by CleanRL is a comprehensive, interactive, and reproducible benchmark of deep Reinforcement Learning (RL) algorithms. It uses Weights & Biases to track the experiment data of popular deep RL algorithms (e.g., DQN, PPO, DDPG, TD3) across a variety of environments (e.g., Atari, MuJoCo, PyBullet, Procgen, Griddly, MicroRTS). The experiment data includes:
- reproducibility info:
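To give a sense of how this tracked data can be accessed, here is a minimal sketch using the W&B public API. The entity/project path ("openrlbenchmark/cleanrl"), the config key env_id, and the metric name charts/episodic_return are assumptions for illustration only; check the actual project for the exact names.

```python
import wandb

api = wandb.Api()

# Hypothetical project path and config key; adjust to the benchmark project you want to query.
runs = api.runs(
    "openrlbenchmark/cleanrl",
    filters={"config.env_id": "CartPole-v1"},
)

for run in runs:
    # run.config holds the hyperparameters; run.name identifies the experiment.
    print(run.name, run.config.get("exp_name"))

    # run.history() returns the logged time series (assumed metric names) as a DataFrame.
    df = run.history(keys=["charts/episodic_return", "global_step"])
    print(df.tail())
```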
Showcased Environments
Gym-μRTS: Toward Affordable Deep Reinforcement Learning Research in Real-Time Strategy Games
Train agents to play an RTS game with commodity machines (one GPU, three vCPU, 16GB RAM)
Gym-pysc2 Benchmark
https://github.com/vwxyzjn/gym-pysc2
Atari
The Arcade Learning Environment (ALE) is an object-oriented framework that allows researchers to develop AI agents for Atari 2600 games
Slimevolleygym
Slimevolleygym is a fun and interesting environment for studying self-play
Procgen and Learning to Generalize in RL
Initial results on OpenAI's Procgen environments using CleanRL's implementations
MuJoCo
Results from training agents on robotics tasks in MuJoCo
PettingZoo
https://www.pettingzoo.ml/
CarRacing-v0
Learning to drive a car with RL
Classic Control
Classic environments such as CartPole-v1, Acrobot-v1, and LunarLander-v2
PyBullet and Other Continuous Action Tasks
Learn to perform robotics tasks with the bullet3 physics engine
Contribution
We want to provide this experience for as many deep RL algorithms and games as possible. If you share this vision, consider checking out our contribution guide.