Gridnet2
Created on January 27 | Last edited on January 28
[Charts omitted: Run set 2 (1 run), Run set 3 (1 run)]
Bonus Section 2: Selfplay
We have also tried some selfplay experiments; selfplay is a crucial component in recent work such as AlphaStar (Vinyals et al., 2019). If the agents issue actions via Gridnet, selfplay can be implemented naturally on top of PPO's parallel environments. That is, assuming there are 2 parallel environments, we can spawn 1 game under the hood, return player 1's and player 2's observations as the first and second parallel environments, respectively, and route each environment's actions back to the corresponding player.
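To make the mapping concrete, here is a minimal sketch of the idea: a wrapper that exposes a single two-player game as 2 "parallel" single-agent environments, so an unmodified PPO vectorized training loop trains both sides with shared weights. The `TwoPlayerGame` class below is a hypothetical stand-in, not the actual Gridnet environment API.

```python
import numpy as np


class TwoPlayerGame:
    """Hypothetical stand-in for a two-player Gridnet game.

    step(actions) takes one action per player and returns one
    observation, one reward, and a shared done flag."""

    def __init__(self, obs_shape=(4,)):
        self.obs_shape = obs_shape
        self.t = 0

    def reset(self):
        self.t = 0
        return [np.zeros(self.obs_shape) for _ in range(2)]

    def step(self, actions):
        self.t += 1
        obs = [np.full(self.obs_shape, self.t, dtype=float) for _ in range(2)]
        rewards = [0.0, 0.0]
        done = self.t >= 10  # the episode ends for both players at once
        return obs, rewards, done


class SelfPlayVecEnv:
    """Presents one two-player game as 2 'parallel' environments:
    parallel env i receives player i's observation and supplies
    player i's action."""

    def __init__(self, game):
        self.game = game
        self.num_envs = 2

    def reset(self):
        return np.stack(self.game.reset())  # shape (2, *obs_shape)

    def step(self, actions):
        # actions[i] comes from parallel env i, i.e. player i
        obs, rewards, done = self.game.step(list(actions))
        if done:
            obs = self.game.reset()  # auto-reset, as PPO vec envs expect
        return np.stack(obs), np.array(rewards), np.array([done, done])


venv = SelfPlayVecEnv(TwoPlayerGame())
obs = venv.reset()  # a batch of 2 observations, one per player
```

Because PPO only sees a batch of observations and emits a batch of actions, it never needs to know that "env 0" and "env 1" are really the two sides of the same game.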
However, note that the agents in the selfplay experiments are learning to handle both starting locations on the map, which is a different setting. For a fair comparison, the other experiments in the main text would also need to be configured to learn with randomized starting locations. Nevertheless, it is fun to watch the RL agent fight against itself:
[Chart omitted: Run set (4 runs)]