Sebulba throughput
# python sebulba_ppo_envpool_new.py --actor-device-ids 0 --learner-device-ids 1 2 --update-epochs 8 --track
This confirms that if the learner uses the same GPU as the actor, throughput drops. It also shows that threading alone is sufficient; multiprocessing is not necessary.
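To make the threading claim concrete, below is a minimal sketch of the actor/learner split, not the actual code in sebulba_ppo_envpool_new.py. The names (`fake_policy`, `fake_update`, `rollout_queue`) are hypothetical stand-ins. The idea it illustrates: JAX dispatch, device transfers, and blocking waits release the GIL, so a Python actor thread pinned to GPU 0 and a learner thread pinned to other GPUs can overlap without separate processes, mirroring `--actor-device-ids 0 --learner-device-ids 1 2`.

```python
# Minimal sketch of a Sebulba-style actor/learner split with plain Python
# threads. Hypothetical stand-in functions; assumes >= 3 GPUs, with a
# CPU/1-GPU fallback so the script still runs.
import queue
import threading

import jax
import jax.numpy as jnp

devices = jax.devices()
actor_device = devices[0]                      # --actor-device-ids 0
learner_devices = devices[1:3] if len(devices) > 2 else devices[:1]

rollout_queue: "queue.Queue" = queue.Queue(maxsize=4)

@jax.jit
def fake_policy(obs):
    # Stand-in for the actor's forward pass.
    return jnp.tanh(obs).sum(axis=-1)

@jax.jit
def fake_update(batch):
    # Stand-in for one PPO update epoch.
    return (batch ** 2).mean()

def actor_loop(num_rollouts: int):
    key = jax.random.PRNGKey(0)
    for _ in range(num_rollouts):
        key, subkey = jax.random.split(key)
        obs = jax.device_put(jax.random.normal(subkey, (64, 84)), actor_device)
        fake_policy(obs).block_until_ready()   # act on the actor GPU
        rollout_queue.put(obs)                 # hand the rollout to the learner
    rollout_queue.put(None)                    # sentinel: no more data

def learner_loop():
    while (batch := rollout_queue.get()) is not None:
        # Move the rollout off the actor GPU before updating; making the
        # learner share device 0 with the actor is what slows throughput.
        batch = jax.device_put(batch, learner_devices[0])
        fake_update(batch).block_until_ready()

actor = threading.Thread(target=actor_loop, args=(10,))
learner = threading.Thread(target=learner_loop)
actor.start(); learner.start()
actor.join(); learner.join()
```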
[Chart: throughput comparison of `--actor-device-ids 0 --learner-device-ids 1 2` vs `--actor-device-ids 0 --learner-device-ids 0 1`, alongside sebulba_ppo_envpool pmap runs on lambdalabs with 1, 2, 4, and 8 GPUs and a 1-GPU-actor / 4-GPU-learner run]