Regression Report: train_reward_accelerate
Panel configuration (openrlbenchmark filter strings, kept verbatim):
- '?we=openrlbenchmark&wpn=lm-human-preferences&xaxis=_step&ceik=task_id&cen=task.value.policy.initial_model&metrics=train_reward/minibatch/error' → '124M'
- '?we=openrlbenchmark&wpn=lm_human_preference_details&xaxis=_step&ceik=label_dataset&cen=exp_name&metrics=train/loss' → 'train_reward_accelerate?tag=v0.1.0-68-g2f3aa38&tag=tf_adam&tag=gpt2&cl=tf_adam,gpt2'
- '?we=tliu&wpn=cleanrl&xaxis=_step&ceik=label_dataset&cen=exp_name&metrics=train/loss' → 'train_reward_jax', 'train_reward_accelerate'
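Each filter string above is a URL-style query describing where the curves come from: we is the W&B entity, wpn the project, xaxis the x-axis field, ceik the config key used to group runs (task_id or label_dataset here), cen the config key used as the experiment name, and metrics the logged key being plotted. The key meanings follow the openrlbenchmark convention; below is a minimal sketch of decoding one of these strings with the standard library (no W&B calls involved):

```python
from urllib.parse import parse_qs, urlparse

# One of the filter strings from the panel configuration above.
filter_str = (
    "?we=openrlbenchmark&wpn=lm_human_preference_details&xaxis=_step"
    "&ceik=label_dataset&cen=exp_name&metrics=train/loss"
)

# Decode the query into a flat dict of settings.
params = {k: v[0] for k, v in parse_qs(urlparse(filter_str).query).items()}
print(params)
# {'we': 'openrlbenchmark', 'wpn': 'lm_human_preference_details',
#  'xaxis': '_step', 'ceik': 'label_dataset', 'cen': 'exp_name',
#  'metrics': 'train/loss'}
```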
Created on August 27 | Last edited on August 27
Runs compared (see the W&B API sketch below):
- openrlbenchmark/lm-human-preferences/124M — reference curves from the original OpenAI implementation, plotting train_reward/minibatch/error
- tf_adam,gpt2 — train_reward_accelerate runs from openrlbenchmark/lm_human_preference_details (tags v0.1.0-68-g2f3aa38, tf_adam, gpt2), plotting train/loss
- tliu/cleanrl/train_reward_jax
- tliu/cleanrl/train_reward_accelerate
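To pull the same curves outside of this report, the runs can be fetched with the W&B public API. This is a minimal sketch, assuming the entity, project, metric, and tag names from the panel configuration above; the exact filter paths may need adjusting for a different workspace.

```python
import wandb
import matplotlib.pyplot as plt

api = wandb.Api()

# Reproduction runs: train_reward_accelerate from openrlbenchmark/lm_human_preference_details,
# tagged tf_adam (and gpt2), which log the reward-model loss as train/loss.
runs = api.runs(
    "openrlbenchmark/lm_human_preference_details",
    filters={"config.exp_name": "train_reward_accelerate", "tags": "tf_adam"},
)

for run in runs:
    # history(keys=...) returns a sampled pandas DataFrame that includes _step.
    history = run.history(keys=["train/loss"])
    plt.plot(history["_step"], history["train/loss"], alpha=0.5, label=run.name)

plt.xlabel("_step")
plt.ylabel("train/loss")
plt.title("train_reward_accelerate (tf_adam, gpt2)")
plt.legend(fontsize=6)
plt.savefig("train_reward_regression.png", dpi=150)
```

The reference curves can be fetched the same way from openrlbenchmark/lm-human-preferences (124M), plotting train_reward/minibatch/error instead of train/loss.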