debug_gspo12 (apiche's group workspace)
Tags: debug_gspo12/finetune
State: Crashed
Start time: September 27th, 2025 11:52:56 PM
Runtime: 2h 48m 34s
Tracked hours: 45m 48s
Run path: apiche/pipeline-rl/debug_gspo12_finetune
OS: Linux-5.15.0-1067-nvidia-x86_64-with-glibc2.39
Python version: CPython 3.11.11
Git repository: git clone git@github.com:ServiceNow/pipelinerl.git
Git state: git checkout -b "debug_gspo12/finetune" 9e31adf0b44882c5c296b7a083de7c1e82353394
Command:
pipelinerl/entrypoints/run_finetune.py \
  --config-dir results/debug_gspo12/conf \
  --config-name exp_config \
  output_dir=results/debug_gspo12 \
  hydra.run.dir=results/debug_gspo12/finetune \
  +me.weight_update_group_init_method=tcp://localhost:9000 \
  +me.weight_update_group_world_size=5 \
  +me.llm_urls=http://localhost:8080+http://localhost:8081+http://localhost:8082+http://localhost:8083
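The `+me.llm_urls` override above packs four inference endpoints into a single Hydra value. A minimal sketch of decoding such a value into a list of URLs follows; the "+"-as-separator convention is an assumption inferred from the override string, not confirmed pipelinerl behavior:

```python
# Hypothetical decoder for the plus-joined llm_urls override value.
# The "+" separator is inferred from the command line above.
llm_urls = (
    "http://localhost:8080+http://localhost:8081"
    "+http://localhost:8082+http://localhost:8083"
)
urls = llm_urls.split("+")
print(urls)  # four endpoints, ports 8080 through 8083
```

A plus-joined string avoids shell quoting problems that a comma- or space-separated list would raise inside a single `key=value` Hydra override.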
System Hardware

| Component | Value |
|---|---|
| CPU count | 112 |
| Logical CPU count | 224 |
| GPU count | 8 |
| GPU type | NVIDIA H100 80GB HBM3 |

W&B CLI Version: 0.19.11
Group: debug_gspo12

Config
The run config has 186 keys, but the parameter names were collapsed in this export, leaving only unlabeled values. Recognizable values include:

- "Qwen/Qwen2.5-0.5B" (the base model)
- "flash_attention_2"
- "deepspeed_stage3_bf16" with DistributedType.DEEPSPEED; the DeepSpeedPlugin repr shows zero_stage=3, gradient_accumulation_steps=1, gradient_clipping='auto', no optimizer or parameter offload, zero3_init_flag=True, zero3_save_16bit_model=True
- "nccl"
- "pipelinerl.domains.math.generate_math_rollout", "pipelinerl.domains.math.load_datasets", "pipelinerl.domains.math.MathEnvironment"
- the prompt string "Please reason step by step, and put your final answer within \boxed{}." and the template "{task}"
- "tapeagents.finetune.eval.dummy_eval_callback"
- a TorchDynamoPlugin with backend=DynamoBackend.NO (torch.compile disabled)
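The "deepspeed_stage3_bf16" value and the DeepSpeedPlugin repr above (ZeRO stage 3, no CPU/NVMe offload, 16-bit model saving) imply a DeepSpeed configuration along these lines. This is a hedged reconstruction using standard DeepSpeed JSON keys, not the run's actual config file:

```python
# Sketch of the DeepSpeed config implied by the plugin repr; the exact
# file used by this run is not present in the export.
ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "none"},
        "offload_param": {"device": "none"},
        # corresponds to zero3_save_16bit_model=True in the plugin repr
        "stage3_gather_16bit_weights_on_model_save": True,
    },
    "gradient_accumulation_steps": 1,
    "gradient_clipping": "auto",
}
print(ds_config["zero_optimization"]["stage"])
```

Stage 3 shards parameters, gradients, and optimizer states across the 8 GPUs listed under System Hardware, which is the usual choice when model plus optimizer state would not fit replicated on each device.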
Summary
The run summary has 65 metrics, but the metric names were collapsed in this export, leaving only unlabeled values that cannot be interpreted.
Artifact Outputs
This run produced 1 artifact as output; its type, name, and consumer count did not load in this export.