Apiche's group workspace
no_quant_vllm_1
Tags
no_quant_vllm_1/finetune
Notes
Author
State
Crashed
Start time
September 19th, 2025 9:24:49 PM
Runtime
2m 1s
Tracked hours
1m 31s
Run path
apiche/pipeline-rl/no_quant_vllm_1_finetune
OS
Linux-5.15.0-1067-nvidia-x86_64-with-glibc2.39
Python version
CPython 3.11.11
Git repository
git clone git@github.com:ServiceNow/pipelinerl.git
Git state
git checkout -b "no_quant_vllm_1/finetune" 88828596361fe13ad27b8fb8cc6cbb58dda41ce0
Command
pipelinerl/entrypoints/run_finetune.py --config-dir results/no_quant_vllm_1/conf --config-name exp_config output_dir=results/no_quant_vllm_1 hydra.run.dir=results/no_quant_vllm_1/finetune +me.weight_update_group_init_method=tcp://localhost:9000 +me.weight_update_group_world_size=3 +me.llm_urls=http://localhost:8080+http://localhost:8081
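The entrypoint is launched with Hydra-style flags (--config-dir, --config-name, key=value overrides, and a leading + for keys added at launch time). Below is a minimal sketch of composing the same exp_config offline; it assumes the results/no_quant_vllm_1/conf directory from the command above is present locally and Hydra >= 1.2 is installed.

```python
import os

from hydra import compose, initialize_config_dir

# Minimal sketch: compose the exp_config this run was launched with.
# Assumes results/no_quant_vllm_1/conf exists locally and Hydra >= 1.2.
config_dir = os.path.abspath("results/no_quant_vllm_1/conf")
with initialize_config_dir(config_dir=config_dir, version_base=None):
    cfg = compose(
        config_name="exp_config",
        # A subset of the overrides from the command above.
        overrides=["output_dir=results/no_quant_vllm_1"],
    )
print(cfg.output_dir)
```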
System Hardware
| Hardware | Value |
| --- | --- |
| CPU count | 112 |
| Logical CPU count | 224 |
| GPU count | 4 |
| GPU type | NVIDIA H100 80GB HBM3 |
W&B CLI Version
0.19.11
Group
no_quant_vllm_1
Config
Config parameters are your model's inputs. The exported tree shows the 186-key config collapsed to values only, so most key names are not recoverable here. Recognizable values include: base model /mnt/llmd/base_models/Qwen2.5-0.5B, accelerate profile deepspeed_stage3_bf16 (DeepSpeed ZeRO stage 3, bf16, no optimizer or parameter offload), attention implementation flash_attention_2, distributed backend nccl, rollout generator pipelinerl.domains.math.generate_math_rollout, dataset loader pipelinerl.domains.math.load_datasets, environment class pipelinerl.domains.math.MathEnvironment, eval callback tapeagents.finetune.eval.dummy_eval_callback, system prompt "Please reason step by step, and put your final answer within \boxed{}.", and task template "{task}".
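Because the key names are collapsed in this export, the full 186-key config is easiest to read back through the public W&B API using the Run path field above. A minimal sketch, assuming an API key with access to the apiche/pipeline-rl project:

```python
import wandb

# Minimal sketch: fetch the full config (all 186 keys) for this run.
# The run path comes from the "Run path" field above.
api = wandb.Api()
run = api.run("apiche/pipeline-rl/no_quant_vllm_1_finetune")
for key, value in sorted(run.config.items()):
    print(f"{key}: {value}")
```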
Summary
Summary metrics are your model's outputs. The exported tree shows the 69-key summary collapsed to values only; the metric names are not recoverable here.
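The 69 summary metrics can be read back by name in the same way, along with the per-step history. A minimal sketch, assuming the same run path and API access as above:

```python
import wandb

# Minimal sketch: print final summary values and list the logged metric columns.
api = wandb.Api()
run = api.run("apiche/pipeline-rl/no_quant_vllm_1_finetune")
for key in sorted(run.summary.keys()):  # summary behaves like a dict of metric -> final value
    print(key, run.summary[key])
history = run.history()  # pandas DataFrame of metrics logged over training steps
print(history.columns.tolist())
```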
Artifact Outputs
This run produced 1 artifact as output; its type, name, and consumer count were not captured in this export.
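The output artifact can be listed through the same API instead of the UI table. A minimal sketch, assuming the same run path and API access as above:

```python
import wandb

# Minimal sketch: list the artifacts this run logged as outputs.
api = wandb.Api()
run = api.run("apiche/pipeline-rl/no_quant_vllm_1_finetune")
for artifact in run.logged_artifacts():
    print(artifact.type, artifact.name)
```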