Kastan's group workspace
Aug-05__11:05
Tags
q_allFP32_gpt_8B_PP4_TP4_25d
Notes
Tags
Aug-05__11:05
BATCH_SIZE=8
NUM_EPOCHS=3
NUM_MICRO_BATCHES=8
SLURM=513345
TP=4
WORLD_SIZE=32
Author
State
Failed
Start time
August 5th, 2022 4:05:39 PM
Runtime
23s
Tracked hours
15s
Run path
kastan/LLM-Distributed-Quantization/7bcoa2uv
OS
Linux-4.18.0-305.49.1.el8_4.x86_64-x86_64-with-glibc2.28
Python version
3.9.12
Command
/u/kastanday/LLM-Distributed-Quantization/benchmarks/gpt/v2_train.py --config /u/kastanday/LLM-Distributed-Quantization/benchmarks/gpt/configs/q_allFP32_gpt_8B_PP4_TP4_25d.py --host gpub007 --port 29500 --world_size 32 --rank 21
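The Command field above shows the launch line for rank 21 only; with WORLD_SIZE=32 there would be one such process per rank. As a hedged illustration (not the author's actual launcher — the SLURM=513345 tag suggests the real ranks were started by Slurm), a helper that rebuilds the same per-rank argument vector might look like:

```python
# Hypothetical per-rank launch-line builder, reconstructed from the Command
# field above. The real job was presumably dispatched by Slurm (SLURM=513345),
# not by a loop like this.
REPO = "/u/kastanday/LLM-Distributed-Quantization"
SCRIPT = f"{REPO}/benchmarks/gpt/v2_train.py"
CONFIG = f"{REPO}/benchmarks/gpt/configs/q_allFP32_gpt_8B_PP4_TP4_25d.py"
HOST, PORT, WORLD_SIZE = "gpub007", 29500, 32

def rank_command(rank: int) -> list[str]:
    """Argument vector for one training process, mirroring the Command field."""
    return [
        "python", SCRIPT,
        "--config", CONFIG,
        "--host", HOST,
        "--port", str(PORT),
        "--world_size", str(WORLD_SIZE),
        "--rank", str(rank),
    ]
```

The Command field recorded for this run corresponds to `rank_command(21)`.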
System Hardware
| CPU count | 64 |
| GPU count | 4 |
| GPU type | NVIDIA A40 |
W&B CLI Version
0.13.0
Group
Aug-05__11:05
Config
- {} 23 keys
- 8
- 1
- "/u/kastanday/LLM-Distributed-Quantization/datasets/small-gpt-dataset.json"
- {} 1 key
- "AMP_TYPE.NAIVE"
- 4
- 0.00015
- "./quant_gpt2_2.5d_tp4_bs8_lr0.00015/"
- {} 1 key
- "titans.loss.lm_loss.gpt_lmloss.GPTLMLoss"
- {} 7 keys
- {} 4 keys
- "torch.float32"
- "torch.float32"
- "torch.bfloat16"
- "torch.float32"
- 3
- 8
- {} 2 keys
- 0.00015
- 0.01
- {} 2 keys
- 4
- {} 3 keys
- 1
- "2.5d"
- 4
- "titans.model.quant_gpt.quant_gpt.quant_gpt2_8B"
- "titans.model.quant_gpt.quant_gpt.quant_gpt2_xl"
- 1,024
- "2.5d"
- 4
- 32
- 50,304
- 1
- 0.01
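Reading the run name q_allFP32_gpt_8B_PP4_TP4_25d together with the WORLD_SIZE=32 and TP=4 tags, the layout is presumably pipeline=4 × tensor=4, which leaves a data-parallel degree of 2; with 4 GPUs per node (System Hardware table), that means 8 nodes. A small sanity-check sketch — the decomposition is an inference from the run name, not confirmed by the collapsed config export:

```python
# Assumed parallel layout for this run, inferred from the run name
# ("PP4", "TP4") and the run tags; not confirmed by the config export.
WORLD_SIZE = 32        # WORLD_SIZE=32 tag
PIPELINE_PARALLEL = 4  # "PP4" in the run name
TENSOR_PARALLEL = 4    # "TP4" in the run name / TP=4 tag
GPUS_PER_NODE = 4      # System Hardware table

# Whatever remains after pipeline and tensor parallelism is data parallelism.
data_parallel = WORLD_SIZE // (PIPELINE_PARALLEL * TENSOR_PARALLEL)
nodes = WORLD_SIZE // GPUS_PER_NODE

print(data_parallel, nodes)  # 2 8
```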