Igoro's group workspace
20B_pretrain
Tags
new-8-0
Notes
Tags
main
Author
State
Finished
Start time
January 15th, 2022 7:11:44 PM
Runtime
9d 10h 10m 31s
Tracked hours
-
Run path
eleutherai/gpt-thicc/2j1vft05
OS
Linux-5.11.0-34-generic-x86_64-with-glibc2.29
Python version
3.8.10
Git repository
git clone https://github.com/EleutherAI/gpt-neox.git
Git state
git checkout -b "new-8-0" dd23966db7828948db5f1c60739d58a970add0b1
Command
train.py --local_rank=0 --deepspeed_config "{\"train_batch_size\": 1536, \"train_micro_batch_size_per_gpu\": 4, \"gradient_accumulation_steps\": 32, \"optimizer\": {\"type\": \"Adam\", \"params\": {\"lr\": 9.7e-05, \"betas\": [0.9, 0.95], \"eps\": 1e-08}}, \"fp16\": {\"fp16\": true, \"enabled\": true, \"loss_scale\": 0, \"loss_scale_window\": 1000, \"initial_scale_power\": 12, \"hysteresis\": 2, \"min_loss_scale\": 1}, \"gradient_clipping\": 1.0, \"zero_optimization\": {\"stage\": 1, \"allgather_partitions\": true, \"allgather_bucket_size\": 1260000000, \"overlap_comm\": true, \"reduce_scatter\": true, \"reduce_bucket_size\": 1260000000, \"contiguous_gradients\": true, \"cpu_offload\": false}, \"steps_per_print\": 2}" --megatron_config "{\"train_batch_size\": 1536, \"train_micro_batch_size_per_gpu\": 4, \"gradient_accumulation_steps\": 32, \"optimizer\": {\"type\": \"Adam\", \"params\": {\"lr\": 9.7e-05, \"betas\": [0.9, 0.95], \"eps\": 1e-08}}, \"fp16\": {\"fp16\": true, \"enabled\": true, \"loss_scale\": 0, \"loss_scale_window\": 1000, \"initial_scale_power\": 12, \"hysteresis\": 2, \"min_loss_scale\": 1}, \"gradient_clipping\": 1.0, \"zero_optimization\": {\"stage\": 1, \"allgather_partitions\": true, \"allgather_bucket_size\": 1260000000, \"overlap_comm\": true, \"reduce_scatter\": true, \"reduce_bucket_size\": 1260000000, \"contiguous_gradients\": true, \"cpu_offload\": false}, \"steps_per_print\": 2, \"precision\": \"fp16\", \"num_layers\": 44, \"hidden_size\": 6144, \"num_attention_heads\": 64, \"seq_length\": 2048, \"max_position_embeddings\": 2048, \"pos_emb\": \"rotary\", \"no_weight_tying\": true, \"attention_config\": [\"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\", \"global\"], \"sparsity_config\": {}, \"scaled_upper_triang_masked_softmax_fusion\": true, \"bias_gelu_fusion\": true, \"rotary_pct\": 0.25, \"init_method\": \"small_init\", \"output_layer_init_method\": \"wang_init\", \"gpt_j_residual\": true, \"output_layer_parallelism\": \"column\", \"lr_decay_style\": \"cosine\", \"lr_decay_iters\": 150000, \"min_lr\": 9.7e-06, \"optimizer_type\": \"Adam\", \"zero_stage\": 1, \"zero_reduce_scatter\": true, \"zero_contiguous_gradients\": true, \"zero_reduce_bucket_size\": 1260000000, \"zero_allgather_bucket_size\": 1260000000, \"lr\": 9.7e-05, \"tokenizer_type\": \"HFTokenizer\", \"data_path\": \"/mnt/ssd-1/data/pile_20B_tokenizer/pile_20B_tokenizer_text_document\", \"data_impl\": \"mmap\", \"save\": \"/mnt/ssd-1/20B_checkpoints\", \"load\": \"/mnt/ssd-1/20B_checkpoints\", \"save_interval\": 500, \"batch_size\": 4, \"train_iters\": 150000, \"eval_iters\": 10, \"split\": \"995,4,1\", \"vocab_file\": \"/mnt/ssd-1/data/20B_tokenizer.json\", \"attention_dropout\": 0, \"hidden_dropout\": 0, \"checkpoint_activations\": true, \"synchronize_each_layer\": true, \"gas\": 32, \"clip_grad\": 1.0, \"dynamic_loss_scale\": true, \"pipe_parallel_size\": 4, \"model_parallel_size\": 2, \"is_pipe_parallel\": true, \"wandb_group\": \"20B_3vgschbn\", \"wandb_team\": \"eleutherai\", \"wandb_project\": \"gpt-thicc\", \"log_dir\": 
\"/mnt/ssd-1/logs\", \"tensorboard_dir\": \"/mnt/ssd-1/tensorboard\", \"log_interval\": 2, \"user_script\": \"train.py\", \"global_num_gpus\": 96}"
System Hardware
| CPU count | 128 |
| GPU count | 8 |
| GPU type | NVIDIA A100-SXM4-40GB |
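The hardware figures above are per machine; together with global_num_gpus = 96 from the command, they imply a 12-node cluster with 8x A100 each. As a rough cross-check that the model matches the "20B" in the group name, the usual 12·L·h² transformer estimate with num_layers = 44 and hidden_size = 6144 (plus untied input/output embeddings over an assumed ~50k-token vocabulary, which is not listed on this page) lands near 20B parameters:

```python
# Per-node hardware vs. the 96-GPU total from the command above.
gpus_per_node = 8
global_num_gpus = 96
num_nodes = global_num_gpus // gpus_per_node             # 12 nodes of 8x A100-40GB

# Back-of-the-envelope parameter count (12 * L * h^2 estimate for the transformer blocks;
# the ~50k vocabulary size is an assumption, it is not shown on this page).
num_layers, hidden_size, vocab_size = 44, 6144, 50_000
transformer_params = 12 * num_layers * hidden_size ** 2  # ~19.9e9
embedding_params = 2 * vocab_size * hidden_size          # untied input + output embeddings
print(num_nodes, f"{(transformer_params + embedding_params) / 1e9:.1f}B")  # 12 20.5B
```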
W&B CLI Version
0.10.28
Group
20B_pretrain
Config
Config parameters are your model's inputs.
180 keys (shown collapsed; the training configuration is also visible in the Command above).
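Since the expanded key/value table is not reproduced here, the full configuration can be retrieved programmatically via the run path shown above. A minimal sketch using the public wandb API (assumes read access to the eleutherai/gpt-thicc project):

```python
import wandb

# Look the run up by its path, entity/project/run_id (listed under "Run path" above).
api = wandb.Api()
run = api.run("eleutherai/gpt-thicc/2j1vft05")

# run.config holds the logged config keys (model shape, optimizer, data paths, ...).
for key, value in sorted(run.config.items()):
    print(f"{key}: {value}")
```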
Summary
Summary metrics are your model's outputs.
No summary metrics saved for this run.
Check the summary metrics documentation for more information.