
Chilli's group workspace

Group: RDE4MmfNH7YyVbiYT2iHxA

Tags

neox-sid-2-3

Notes
-
Author
-
State
Finished
Start time
March 7th, 2021 4:00:00 PM
Runtime
5m 8s
Tracked hours
-
Run path
eleutherai/neox/um97zpjp
OS
Linux-5.4.0-54-generic-x86_64-with-glibc2.29
Python version
3.8.5
Git repository
git clone https://github.com/EleutherAI/gpt-neox.git
Git state
git checkout -b "neox-sid-2-3" f666e5ca48a4c95f1c00419dd8561a27cf99858e
Command
pretrain_gpt2.py --local_rank=3 \
  --num-layers 12 --hidden-size 768 --num-attention-heads 12 \
  --max-position-embeddings 2048 --attention-dropout 0 --hidden-dropout 0 \
  --weight-decay 0 --batch-size 4 --checkpoint-activations \
  --checkpoint-num-layers 1 --train-iters 320000 --log-interval 100 \
  --tensorboard-dir /mnt/ssd-cluster/tensorboard --no-weight-tying \
  --pos-emb none --norm rmsnorm --lr-decay-style cosine \
  --lr-decay-iters 320000 --warmup 0.01 \
  --save /mnt/ssd-cluster/checkpoints --save-interval 10000 \
  --keep-last-n-checkpoints 4 --load /mnt/ssd-cluster/checkpoints \
  --model-parallel-size 1 --pipe-parallel-size 3 \
  --distributed-backend nccl --eval-iters 10 --eval-interval 1000 \
  --data-path /mnt/ssd-cluster/data/enron/enron_text_document \
  --split 949,50,1 --vocab-file /mnt/ssd-cluster/data/gpt2-vocab.json \
  --merge-file /mnt/ssd-cluster/data/gpt2-merges.txt --seq-length 2048 \
  --data-impl mmap --log-dir /mnt/ssd-cluster/logs \
  --partition-activations --synchronize-each-layer \
  --wandb_group RDE4MmfNH7YyVbiYT2iHxA --wandb_team eleutherai \
  --deepspeed --fp16 --gas 8 --zero-stage 0 --zero-reduce-scatter \
  --zero-contiguous-gradients --zero-reduce-bucket-size 500000000 \
  --zero-allgather-bucket-size 500000000 --clip-grad 1.0 --lr 0.0006 \
  --adam-beta1 0.9 --adam-beta2 0.95 --adam-eps 1e-08 --momentum 0.0 \
  --deepspeed_config '{"train_batch_size":192.0,"train_micro_batch_size_per_gpu":4,"gradient_accumulation_steps":8,"optimizer":{"type":"Adam","params":{"lr":0.0006,"max_grad_norm":1.0,"betas":[0.9,0.95]}},"fp16":{"fp16":true,"enabled":true,"loss_scale":0,"loss_scale_window":1000,"hysteresis":2,"min_loss_scale":1},"gradient_clipping":1.0,"zero_optimization":{"stage":0,"allgather_partitions":true,"allgather_bucket_size":500000000,"overlap_comm":true,"reduce_scatter":true,"reduce_bucket_size":500000000,"contiguous_gradients":true,"cpu_offload":false},"steps_per_print":10,"wall_clock_breakdown":true,"deepspeed":true}'
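
The --deepspeed_config payload above, pretty-printed for readability (identical fields and values; only whitespace added):

{
  "train_batch_size": 192.0,
  "train_micro_batch_size_per_gpu": 4,
  "gradient_accumulation_steps": 8,
  "optimizer": {
    "type": "Adam",
    "params": {
      "lr": 0.0006,
      "max_grad_norm": 1.0,
      "betas": [0.9, 0.95]
    }
  },
  "fp16": {
    "fp16": true,
    "enabled": true,
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "gradient_clipping": 1.0,
  "zero_optimization": {
    "stage": 0,
    "allgather_partitions": true,
    "allgather_bucket_size": 500000000,
    "overlap_comm": true,
    "reduce_scatter": true,
    "reduce_bucket_size": 500000000,
    "contiguous_gradients": true,
    "cpu_offload": false
  },
  "steps_per_print": 10,
  "wall_clock_breakdown": true,
  "deepspeed": true
}

Note that "loss_scale": 0 tells DeepSpeed to use dynamic loss scaling for fp16 (with a scale window of 1000, hysteresis 2, and a floor of 1) rather than a fixed scale.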
System Hardware
CPU count
112
GPU count
6
GPU type
A100-PCIE-40GB
W&B CLI Version
0.10.21
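
The Git repository and Git state entries above pin the exact code state and can be combined into a runnable sequence. A minimal reproduction sketch, assuming the standard deepspeed launcher (an assumption: only the per-rank command under Command above was recorded, not the wrapper that launched it):

git clone https://github.com/EleutherAI/gpt-neox.git
cd gpt-neox
git checkout -b "neox-sid-2-3" f666e5ca48a4c95f1c00419dd8561a27cf99858e
# Hypothetical relaunch: the deepspeed launcher injects --local_rank into
# each process itself, so that flag is dropped and the remaining arguments
# are taken verbatim from the Command entry above.
deepspeed pretrain_gpt2.py <remaining arguments from Command>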
Config

Config parameters are your model's inputs.

132 config keys were logged, but the exported tree lost its key names. The surviving values (Adam betas 0.9 and 0.95, eps 1e-08, hidden size 768, data-impl "mmap", distributed backend "nccl", the data path /mnt/ssd-cluster/data/enron/enron_text_document, 500,000,000-element ZeRO buckets, and a copy of the --deepspeed_config JSON) all mirror the command-line flags recorded under Command above.
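
A quick consistency check on the batch settings, using DeepSpeed's standard identity (train_batch_size = micro-batch per GPU × gradient accumulation steps × data-parallel replicas):

192 = 4 × 8 × 6

So the config implies 6 data-parallel replicas. With --pipe-parallel-size 3 and --model-parallel-size 1, that puts the world size at 6 × 3 = 18 GPUs, meaning the 6 A100s listed under System Hardware would be one node's share of a multi-node job (an inference; the node count itself is not recorded on this page).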
Summary

Summary metrics are your model's outputs.

No summary metrics saved for this run.
