[2023-04-20 01:57:34,812] [INFO] [distributed.py:46:init_distributed] Initializing torch distributed with backend: nccl
make: Entering directory '/fsx/polyglot.train/gpt-neox/megatron/data'
make: Nothing to be done for 'default'.
make: Leaving directory '/fsx/polyglot.train/gpt-neox/megatron/data'
Traceback (most recent call last):
  File "/fsx/polyglot.train/gpt-neox/train.py", line 27, in <module>
    pretrain(neox_args=neox_args)
  File "/fsx/polyglot.train/gpt-neox/megatron/training.py", line 104, in pretrain
    model, optimizer, lr_scheduler = setup_model_and_optimizer(
  File "/fsx/polyglot.train/gpt-neox/megatron/training.py", line 440, in setup_model_and_optimizer
    optimizer, param_groups = get_optimizer(model=model, neox_args=neox_args)
  File "/fsx/polyglot.train/gpt-neox/megatron/training.py", line 382, in get_optimizer
    optimizer = adam_optimizer(
  File "/fsx/kevin.ai/Anaconda/envs/polyglot/lib/python3.9/site-packages/apex/optimizers/fused_adam.py", line 80, in __init__
    raise RuntimeError('apex.optimizers.FusedAdam requires cuda extensions')
RuntimeError: apex.optimizers.FusedAdam requires cuda extensions
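
The failure happens inside apex's FusedAdam constructor: GPT-NeoX's get_optimizer asks for the fused Adam implementation, which only works when apex was built from source with its compiled CUDA extensions. An apex installed without the --cpp_ext and --cuda_ext build options is a Python-only install, and the constructor raises exactly this RuntimeError. A minimal diagnostic sketch, assuming the standard apex layout (the fused optimizers gate on multi_tensor_applier.available, which is True only when the compiled amp_C extension can be imported):

# Check whether apex was built with its CUDA extensions. FusedAdam's
# __init__ takes the RuntimeError branch above when this flag is False.
from apex.multi_tensor_apply import multi_tensor_applier

if multi_tensor_applier.available:
    # The compiled amp_C extension imported successfully.
    print("apex CUDA extensions present; FusedAdam should construct")
else:
    # Python-only apex install: rebuild from source with the CUDA extensions.
    print("apex CUDA extensions missing; rebuild apex with --cpp_ext --cuda_ext")

The usual remedy is to rebuild apex from a cloned checkout with a CUDA toolkit matching the one PyTorch was compiled against; around the time of this run the apex README's source build looked roughly like pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./ run from the apex repository root.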