
Eleutherai-oslo's group workspace

2023-04-20 02:09:46
Traceback (most recent call last):
  File "/fsx/polyglot.train/gpt-neox/train.py", line 27, in <module>
    pretrain(neox_args=neox_args)
  File "/fsx/polyglot.train/gpt-neox/megatron/training.py", line 104, in pretrain
    model, optimizer, lr_scheduler = setup_model_and_optimizer(
  File "/fsx/polyglot.train/gpt-neox/megatron/training.py", line 440, in setup_model_and_optimizer
    optimizer, param_groups = get_optimizer(model=model, neox_args=neox_args)
  File "/fsx/polyglot.train/gpt-neox/megatron/training.py", line 382, in get_optimizer
    optimizer = adam_optimizer(
  File "/fsx/kevin.ai/Anaconda/envs/polyglot/lib/python3.9/site-packages/apex/optimizers/fused_adam.py", line 80, in __init__
    raise RuntimeError('apex.optimizers.FusedAdam requires cuda extensions')
RuntimeError: apex.optimizers.FusedAdam requires cuda extensions
[2023-04-20 02:09:43,905] [INFO] [distributed.py:46:init_distributed] Initializing torch distributed with backend: nccl
make: Entering directory '/fsx/polyglot.train/gpt-neox/megatron/data'
make: Nothing to be done for 'default'.
make: Leaving directory '/fsx/polyglot.train/gpt-neox/megatron/data'
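The RuntimeError in the traceback is raised by apex when its compiled CUDA extensions are missing, which typically means apex was installed Python-only rather than built with `--cpp_ext --cuda_ext`. A minimal sketch of a pre-flight check, assuming `amp_C` is the compiled extension module that FusedAdam depends on (true for recent apex releases, but verify against the installed version):

```python
import importlib.util


def apex_fused_kernels_available() -> bool:
    """Report whether apex's compiled CUDA extension appears importable.

    apex.optimizers.FusedAdam raises
    RuntimeError('apex.optimizers.FusedAdam requires cuda extensions')
    at construction time when the compiled kernels are absent, as in the
    traceback above. `amp_C` is an assumption about the extension's
    module name in the installed apex version.
    """
    return importlib.util.find_spec("amp_C") is not None


if __name__ == "__main__":
    if not apex_fused_kernels_available():
        print("apex appears to lack its CUDA extensions; "
              "rebuild it from source with --cpp_ext --cuda_ext")
```

Running such a check before launching a multi-node job can fail fast instead of crashing inside `setup_model_and_optimizer`. Rebuilding apex from source with its C++/CUDA extensions enabled (per the apex README) is the usual fix for this error.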