
Assenmacher-mat's group workspace

roberta_fulltraining_fnc1

roberta_fulltraining_fnc1#5

State: Finished
Start time: February 20th, 2023 1:56:59 AM
Runtime: 1h 52m 39s
Tracked hours: 1h 52m 33s
Run path: dal-nlp/active-glae_old/cjncj635
OS: Linux-4.15.0-202-generic-x86_64-with-glibc2.27
Python version: 3.9.13
Command:
/mnt/stud/home/mwirth/projekts/dal-toolbox/experiments/active_learning/al_txt.py model=roberta dataset=fnc1 output_dir=/mnt/work/glae/glae-results/fnc1/roberta/fulltraining/seed5 random_seed=5 al_strategy=fulltraining al_cycle.n_init=0 al_cycle.acq_size=0 al_cycle.n_acq=0 wandb.group=roberta_fulltraining_fnc1
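The command above launches al_txt.py with key=value overrides and routes the run into the W&B group roberta_fulltraining_fnc1. As a rough, hedged sketch (not the actual dal-toolbox code), a run configured this way could be registered in W&B roughly as follows; the config keys simply mirror the overrides shown above, and entity/project are taken from the run path:

```python
# Hedged sketch only: the real logic of al_txt.py is not shown in this export.
import wandb

config = {
    "model": "roberta",
    "dataset": "fnc1",
    "random_seed": 5,
    "al_strategy": "fulltraining",
    "al_cycle": {"n_init": 0, "acq_size": 0, "n_acq": 0},
    "output_dir": "/mnt/work/glae/glae-results/fnc1/roberta/fulltraining/seed5",
}

run = wandb.init(
    entity="dal-nlp",                    # from run path dal-nlp/active-glae_old/cjncj635
    project="active-glae_old",
    group="roberta_fulltraining_fnc1",   # groups all seeds of this setting together
    config=config,
)

# ... training loop would log metrics here, e.g. wandb.log({"train_loss": loss}) ...

run.finish()
```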
System Hardware
CPU count: 48
Logical CPU count: 96
GPU count: 1
GPU type: Tesla V100-SXM2-32GB
W&B CLI Version: 0.13.9
Config

Config parameters are your model's inputs.

The exported config tree lists 13 top-level keys, but the key names were collapsed in this export. Among the visible values are the active-learning cycle overrides from the command above (n_init=0, acq_size=0, n_acq=0) alongside a boolean flag set to false, the strategy "fulltraining", the datasets cache directory "$HOME/.cache/huggingface/datasets", the device "cuda", and the output directory "/mnt/work/glae/glae-results/fnc1/roberta/fulltraining/seed5". The remaining visible values are the numbers 15, 25, 25, 5, and 128, whose keys are not shown.
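Since the key names did not survive this export, the full configuration can be read back through the W&B public API using the run path listed above. A minimal sketch, assuming read access to the dal-nlp/active-glae_old project:

```python
import wandb

api = wandb.Api()
run = api.run("dal-nlp/active-glae_old/cjncj635")  # run path from the overview above

# run.config holds the 13 top-level keys; keys starting with "_" are
# W&B-internal and usually filtered out.
config = {k: v for k, v in run.config.items() if not k.startswith("_")}
for key, value in config.items():
    print(key, "=", value)
```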
Summary

Summary metrics are your model's outputs.

The run logged 7 summary metrics; their names were collapsed in this export, but the recorded values are 97.812, 0.877, 0.866, 0.9611329222274168, 99.01960784313724, 0.9515012833538752, and 0.04864222484840346.
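The names of these seven metrics are likewise collapsed here. One way to recover them, and to compare this run against the other seeds in the group, is to export config and summary for every run in roberta_fulltraining_fnc1 via the public API; a sketch following W&B's standard export pattern:

```python
import pandas as pd
import wandb

api = wandb.Api()
runs = api.runs(
    "dal-nlp/active-glae_old",
    filters={"group": "roberta_fulltraining_fnc1"},  # all runs in this group
)

rows = []
for run in runs:
    row = {"name": run.name, "state": run.state}
    row.update({k: v for k, v in run.config.items() if not k.startswith("_")})
    row.update(run.summary._json_dict)  # summary metrics, with their names
    rows.append(row)

df = pd.DataFrame(rows)
print(df.head())
```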
Artifact Outputs

This run produced these artifacts as outputs. Total: 1.

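The single output artifact can be inspected and downloaded through the same API run object; a hedged sketch (the artifact's name and type are not visible in this export):

```python
import wandb

api = wandb.Api()
run = api.run("dal-nlp/active-glae_old/cjncj635")

# The overview reports one output artifact; enumerate whatever was logged.
for artifact in run.logged_artifacts():
    print(artifact.name, artifact.type)
    local_dir = artifact.download()  # download the artifact contents locally
    print("downloaded to", local_dir)
```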