Dchanda's group workspace
k5nu8k69390a-Baseline
Tags
k5nu8k69390a-fold-4
k5nu8k69390a
margin-loss
roberta-base
| Author | dchanda |
| State | Finished |
| Start time | November 9th, 2021 8:07:21 AM |
| Runtime | 34m 44s |
| Tracked hours | 34m 41s |
| Run path | dchanda/Jigsaw/2kocz0r7 |
| OS | Linux-5.10.68+-x86_64-with-debian-buster-sid |
| Python version | 3.7.10 |
| Command | kaggle.ipynb |
System Hardware
| CPU count | 2 |
| GPU count | 1 |
| GPU type | Tesla P100-PCIE-16GB |

| W&B CLI Version | 0.12.6 |
| Job Type | Train |
Config
Config parameters are your model's inputs.
19 keys (key names were collapsed in the export; only the values follow):
- "cuda:0"
- 3
- "k5nu8k69390a-Baseline"
- "k5nu8k69390a"
- 0.0001
- 0.5
- 128
- 0.000001
- "roberta-base"
- 1
- 5
- 1
- "CosineAnnealingLR"
- 2021
- 500
- "PreTrainedTokenizerFast(name_or_path='roberta-base', vocab_size=50265, model_max_len=512, is_fast=True, padding_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'sep_token': '</s>', 'pad_token': '<pad>', 'cls_token': '<s>', 'mask_token': AddedToken("<mask>", rstrip=False, lstrip=True, single_word=False, normalized=False)})"
- 32
- 64
- 0.000001
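The collapsed config can be read back as an ordinary dictionary. A minimal sketch follows; note that the export hides the real 19 key names, so every key below is a hypothetical name chosen for illustration, while the values are the ones shown above (the tokenizer entry is omitted because its full repr already appears in the list):

```python
# All key names here are assumptions; only the values come from the exported run.
config = {
    "device": "cuda:0",
    "epochs": 3,
    "group": "k5nu8k69390a-Baseline",
    "exp_id": "k5nu8k69390a",
    "learning_rate": 1e-4,
    "margin": 0.5,               # plausible margin, given the margin-loss tag
    "max_length": 128,
    "min_lr": 1e-6,
    "model_name": "roberta-base",
    "n_accumulate": 1,           # the three small integers are pure guesses
    "n_fold": 5,
    "fold": 1,
    "scheduler": "CosineAnnealingLR",
    "seed": 2021,
    "T_max": 500,
    "train_batch_size": 32,
    "valid_batch_size": 64,
    "weight_decay": 1e-6,
}
```

Reconstructing a plain dict like this is also the shape `wandb.init(config=...)` expects, so a dictionary in this form could be passed straight back into a new run.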
Summary
Summary metrics are your model's outputs.
3 keys (key names were collapsed in the export; only the values follow):
- 0.34013118115132324
- 0.34680072549096447
- 0.3497176492848758
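The three summary values can be aggregated directly. A quick sketch, with the caveat that the export collapsed the key names, so treating the three numbers as comparable loss values is an assumption:

```python
from statistics import mean

# The three summary values reported by the run; their key names were
# collapsed in the export, so what each one measures is unknown.
summary_values = [
    0.34013118115132324,
    0.34680072549096447,
    0.3497176492848758,
]

avg = mean(summary_values)
print(f"mean of reported summary values: {avg:.4f}")
```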
Artifact Outputs
This run produced these artifacts as outputs. Total: 1.