Arampacha's workspace — Runs (356 in workspace, 3 visualized)
| Run | accuracy | valid_loss |
|-----|----------|------------|
| 1   | 0.93927  | 0.16645    |
| 2   | 0.94059  | 0.16807    |
| 3   | 0.94217  | 0.17787    |
| 4   | 0.94165  | 0.19684    |
| 5   | 0.94128  | 0.19816    |
| 6   | 0.96028  | 0.11572    |
| 7   | 0.94164  | 0.1565     |
| 8   | 0.93267  | 0.17693    |
| 9   | 0.93692  | 0.168      |
| 10  | 0.93965  | 0.16578    |
| 11  | 0.94139  | 0.16697    |
| 12  | 0.94193  | 0.17853    |
| 13  | 0.95764  | 0.12217    |
| 14  | —        | —          |
| 15  | —        | —          |
| 16  | 0.89951  | 0.31185    |
| 17  | 0.90101  | 0.26559    |
| 18  | 0.89684  | 0.32426    |
| 19  | 0.89331  | 0.32395    |
| 20  | 0.94272  | 0.15363    |
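The metric columns above are easier to compare programmatically. A minimal sketch with the accuracy/valid_loss values hardcoded from the table (`None` marks the two runs that logged no metrics):

```python
# accuracy / valid_loss pairs, in run order, copied from the table above.
runs = [
    (0.93927, 0.16645), (0.94059, 0.16807), (0.94217, 0.17787),
    (0.94165, 0.19684), (0.94128, 0.19816), (0.96028, 0.11572),
    (0.94164, 0.1565),  (0.93267, 0.17693), (0.93692, 0.168),
    (0.93965, 0.16578), (0.94139, 0.16697), (0.94193, 0.17853),
    (0.95764, 0.12217), (None, None),       (None, None),
    (0.89951, 0.31185), (0.90101, 0.26559), (0.89684, 0.32426),
    (0.89331, 0.32395), (0.94272, 0.15363),
]

# Keep only runs that actually logged metrics, preserving 1-based run numbers.
logged = [(i + 1, acc, vl) for i, (acc, vl) in enumerate(runs) if acc is not None]

best_run, best_acc, best_loss = max(logged, key=lambda r: r[1])
mean_acc = sum(r[1] for r in logged) / len(logged)

print(f"best: run {best_run} (accuracy={best_acc}, valid_loss={best_loss})")
print(f"mean accuracy over {len(logged)} runs: {mean_acc:.5f}")
```

Run 6 (the DeBERTa run) comes out on top on both metrics.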
Columns: State, Notes, User, Tags, Created, Runtime, Sweep, plus the logged config fields below. Notes, Tags, and Created are empty for every run and are omitted. Runs 14 and 15 logged no config at all (every config cell is —).

Per-run fields:

| Run | State | User | Runtime | ALUMCallback.alpha | ALUMCallback.m | GradientAccumulation.n_acc | batch size | batch per epoch | Learner._name |
|-----|-------|------|---------|--------------------|----------------|----------------------------|------------|-----------------|---------------|
| 1  | Finished | morgan    | 1h 54m 59s    | 1       | Embeddings (30522×768)        | 8  | 8        | 3125        | 3× fastai.learner.Learner |
| 2  | Finished | morgan    | 1h 54m 59s    | 1       | Embeddings (30522×768)        | 8  | 8        | 3125        | 3× fastai.learner.Learner |
| 3  | Finished | morgan    | 1h 55m 4s     | 1       | Embeddings (30522×768)        | 8  | 8        | 3125        | 3× fastai.learner.Learner |
| 4  | Finished | morgan    | 1h 54m 57s    | 1       | Embeddings (30522×768)        | 8  | 8        | 3125        | 3× fastai.learner.Learner |
| 5  | Finished | morgan    | 1h 54m 52s    | 1       | Embeddings (30522×768)        | 8  | 8        | 3125        | 3× fastai.learner.Learner |
| 6  | Finished | arampacha | 4h 53m 36s    | 0.7     | DebertaEmbeddings (50265×768) | 16 | 4        | 6250        | 1× fasthugs.learner.TransLearner |
| 7  | Finished | morgan    | 1h 54m 43s    | 1       | Embeddings (30522×768)        | 8  | 8        | 3125        | 3× fastai.learner.Learner |
| 8  | Finished | morgan    | 1h 54m 53s    | 1       | Embeddings (30522×768)        | 8  | 8        | 3125        | 3× fastai.learner.Learner |
| 9  | Finished | morgan    | 1h 54m 54s    | 1       | Embeddings (30522×768)        | 8  | 8        | 3125        | 3× fastai.learner.Learner |
| 10 | Finished | morgan    | 1h 55m 1s     | 1       | Embeddings (30522×768)        | 8  | 8        | 3125        | 3× fastai.learner.Learner |
| 11 | Finished | morgan    | 1h 54m 22s    | 1       | Embeddings (30522×768)        | 8  | 8        | 3125        | 3× fastai.learner.Learner |
| 12 | Finished | morgan    | 1h 54m 5s     | 1       | Embeddings (30522×768)        | 8  | 8        | 3125        | 3× fastai.learner.Learner |
| 13 | Crashed  | arampacha | 6h 36m 54s    | 0.7     | RobertaEmbeddings (50265×768) | —  | 8        | 9118        | 1× fastai.learner.Learner |
| 14 | Finished | arampacha | 8h 17m 20s    | —       | —                             | —  | —        | —           | — |
| 15 | Finished | arampacha | 12h 45m 8s    | —       | —                             | —  | —        | —           | — |
| 16 | Finished | arampacha | 5h 27m 31s    | 0.225   | Embeddings (30522×768)        | —  | 64       | 8583        | 4× fastai.learner.Learner |
| 17 | Crashed  | morgan    | 10d 7m 5s     | 8.44042 | Embeddings (30522×768)        | 8  | 8        | 3125        | 10× fastai.learner.Learner |
| 18 | Finished | arampacha | 23h 17m 48s   | 0.5     | Embeddings (30522×768)        | —  | 42.66667 | 14305.66667 | 3× fastai.learner.Learner |
| 19 | Crashed  | arampacha | 3h 54m 21s    | 1       | Embeddings (30522×768)        | —  | 16       | 34335       | 1× fastai.learner.Learner |
| 20 | Finished | arampacha | 1d 2h 31m 39s | 1       | Embeddings (30522×768)        | 8  | 4        | 6250        | 2× fastai.learner.Learner |

ALUMCallback.m modules as logged:

- Embeddings (30522×768): word_embeddings Embedding(30522, 768, padding_idx=0); position_embeddings Embedding(512, 768); LayerNorm((768,), eps=1e-12, elementwise_affine=True); Dropout(p=0.1, inplace=False)
- DebertaEmbeddings (50265×768): word_embeddings Embedding(50265, 768, padding_idx=0); DebertaLayerNorm(); StableDropout()
- RobertaEmbeddings (50265×768): word_embeddings Embedding(50265, 768, padding_idx=1); position_embeddings Embedding(514, 768, padding_idx=1); token_type_embeddings Embedding(1, 768); LayerNorm((768,), eps=1e-05, elementwise_affine=True); Dropout(p=0.1, inplace=False)

dataset.tfms:

- Runs 1–12, 17, 20: [Pipeline: read_text, Pipeline: parent_label -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}]
- Run 13: [Pipeline: ItemGetter -> read_text, Pipeline: ItemGetter -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}]
- Runs 16, 18, 19: [Pipeline: get_x, Pipeline: ItemGetter -> Categorize -- {'vocab': [0, 1, 2], 'sort': True, 'add_na': False}]

Configuration shared by every run that logged one:

- Sweep: — (no run belongs to a sweep); ALUMCallback.start_iter: —; HF_QstAndAnsModelCallback: —
- Learner.loss_func: FlattenedLoss of CrossEntropyLoss() with axis=-1, flatten=true, floatify=false, is_2d=true; loss_funcs and weights unset
- Learner.lr: 0.001; Learner.moms: 0.91667; Learner.opt_func: fastai.optimizer.RAdam
- Learner.metrics: fastai.metrics.accuracy; Learner.model_dir: models; Learner.path: .
- Learner.train_bn: true; Learner.wd_bn_bias: false
- Callbacks MixedPrecision, ParamScheduler, ProgressCallback, TrainEvalCallback: true
- Recorder: add_time=true, train_metrics=false, valid_metrics=true
- WandbCallback: log=gradients, log_dataset=false, log_model=false, log_preds=false, n_preds=36, reorder=true, seed=12345
- device: cuda; dls.after_item: Pipeline: ToTensor

All config-logging runs except run 6 use the blurr stack: HF_BaseModelCallback=true, Learner.splitter=blurr.modeling.core.hf_splitter, dls.before_batch=Pipeline: HF_BeforeBatchTransform, dls.after_batch=Pipeline: HF_AfterBatchTransform. Run 6 uses fasthugs instead: Learner.splitter=fasthugs.learner.default_splitter, dls.before_batch=Pipeline: TokBatchTransform, dls.after_batch=Pipeline: Undict, with HF_BaseModelCallback unset.
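Runtime values in the table are duration strings like `1h 54m 59s` or `10d 7m 5s`; to rank or sum them they first need to be parsed into seconds. A small sketch (a hypothetical helper, not part of the W&B export itself):

```python
import re

# Seconds per unit for the d/h/m/s components used in the Runtime column.
UNIT_SECONDS = {"d": 86400, "h": 3600, "m": 60, "s": 1}

def runtime_to_seconds(text: str) -> int:
    """Parse a duration like '1d 2h 31m 39s' into total seconds."""
    total = 0
    for value, unit in re.findall(r"(\d+)([dhms])", text):
        total += int(value) * UNIT_SECONDS[unit]
    return total

print(runtime_to_seconds("1h 54m 59s"))    # 6899
print(runtime_to_seconds("1d 2h 31m 39s")) # 95499
```

Missing units (e.g. `10d 7m 5s` has no hours component) simply contribute nothing, so the parser handles every runtime format that appears in the table.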
Showing runs 1–20 of 25.
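Dotted column names such as `Learner.loss_func.axis` arise from flattening a nested config dict into dot-separated keys. A minimal sketch of that flattening (an illustrative helper, not W&B's actual implementation):

```python
def flatten_config(cfg: dict, prefix: str = "") -> dict:
    """Flatten nested dicts into dot-separated keys, like the column names above."""
    flat = {}
    for key, value in cfg.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten_config(value, name))  # recurse into sub-dicts
        else:
            flat[name] = value
    return flat

# Example values taken from the runs table.
config = {
    "Learner": {
        "lr": 0.001,
        "loss_func": {"axis": -1, "flatten": True},
    },
    "batch size": 8,
}
print(flatten_config(config))
# {'Learner.lr': 0.001, 'Learner.loss_func.axis': -1,
#  'Learner.loss_func.flatten': True, 'batch size': 8}
```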