Urakiny's workspace
Runs: 10

| State | Notes | User | Runtime | Sweep | eval_loss | train_loss | architecture | batch_size | dataset | epochs | learning_rate | step | traing_loss |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Finished | Prompt tuning for causal language modeling | urakiny | 47m 41s | - | 13.88324 | - | Transformers | 8 | twitter_complaints | 50 | 0.03 | 6 | 0.45261 |
| Finished | Prompt tuning for causal language modeling | urakiny | 47m 57s | - | 4824.45996 | - | Transformers | 8 | twitter_complaints | 50 | 0.03 | 6 | 0.25174 |
| Finished | Prompt tuning for causal language modeling | urakiny | 47m 42s | - | 5850.42236 | - | Transformers | 8 | twitter_complaints | 50 | 0.03 | 424 | 0.35611 |
| Finished | Prompt tuning for causal language modeling | urakiny | 5m 27s | - | 3743.28613 | 39.16218 | Transformers | 8 | twitter_complaints | 5 | 0.03 | - | - |
| Finished | Prompt tuning for causal language modeling | urakiny | 27m 42s | - | 3924.56665 | 77.18574 | - | - | - | - | - | - | - |
| Finished | Prompt tuning for causal language modeling | urakiny | 2m 46s | - | 4094.93481 | 269.62692 | - | - | - | - | - | - | - |
| Finished | Prompt tuning for causal language modeling | urakiny | 9m 16s | - | - | - | - | - | - | - | - | - | - |
| Finished | Prompt tuning for causal language modeling | urakiny | 10m 55s | - | - | - | - | - | - | - | - | - | - |
| Finished | Prompt tuning for causal language modeling | urakiny | 6m 19s | - | 818.60529 | 289.93845 | - | - | - | - | - | - | - |
| Finished | Prompt tuning for causal language modeling | urakiny | 55s | - | - | - | - | - | - | - | - | - | - |
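The fully-logged runs share one hyperparameter set (architecture Transformers, batch size 8, 50 epochs, learning rate 0.03, dataset twitter_complaints). A minimal sketch of how such a run's configuration could be logged with the `wandb` client — the project name and training-loop variables are hypothetical placeholders, and the `wandb.init` call is shown only in comments so the sketch runs offline:

```python
# Hyperparameters taken directly from the runs table above;
# run and project names were not captured, so none are assumed here.
config = {
    "architecture": "Transformers",
    "batch_size": 8,
    "dataset": "twitter_complaints",
    "epochs": 50,
    "learning_rate": 0.03,
}

# With the wandb client installed, an equivalent run could be started as:
#
#   import wandb
#   run = wandb.init(
#       notes="Prompt tuning for causal language modeling",
#       config=config,
#   )
#   for step in range(num_steps):          # num_steps: hypothetical
#       ...                                 # training step
#       run.log({"train_loss": train_loss, "eval_loss": eval_loss})
#   run.finish()

print(config)
```

Logging the hyperparameters through `config` is what populates the batch_size, epochs, and learning_rate columns shown in the table; runs that skipped it appear as the all-dash rows.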