Pierrotlc's group workspace
PokeGAN - Finetuning
ethereal-darkness-249
| State | Finished |
| Start time | February 19th, 2022, 1:35:59 AM |
| Runtime | 2h 49m 1s |
| Tracked hours | 2h 47m 25s |
| Run path | `pierrotlc/AnimeStyleGAN/1gvyv7e3` |
| OS | Linux-5.15.15-76051515-generic-x86_64-with-glibc2.10 |
| Python version | 3.8.5 |
| Git repository | `git clone git@github.com:Futurne/AnimeStyleGAN.git` |
| Git state | `git checkout -b "ethereal-darkness-249" c817affb1911f3e8491ce73c126da5cc32ec8baf` |
| Command | `launch_training.py ./smd/` |
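The Git state and command above can be combined into a reproduction script. This is a hedged sketch: it assumes SSH access to the repository, an environment with the project's dependencies installed, and that `./smd/` is the finetuning data directory expected by the training script.

```shell
# Clone the repository and check out the exact commit logged for this run.
git clone git@github.com:Futurne/AnimeStyleGAN.git
cd AnimeStyleGAN
git checkout -b "ethereal-darkness-249" c817affb1911f3e8491ce73c126da5cc32ec8baf

# Launch training with the same argument as the logged command
# (./smd/ is assumed to be the dataset directory used for finetuning).
python launch_training.py ./smd/
```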
System Hardware
| CPU count | 16 |
| GPU count | 1 |
| GPU type | NVIDIA GeForce RTX 3080 Laptop GPU |
| W&B CLI Version | 0.12.9 |
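Given the run path above, the run's logged config and summary can also be fetched programmatically. A minimal sketch using the public `wandb.Api` (assumes a machine that is logged in to W&B and has network access):

```python
import wandb

# Fetch the run by its path: entity/project/run_id.
api = wandb.Api()
run = api.run("pierrotlc/AnimeStyleGAN/1gvyv7e3")

print(run.state)    # "finished" for this run
print(run.config)   # the 33 config parameters logged for this run
print(run.summary)  # the summary metrics logged at the end of the run
```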
Group
PokeGAN - Finetuning

Config
Config parameters are your model's inputs.
- {} 33 keys
- 64
- [] 2 items
- 0.5
- 0.99
- [] 2 items
- 0.5
- 0.8
- "<torch.utils.data.dataloader.DataLoader object at 0x7f7de443d970>"
- "cuda"
- 64
- 32
- 0.3
- 500
- 0.1
- 0.1
- 0.0001
- 0.0005
- [] 0 items
- [] 0 items
- 512
- 12
- 10
- 3
- 5
- 4
- 10
- "Discriminator( (first_conv): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(3, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (blocks): ModuleList( (0): DiscriminatorBlock( (convs): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(12, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((12, 64, 64), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(12, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((12, 64, 64), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(12, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((12, 64, 64), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (3): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(12, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((12, 64, 64), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (4): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(12, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((12, 64, 64), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) ) (downsample): Conv2d(12, 24, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) (1): DiscriminatorBlock( (convs): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(24, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((24, 32, 32), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(24, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((24, 32, 32), eps=1e-05, elementwise_affine=True) (3): 
LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(24, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((24, 32, 32), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (3): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(24, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((24, 32, 32), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (4): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(24, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((24, 32, 32), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) ) (downsample): Conv2d(24, 48, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) (2): DiscriminatorBlock( (convs): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(48, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((48, 16, 16), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(48, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((48, 16, 16), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(48, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((48, 16, 16), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (3): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(48, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((48, 16, 16), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (4): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(48, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((48, 16, 16), 
eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) ) (downsample): Conv2d(48, 96, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) (3): DiscriminatorBlock( (convs): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((96, 8, 8), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((96, 8, 8), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((96, 8, 8), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (3): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((96, 8, 8), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (4): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((96, 8, 8), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) ) (downsample): Conv2d(96, 192, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) (4): DiscriminatorBlock( (convs): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((192, 4, 4), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((192, 4, 4), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (2): 
Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((192, 4, 4), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (3): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((192, 4, 4), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (4): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((192, 4, 4), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) ) (downsample): Conv2d(192, 384, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) (5): DiscriminatorBlock( (convs): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((384, 2, 2), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((384, 2, 2), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((384, 2, 2), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (3): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((384, 2, 2), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (4): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (2): LayerNorm((384, 2, 2), eps=1e-05, 
elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) ) (downsample): Conv2d(384, 768, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) ) (classify): Sequential( (0): Conv2d(768, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): Flatten(start_dim=1, end_dim=-1) ) )"
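The discriminator repr above follows a regular pattern: each `DiscriminatorBlock` stacks five Dropout→Conv→LayerNorm→LeakyReLU sub-layers at a fixed resolution, then a strided 4×4 conv halves the spatial size while doubling the channels (12→24→48→96→192→384→768). A minimal sketch of one such block; the repr does not show the forward pass, so a plain sequential application (no residual wiring) is assumed:

```python
import torch
from torch import nn

class DiscriminatorBlock(nn.Module):
    """One block of the logged discriminator: n_convs conv sub-layers at a
    fixed resolution, then a strided conv that halves H/W and doubles C."""
    def __init__(self, channels: int, resolution: int,
                 n_convs: int = 5, dropout: float = 0.3):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Sequential(
                nn.Dropout(p=dropout),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
                nn.LayerNorm((channels, resolution, resolution)),
                nn.LeakyReLU(),  # default negative_slope=0.01, as in the repr
            )
            for _ in range(n_convs)
        ])
        # 4x4 stride-2 conv: halves the spatial size, doubles the channels.
        self.downsample = nn.Conv2d(channels, 2 * channels,
                                    kernel_size=4, stride=2, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for conv in self.convs:
            x = conv(x)  # assumption: plain sequential application
        return self.downsample(x)

# Shapes of the first block in the repr: 12 channels at 64x64.
block = DiscriminatorBlock(channels=12, resolution=64)
out = block(torch.randn(1, 12, 64, 64))
print(out.shape)  # torch.Size([1, 24, 32, 32])
```

Note that `nn.LayerNorm((C, H, W))` ties each block to one input resolution, which is why every block in the repr hard-codes its spatial size.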
- "StyleGAN( (mapping): MappingNetwork( (norm): LayerNorm((32,), eps=1e-05, elementwise_affine=True) (layers): ModuleList( (0): Sequential( (0): Linear(in_features=32, out_features=32, bias=True) (1): LayerNorm((32,), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Linear(in_features=32, out_features=32, bias=True) (1): LayerNorm((32,), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Linear(in_features=32, out_features=32, bias=True) (1): LayerNorm((32,), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) (3): Sequential( (0): Linear(in_features=32, out_features=32, bias=True) (1): LayerNorm((32,), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) ) (out): Linear(in_features=32, out_features=32, bias=True) ) (synthesis): SynthesisNetwork( (blocks): ModuleList( (0): SynthesisBlock( (layers): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((512, 2, 2), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((512, 2, 2), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((512, 2, 2), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) ) (ada_in): AdaIN() (A1): Linear(in_features=32, out_features=1024, bias=True) (A2): Linear(in_features=32, out_features=1024, bias=True) (B1): Conv2d(10, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (B2): Conv2d(10, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (1): SynthesisBlock( (upsample): ConvTranspose2d(512, 256, kernel_size=(4, 
4), stride=(2, 2), padding=(1, 1)) (conv): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((256, 4, 4), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (layers): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((256, 4, 4), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((256, 4, 4), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((256, 4, 4), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) ) (ada_in): AdaIN() (A1): Linear(in_features=32, out_features=512, bias=True) (A2): Linear(in_features=32, out_features=512, bias=True) (B1): Conv2d(10, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (B2): Conv2d(10, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (2): SynthesisBlock( (upsample): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (conv): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((128, 8, 8), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (layers): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((128, 8, 8), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((128, 8, 8), 
eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((128, 8, 8), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) ) (ada_in): AdaIN() (A1): Linear(in_features=32, out_features=256, bias=True) (A2): Linear(in_features=32, out_features=256, bias=True) (B1): Conv2d(10, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (B2): Conv2d(10, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (3): SynthesisBlock( (upsample): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (conv): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((64, 16, 16), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (layers): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((64, 16, 16), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((64, 16, 16), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((64, 16, 16), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) ) (ada_in): AdaIN() (A1): Linear(in_features=32, out_features=128, bias=True) (A2): Linear(in_features=32, out_features=128, bias=True) (B1): Conv2d(10, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (B2): Conv2d(10, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (4): SynthesisBlock( (upsample): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), 
padding=(1, 1)) (conv): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((32, 32, 32), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (layers): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((32, 32, 32), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((32, 32, 32), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((32, 32, 32), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) ) (ada_in): AdaIN() (A1): Linear(in_features=32, out_features=64, bias=True) (A2): Linear(in_features=32, out_features=64, bias=True) (B1): Conv2d(10, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (B2): Conv2d(10, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (5): SynthesisBlock( (upsample): ConvTranspose2d(32, 16, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (conv): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((16, 64, 64), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (layers): ModuleList( (0): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((16, 64, 64), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((16, 64, 64), eps=1e-05, elementwise_affine=True) 
(3): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Dropout(p=0.3, inplace=False) (1): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): LayerNorm((16, 64, 64), eps=1e-05, elementwise_affine=True) (3): LeakyReLU(negative_slope=0.01) ) ) (ada_in): AdaIN() (A1): Linear(in_features=32, out_features=32, bias=True) (A2): Linear(in_features=32, out_features=32, bias=True) (B1): Conv2d(10, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (B2): Conv2d(10, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) ) (to_rgb): Conv2d(16, 3, kernel_size=(1, 1), stride=(1, 1)) ) )"
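The generator repr shows each `SynthesisBlock` modulating its features through a parameterless `AdaIN()` fed by per-block style projections (`A1`/`A2`, mapping the 32-dim latent to 2×channels for scale and bias) and noise convolutions (`B1`/`B2`, mapping a 10-channel noise map to the block's channel count). The exact forward pass is not recoverable from the repr; the sketch below uses the standard StyleGAN AdaIN formulation, which this run's implementation may differ from in detail:

```python
import torch
from torch import nn

class AdaIN(nn.Module):
    """Adaptive instance norm: normalise each feature map per sample, then
    re-scale and re-shift it with style-derived statistics. Standard StyleGAN
    formulation; the run's own AdaIN may differ in detail."""
    def forward(self, x: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # style: (N, 2*C) -> per-channel scale and bias of shape (N, C, 1, 1).
        scale, bias = style.chunk(2, dim=1)
        scale = scale[..., None, None]
        bias = bias[..., None, None]
        # Instance normalisation over the spatial dimensions.
        mean = x.mean(dim=(2, 3), keepdim=True)
        std = x.std(dim=(2, 3), keepdim=True) + 1e-5
        return scale * (x - mean) / std + bias

# Shapes from the first SynthesisBlock: 512 channels at 2x2, latent size 32.
A1 = nn.Linear(32, 1024)  # 1024 = 2 * 512, matching the repr's A1
x = torch.randn(4, 512, 2, 2)
w = torch.randn(4, 32)
out = AdaIN()(x, A1(w))
print(out.shape)  # torch.Size([4, 512, 2, 2])
```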
- "Adam ( Parameter Group 0 amsgrad: False betas: (0.5, 0.99) eps: 1e-08 initial_lr: 0.0001 lr: 0.0001 weight_decay: 0 )"
- "Adam ( Parameter Group 0 amsgrad: False betas: (0.5, 0.8) eps: 1e-08 initial_lr: 0.0005 lr: 0.0005 weight_decay: 0 )"
- 0.9
- 0.9
- 0
- "<torch.optim.lr_scheduler.MultiStepLR object at 0x7f7e0def00a0>"
- "<torch.optim.lr_scheduler.MultiStepLR object at 0x7f7de440b4c0>"
- 0.5
- 0.1
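The two optimizer reprs above correspond to Adam with betas (0.5, 0.99) at lr 1e-4 and Adam with betas (0.5, 0.8) at lr 5e-4, both with zero weight decay, each driven by a `MultiStepLR` scheduler. Which optimizer belongs to the discriminator and which to the generator, and the schedulers' milestones and gamma, are not recoverable from this export; the scheduler arguments below are labeled assumptions:

```python
import torch
from torch import nn
from torch.optim.lr_scheduler import MultiStepLR

# Stand-in parameters; the real run optimised the two networks logged above.
params_a = nn.Linear(4, 4).parameters()
params_b = nn.Linear(4, 4).parameters()

# Hyperparameters taken directly from the logged optimizer reprs.
optim_a = torch.optim.Adam(params_a, lr=1e-4, betas=(0.5, 0.99), weight_decay=0)
optim_b = torch.optim.Adam(params_b, lr=5e-4, betas=(0.5, 0.8), weight_decay=0)

# MultiStepLR multiplies the lr by `gamma` at each milestone epoch.
# milestones=[] and gamma=0.1 are assumptions: the export shows two empty
# lists and several unlabeled scalars, so the real settings are unknown.
sched_a = MultiStepLR(optim_a, milestones=[], gamma=0.1)
sched_b = MultiStepLR(optim_b, milestones=[], gamma=0.1)

print(optim_a.param_groups[0]["lr"], optim_b.param_groups[0]["lr"])
```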
Summary
Summary metrics are your model's outputs.
- {} 10 keys
- 0.016629763320088385
- 0.116948189586401
- 0.21726657301187516
- 0.9886264562606812
- 7.9802580833435055
- 7.9802580833435055
- {} 7 keys
- 0.9367802023887636
- 0.00000004481908675302
- 0.00000166975577258199
Artifact Outputs
This run produced these artifacts as outputs. Total: 3.
| Type | Name | Consumer count |