iNaturalist
Initial experiments training a small convnet on 2000 images in each of 10 classes.
Hyperparams of tiny network
Tune batch size, layer config, dropout
- best batch size: 32 or 64
- a stack of 5 conv layers and 2 fully connected layers works well (the fc configuration could be explored further)
- a final conv layer of 128 filters seems best, with the prevalent architecture being 16-32-32-64-128 (see the sketch below)
- an fc/dense size of 128 also seems best (interestingly, 64 and 256 barely learn at all)
- probably too early/tiny for dropout to help: val_acc continues to increase without it
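A minimal Keras sketch of this best configuration. The 16-32-32-64-128 conv widths, the 128-unit dense layer, the absence of dropout, and the batch size come from the findings above; the 3x3 kernels, max pooling after each conv layer, ReLU activations, and 128x128 RGB inputs are assumptions the notes don't pin down.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_tiny_convnet(input_shape=(128, 128, 3), num_classes=10):
    # Input resolution and kernel/pooling choices are assumptions.
    model = keras.Sequential([keras.Input(shape=input_shape)])
    # 5 conv layers with the prevalent 16-32-32-64-128 width progression.
    for filters in (16, 32, 32, 64, 128):
        model.add(layers.Conv2D(filters, 3, activation="relu", padding="same"))
        model.add(layers.MaxPooling2D())
    model.add(layers.Flatten())
    # 2 fc layers: a 128-unit dense layer, then the 10-way classifier head.
    # No dropout, since val_acc kept increasing without it.
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model

model = build_tiny_convnet()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # assumes integer labels
              metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=32, ...)  # best batch size: 32 or 64
```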
5K examples
Baseline
Pre-train on Class
Vary LR