
iNaturalist

Initial experiments training a small convnet on 2000 images in each of 10 classes.

[Two charts: training curves plotted against Step (0 to 40), with values in the 0.1 to 0.4 range]

Hyperparams of tiny network

Tune batch size, layer config, dropout

  • best batch size: 32 or 64
  • 5 conv layers and 2 fc layers work well (the fc configuration could be explored further)
  • a last conv layer of 128 seems best, with the prevalent architecture being 16-32-32-64-128 (see the sketch after this list)
  • an fc/dense size of 128 also seems best (64 and 256 barely learn at all, which is interesting)
  • it is probably too early in training, and too tiny a network, for dropout to help: val_acc continues to increase without it
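To make the winning configuration concrete, here is a minimal sketch of the tiny convnet, assuming Keras/TensorFlow (the report includes no code). The input size, pooling placement, activations, and optimizer are assumptions; the conv widths (16-32-32-64-128), the 128-unit dense layer, and the batch size come from the findings above.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_tiny_convnet(input_shape=(64, 64, 3), num_classes=10):
    """Sketch of the 5-conv / 2-fc network described above.
    Input size and pooling placement are assumptions."""
    model = keras.Sequential()
    model.add(keras.Input(shape=input_shape))
    # Five conv layers with the prevalent 16-32-32-64-128 widths.
    for width in (16, 32, 32, 64, 128):
        model.add(layers.Conv2D(width, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D(2))
    model.add(layers.Flatten())
    # Two fc layers; a dense size of 128 worked best.
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model

model = build_tiny_convnet()
model.compile(optimizer="adam",  # optimizer choice is an assumption
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Best batch size was 32 or 64:
# model.fit(x_train, y_train, batch_size=32,
#           validation_data=(x_val, y_val))
```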
Experiment groups (run counts):
  • 2K examples: 4
  • 5K examples: 7
  • Batch size: 4
  • Dropout: 5
  • Optimizer: 4
  • Baseline: 8
  • Pre-train on Class: 8
  • Vary LR: 12
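A hedged sketch of how the batch size, dropout, optimizer, and learning-rate groups above could be reproduced as a wandb sweep; the search method, value grids, and project name are illustrative assumptions, not the exact settings used in these runs.

```python
import wandb

# Illustrative sweep over the hyperparameters varied in the groups above.
# The value grids and project name are assumptions.
sweep_config = {
    "method": "grid",
    "metric": {"name": "val_acc", "goal": "maximize"},
    "parameters": {
        "batch_size": {"values": [32, 64, 128]},
        "dropout": {"values": [0.0, 0.25, 0.5]},
        "optimizer": {"values": ["adam", "sgd", "rmsprop"]},
        "learning_rate": {"values": [1e-2, 1e-3, 1e-4]},
    },
}

sweep_id = wandb.sweep(sweep_config, project="inaturalist-tiny-convnet")
# wandb.agent(sweep_id, function=train)  # train() would build and fit the model
```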