Learning Curves for Wellplate
To understand the impact of training set size, I retrained the Wellplate KeypointRCNN model on a range of training set sizes.
I used an Adam optimizer and early stopping (patience = 2); a training-loop sketch follows the list:
- lr = 0.0001
- optimizer = torch.optim.Adam(params, lr=lr, weight_decay=0.0005)
- no learning rate decay!
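
Below is a minimal sketch of this training setup, assuming a torchvision KeypointRCNN model and standard detection-style dataloaders. Only the optimizer settings and the early-stopping patience come from the list above; `max_epochs`, the device handling, and the `evaluate_pixel_error` helper (sketched after the next paragraph) are assumptions for illustration.

```python
import torch


def train_with_early_stopping(model, train_loader, val_loader,
                              max_epochs=50, patience=2, lr=0.0001,
                              device="cuda"):
    """Train a KeypointRCNN with Adam and early stopping on validation pixel error."""
    model.to(device)
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(params, lr=lr, weight_decay=0.0005)
    # No LR scheduler: the learning rate stays constant, as noted above.

    best_val = float("inf")
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        model.train()
        for images, targets in train_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss_dict = model(images, targets)  # KeypointRCNN returns a loss dict in train mode
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        # Hypothetical helper, sketched below: mean pixel error on the validation set.
        val_error = evaluate_pixel_error(model, val_loader, device)
        if val_error < best_val:
            best_val = val_error
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # early stopping: no improvement for `patience` consecutive epochs

    return best_val
```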
Looking at the validation pixel error (the average Euclidean distance between predicted and target keypoints), the curve has not flattened out, so we should expect further improvement from collecting more data.
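
For reference, here is a hedged sketch of that metric. How predictions are matched to ground truth is my assumption (taking the top-scoring detection per image), not something stated in the report.

```python
import torch


@torch.no_grad()
def evaluate_pixel_error(model, val_loader, device="cuda"):
    """Mean Euclidean distance (in pixels) between predicted and target keypoints."""
    model.eval()
    distances = []
    for images, targets in val_loader:
        images = [img.to(device) for img in images]
        outputs = model(images)  # list of dicts with 'keypoints', 'scores', ...
        for out, tgt in zip(outputs, targets):
            if len(out["keypoints"]) == 0:
                continue  # assumption: skip images with no detections
            pred = out["keypoints"][0][:, :2]               # (K, 2) x/y of top-scoring detection
            gt = tgt["keypoints"].to(device)[0][:, :2]      # (K, 2) ground-truth x/y
            distances.append(torch.linalg.norm(pred - gt, dim=1).mean())
    return torch.stack(distances).mean().item() if distances else float("inf")
```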