Reports
Modifying Keypoint RCNN loss functions
We tried training the KeypointRCNN model with different (hacky) loss functions that penalize cases where an individual keypoint lands far from the ground truth; the variants are listed below and sketched in code after the list.
- max_keypoint_loss: compute the loss individually for all 5 keypoints, then take the max
- hybrid_loss: average of max loss and average loss
- wellplate_v8_data: unmodified baseline using the standard cross-entropy loss.
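A minimal sketch of the two modified losses, assuming per-keypoint heatmap logits of shape (N, K, H*W) and integer ground-truth heatmap indices of shape (N, K), roughly in the spirit of torchvision's keypoint loss; the tensor shapes and the `per_keypoint_ce` helper are illustrative, not the actual training code.

```python
import torch
import torch.nn.functional as F

def per_keypoint_ce(keypoint_logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Cross entropy computed separately for each of the K keypoints.

    keypoint_logits: (N, K, H*W) spatial logits per keypoint
    targets:         (N, K) index of the ground-truth heatmap cell
    returns:         (K,) loss per keypoint, averaged over the batch
    """
    n, k, hw = keypoint_logits.shape
    return torch.stack(
        [F.cross_entropy(keypoint_logits[:, j, :], targets[:, j]) for j in range(k)]
    )

def max_keypoint_loss(keypoint_logits, targets):
    # Take the worst (largest) of the K per-keypoint losses.
    return per_keypoint_ce(keypoint_logits, targets).max()

def hybrid_loss(keypoint_logits, targets):
    # Average of the max loss and the usual mean loss.
    per_kp = per_keypoint_ce(keypoint_logits, targets)
    return 0.5 * (per_kp.max() + per_kp.mean())
```

The intent is that the max term keeps a single badly placed keypoint from being averaged away, while the hybrid variant still gives some gradient signal on the remaining keypoints.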
2023-01-19
Testing different batch sizes
Tested different effective batch sizes via gradient accumulation (see the sketch below). The original model with batch_size = 2 is `learning_curve_338_early_stopping`; the other models carry the suffix _{x}_{y}, where x is the effective batch size and y is the mini-batch size.
It looks like the performance is getting worse for larger batch sizes.
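For reference, a minimal sketch of the accumulation loop, assuming a torchvision-style KeypointRCNN that returns a dict of losses in training mode and a data loader yielding (images, targets) pairs; the function and argument names are placeholders, not the report's code.

```python
import torch

def train_one_epoch_accum(model, optimizer, data_loader, batch_size, mini_batch_size, device):
    """Build an effective batch of `batch_size` from mini-batches of `mini_batch_size`."""
    accumulation_steps = batch_size // mini_batch_size
    model.train()
    optimizer.zero_grad()
    for step, (images, targets) in enumerate(data_loader):
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)                    # dict of losses in training mode
        loss = sum(loss_dict.values()) / accumulation_steps   # scale to match one large batch
        loss.backward()                                       # gradients accumulate across mini-batches
        if (step + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```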
2023-01-10
Learning Curves for Wellplate
To understand the impact of training set size, I retrained the Wellplate KeypointRCNN model on training sets of different sizes.
I used the Adam optimizer with early stopping (patience = 2); a sketch of this setup follows the list below.
- `lr = 0.0001`
- `optimizer = torch.optim.Adam(params, lr=lr, weight_decay=0.0005)`
- no learning rate decay!
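A minimal sketch of this setup, assuming hypothetical `train_one_epoch` and `evaluate` hooks (the latter returning the validation loss); only the optimizer line and the patience value come from the report, and `max_epochs` is an arbitrary cap.

```python
import torch

def train_with_early_stopping(model, train_one_epoch, evaluate,
                              train_loader, val_loader,
                              lr=0.0001, weight_decay=0.0005,
                              patience=2, max_epochs=100):
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(params, lr=lr, weight_decay=weight_decay)
    # No learning-rate decay: lr stays constant for the whole run.
    best_val, bad_epochs = float("inf"), 0
    for epoch in range(max_epochs):
        train_one_epoch(model, optimizer, train_loader)
        val_loss = evaluate(model, val_loader)
        if val_loss < best_val:
            best_val, bad_epochs = val_loss, 0
            torch.save(model.state_dict(), "best.pt")  # keep the best checkpoint
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # stop after `patience` epochs without improvement
    return best_val
```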
Just looking at the validation pixel error (average Euclidean distance between predicted and ground-truth keypoints; the metric is sketched below), it seems we should expect further improvement from collecting more data.
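The metric itself is just the mean Euclidean distance in pixel space; a small sketch, assuming keypoints are given as (N, K, 2) tensors of (x, y) pixel coordinates:

```python
import torch

def mean_pixel_error(pred_keypoints: torch.Tensor, gt_keypoints: torch.Tensor) -> float:
    """Average Euclidean distance (in pixels) between predicted and ground-truth keypoints.

    Both inputs have shape (N, K, 2) with (x, y) pixel coordinates.
    """
    diffs = (pred_keypoints - gt_keypoints).float()
    distances = torch.linalg.norm(diffs, dim=-1)  # (N, K) per-keypoint distances
    return distances.mean().item()
```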
2023-01-07