Physical Datasets (Naive Methods, Translation Data v03)
This report focuses on training the baseline (naive) methods, using translation data only.
Created on June 12 | Last edited on June 13
Overview
(06/12/2022) Now we have newer translation data: v03_physicalTranslations_pkls
Compare with the earlier translation data: https://wandb.ai/mooey5775/mixed_media/reports/Physical-Datasets-Naive-Methods-Translation-Data-v02---VmlldzoyMTUyODQx where we were getting an eval MSE loss of about 0.6 (less than ideal).
When interpreting the MSE, note that it comes directly from PyTorch's default MSE loss. We scale target values by 250 for prediction stability, then divide by 250 at test time. For example:
```python
In [5]: a = torch.from_numpy(np.array([0,0,0]))

In [6]: b = torch.from_numpy(np.array([0.5,0.5,0.5]))

In [7]: loss(a,b)
Out[7]: tensor(0.2500, dtype=torch.float64)

In [8]: 0.5 / 250
Out[8]: 0.002
```
So if the loss we see is 0.25, then (assuming the errors in x, y, and z are equal) the per-coordinate difference is 0.5 in the scaled units, which works out to 0.5 / 250 = 0.002 meters (2 millimeters) in the original units.
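The conversion above can be sketched as a small helper. This is a minimal sketch, not the project's actual code: `SCALE = 250` comes from the report, while `mse_to_meters` is a hypothetical helper name introduced here for illustration.

```python
import numpy as np
import torch

SCALE = 250.0  # targets are multiplied by this during training (from the report)

def mse_to_meters(mse: float, scale: float = SCALE) -> float:
    """Hypothetical helper: assuming equal error on each coordinate,
    MSE = err**2, so the scaled per-coordinate error is sqrt(mse);
    divide by the scale factor to recover meters."""
    return float(np.sqrt(mse)) / scale

# Reproduce the worked example: zero prediction vs. 0.5 on each axis.
pred = torch.zeros(3, dtype=torch.float64)
target = torch.full((3,), 0.5, dtype=torch.float64)
mse = torch.nn.functional.mse_loss(pred, target).item()

print(mse)                  # 0.25
print(mse_to_meters(mse))   # 0.002, i.e. 2 millimeters
```

This is why an eval loss around 0.25 is already sub-centimeter in real units, whereas the earlier 0.6 corresponds to roughly 3 mm of per-coordinate error.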
Results (Scaling Targets)
Naive, NoDataAug
Naive, DataAug 0.00001
Naive, DataAug 0.0001
Naive, DataAug 0.0004