
BC04 (MMOneSphere) Naive PN++ Methods

Created on June 11 | Last edited on June 14
Running the naive PN++ method. This scales the translation targets but not the rotations (though we also apply a 100X factor to the rotation loss weight). We could also try it without that and see whether it helps this baseline.
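A minimal sketch of a loss consistent with the setup above, assuming translation targets are pre-scaled and the 100X factor is applied to the rotation terms. The names `SCALE_XYZ`, `ROT_WEIGHT`, and `pose_loss` are illustrative, not from the actual codebase:

```python
import numpy as np

SCALE_XYZ = 250.0   # translation deltas are multiplied by this before training
ROT_WEIGHT = 100.0  # assumed extra weight on the (unscaled) rotation terms

def pose_loss(pred, targ):
    """MSE over a 6D pose: (x, y, z) translation plus a 3D rotation target.
    Translations in `targ` are assumed pre-scaled by SCALE_XYZ; rotations
    are left unscaled but up-weighted by ROT_WEIGHT in the total loss."""
    pos_mse = np.mean((pred[:, :3] - targ[:, :3]) ** 2)
    rot_mse = np.mean((pred[:, 3:] - targ[:, 3:]) ** 2)
    return pos_mse + ROT_WEIGHT * rot_mse, pos_mse, rot_mse

# Example: predictions off by 1.0 in every dimension.
pred = np.zeros((4, 6))
targ = np.ones((4, 6))
total, pos_mse, rot_mse = pose_loss(pred, targ)
# pos_mse = 1.0, rot_mse = 1.0, total = 1.0 + 100 * 1.0 = 101.0
```

With this weighting, a rotation residual of the same magnitude as a position residual dominates the total loss, which is one reason the position-only curves are easier to interpret.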
Update: interestingly, the eval curves have stabilized, though they still trend downward slightly even at 500 epochs; the actual performance is stabilizing. The train and eval curves are on the same scale. Unfortunately, the MSE is computed over 6D poses, so it is a little hard to compare. The training values are definitely lower than the evaluation values.
Note again that this uses simulation data with scaling for xyz but not for rotations, which makes the evaluation further confusing. Maybe we should look only at the position errors, which should be valid? See those plots below as well: train/MSE_loss_pos vs. eval/MSE_loss_pos is the relevant comparison. The values hover around 0.03 to 0.04. What is the corresponding value in meters?
The 0.03-0.04 reflects scaled values. In scooping we take the (deltax, deltay, deltaz) and multiply by 250. But in real we do the same thing with the same 250 constant, so I think we actually want to see if we can get 0.03-0.04 precision, which would be an order of magnitude smaller than the 0.3-0.4 we are seeing now?
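To answer the meters question: if the logged value is a mean squared error on the 250x-scaled deltas, converting back to meters means taking a square root to undo the squaring and then dividing by the scale. A back-of-the-envelope check (the 0.03 to 0.04 range is from the plots above; treating it as an MSE rather than an absolute error is an assumption):

```python
import math

SCALE_XYZ = 250.0  # (deltax, deltay, deltaz) are multiplied by this constant

def scaled_mse_to_meters(mse_scaled):
    """Convert an MSE on 250x-scaled position deltas to an RMSE in meters:
    sqrt to undo the squaring, then divide by the scale factor."""
    return math.sqrt(mse_scaled) / SCALE_XYZ

for mse in (0.03, 0.04):
    print(f"MSE_loss_pos={mse} -> ~{scaled_mse_to_meters(mse) * 1000:.2f} mm")
# MSE_loss_pos=0.03 -> ~0.69 mm
# MSE_loss_pos=0.04 -> ~0.80 mm
```

If instead the logged value is already an absolute (unsquared) error in scaled units, the conversion is just a division by 250, giving roughly 0.12 to 0.16 mm.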

Results


Naive PN++ to Vector (scaled targ) (5 runs)