3D Objects in Driving Scenes
Visualize & precisely annotate 3D scenes in W&B
Created on October 11 | Last edited on October 18
This 3D scene understanding demo on the Level 5 dataset shows how to log, visualize, annotate, & explore 3D objects, scenes, and bounding boxes in W&B. We'd love comments directly on this report, as well as feedback on which specific features would help your team's 3D point cloud visualizations, especially around camera viewpoints, bounding box orientation, and sequential scenes.
Labeling objects in a 10-scene sample
We label model predictions on 10 sample scenes from the Level 5 dataset and generate scene-level metrics: average IOU and max IOU across the individual boxes in each scene.
- green: "truth", a correct/labeled ground truth bounding box
- yellow: a "match", a prediction close to a true box
- red: a "guess", a prediction far from a true box
The sample scenes below are sorted by lowest accuracy first, so we can focus on opportunities to improve the model.
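To make the color legend concrete, here is a minimal sketch of how each prediction could be scored against the ground truth and bucketed into "match" or "guess". It assumes axis-aligned boxes and a hypothetical IOU threshold of 0.5; the report does not state the exact matching criterion, so treat both as placeholders:

```python
import numpy as np

def iou_3d_axis_aligned(a, b):
    """IOU of two axis-aligned 3D boxes given as (xmin, ymin, zmin, xmax, ymax, zmax)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    lo = np.maximum(a[:3], b[:3])          # lower corner of the intersection
    hi = np.minimum(a[3:], b[3:])          # upper corner of the intersection
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter)

def classify(pred, truth_boxes, threshold=0.5):
    """'match' (yellow) if the prediction is close to some true box, else 'guess' (red).
    The 0.5 threshold is an assumed value for illustration."""
    best = max((iou_3d_axis_aligned(pred, t) for t in truth_boxes), default=0.0)
    return ("match" if best >= threshold else "guess"), best

truth = [(0, 0, 0, 4, 2, 2)]
label, best = classify((0.5, 0, 0, 4.5, 2, 2), truth)   # slightly shifted box
# -> ("match", ~0.778)
far_label, _ = classify((10, 10, 10, 12, 12, 12), truth)  # no overlap -> "guess"
```

Per-scene average and max IOU then follow by aggregating `best` over all predicted boxes in the scene.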
Run: sample_10_scenes
Syntax: Log 3D bounding boxes to W&B
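A minimal sketch of the `wandb.Object3D` point-cloud format: points are an N x 3 array, and each bounding box is a dict with eight `corners`, an optional `label`, and an RGB `color`. The point cloud, box geometry, and project name below are illustrative placeholders, not values from the Level 5 run:

```python
import numpy as np

def box_to_corners(center, size, yaw):
    """Eight corners of a yaw-rotated 3D box, in the list form W&B expects."""
    dx, dy, dz = np.asarray(size, dtype=float) / 2.0
    local = np.array([[sx, sy, sz] for sx in (-dx, dx)
                                   for sy in (-dy, dy)
                                   for sz in (-dz, dz)])
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return (local @ rot.T + np.asarray(center, dtype=float)).tolist()

points = np.random.uniform(-20, 20, size=(500, 3))   # placeholder point cloud
boxes = np.array([{
    "corners": box_to_corners([2.0, 1.0, 0.5], [4.0, 2.0, 1.5], yaw=0.3),
    "label": "truth",
    "color": [0, 255, 0],   # green = ground truth, matching the legend above
}])

scene = {"type": "lidar/beta", "points": points, "boxes": boxes}

# Inside a wandb run:
# import wandb
# run = wandb.init(project="lyft-3d")   # hypothetical project name
# run.log({"scene": wandb.Object3D(scene)})
```

Logging one `wandb.Object3D` per scene produces the interactive 3D panels shown in this report.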
Prediction details
Static examples with annotations and possible next steps.
Max IOU across sample
An IOU of 0.847 corresponds to near-perfect alignment of the front of the truck, but a noticeable difference in where the truck ends.

Lowest-accuracy scene: trees are frequent false positives


Opportunities to adjust the overall scoring function?
We could weight different types of 3D overlap so the score reflects the severity of the error in practice. For example, missing a significant portion of a larger object (the leftmost box pair) may have greater consequences than capturing the full object at a slightly different angle (the middle box pair). The rightmost box pair may be a valuable boundary case between these two scenarios.
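One illustrative way to encode that weighting is to split the overlap into volume recall (how much of the true object is captured) and volume precision (how much of the prediction lies on the object), and penalize misses more heavily. The `miss_weight` value is an assumption for illustration, not part of the report's metric:

```python
def weighted_overlap_score(inter_vol, truth_vol, pred_vol, miss_weight=0.7):
    """Blend of volume recall and precision; miss_weight > 0.5 penalizes
    missed ground-truth volume more than extra predicted volume.
    Illustrative sketch only; the weights are assumed, not from this report."""
    recall = inter_vol / truth_vol     # fraction of the true object captured
    precision = inter_vol / pred_vol   # fraction of the prediction on-object
    return miss_weight * recall + (1 - miss_weight) * precision

# Missing half of a large object scores worse than a slight misalignment:
missed_half = weighted_overlap_score(inter_vol=8, truth_vol=16, pred_vol=8)    # 0.65
misaligned = weighted_overlap_score(inter_vol=14, truth_vol=16, pred_vol=16)   # 0.875
```

Under plain IOU these two cases would score 0.50 and 0.78; the weighted version widens the gap in the direction the paragraph above argues for.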
