Visualizing 3D Bounding Boxes
Log and share interactive 3D visualizations with just a few lines of code
When I started working with self-driving datasets, I grew frustrated with how hard it was to understand the results of my work. A single 3D visualization can reveal insights that you'll never find in a terminal full of metrics, so I wanted to make it easy to render these visuals for any project and dataset.
Now, with just a few lines of code, you can log your data, see the 3D visualization, and share it with a link:
wandb.log({
    "point_clouds_with_bb": wandb.Object3D({
        "type": "lidar/beta",
        "points": points_rgb,
        "boxes": boxes,
    })
})
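Here points_rgb is a numpy array of point positions (xyz, optionally followed by a category or an RGB color per point), and boxes is a numpy array of dictionaries, each describing one box by its eight corners. A minimal runnable sketch with made-up data (the project name and all values below are illustrative):

import numpy as np
import wandb

wandb.init(project="lidar-boxes")  # hypothetical project name

# points_rgb: (N, 6) array of x, y, z position plus r, g, b color (0-255).
# Random points stand in for a real lidar sweep here.
points = np.random.uniform(-5, 5, size=(1000, 3))
colors = np.random.uniform(0, 255, size=(1000, 3))
points_rgb = np.hstack([points, colors])

# boxes: array of dicts, each with 8 corner coordinates, an optional
# label, and an RGB color.
boxes = np.array([
    {
        "corners": [
            [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
            [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1],
        ],
        "label": "example box",
        "color": [255, 0, 0],
    }
])

wandb.log({"point_clouds_with_bb": wandb.Object3D({
    "type": "lidar/beta",
    "points": points_rgb,
    "boxes": boxes,
})})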
I tried this W&B logging on the new Lyft self-driving dataset. Using their baseline clustering algorithm, I compared its results with the ground-truth labels, immediately finding some interesting patterns.
(Red: Prediction, Green: Ground Truth)
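The two-color overlay comes from tagging each box with a color and label at logging time. A sketch of one way to do it, assuming you already have lists of (8, 3) corner arrays for the predictions and the labels (make_boxes and the random stand-in arrays are illustrative, not from the Lyft baseline):

import numpy as np
import wandb

def make_boxes(corner_list, label, color):
    # Wrap raw (8, 3) corner arrays into the dicts wandb.Object3D expects.
    return [
        {"corners": np.asarray(c).tolist(), "label": label, "color": color}
        for c in corner_list
    ]

# Stand-ins for the clustering baseline's output and the dataset labels.
pred_corners = [np.random.rand(8, 3) * 5]
gt_corners = [np.random.rand(8, 3) * 5]

boxes = np.array(
    make_boxes(pred_corners, "prediction", [255, 0, 0])    # red
    + make_boxes(gt_corners, "ground truth", [0, 255, 0])  # green
)

# points_rgb as in the earlier sketch; assumes wandb.init() was called.
points_rgb = np.hstack([np.random.uniform(-5, 5, (1000, 3)),
                        np.random.uniform(0, 255, (1000, 3))])

wandb.log({"point_clouds_with_bb": wandb.Object3D({
    "type": "lidar/beta",
    "points": points_rgb,
    "boxes": boxes,
})})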

The clustering algorithm struggled to make sense of car orientation. A lidar sensor only sees each object from one direction: points are dense on the side facing the sensor and missing entirely on the far side. As a result, the predicted bounding boxes end up skewed toward the sensor.
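One way to quantify this skew (a sketch added for illustration, not part of the original analysis): project each predicted box center's offset from its ground-truth center onto the direction pointing back at the sensor. Consistently positive values would mean predictions are being pulled toward the dense, sensor-facing side of the cloud.

import numpy as np

def skew_toward_sensor(pred_center, gt_center, sensor_pos=np.zeros(3)):
    # Signed component of the prediction's center error along the ray
    # from the object back toward the sensor; positive means the box
    # was pulled toward the sensor.
    to_sensor = sensor_pos - gt_center
    to_sensor = to_sensor / np.linalg.norm(to_sensor)
    return float(np.dot(pred_center - gt_center, to_sensor))

# Example: a predicted center 0.4 m closer to the sensor than the label.
print(skew_toward_sensor(np.array([9.6, 0.0, 0.0]),
                         np.array([10.0, 0.0, 0.0])))  # 0.4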

This reveals issues not just with the model, but with the dataset itself. Above is one of many examples of possibly mislabeled data.
Try it yourself. I'm excited to see what you come up with!