Debugging Bounding Boxes with Interactive Visualizations

Nick Bardy

Debugging machine learning models often means analyzing patterns in thousands of model predictions. At Weights & Biases, we’re working with self-driving car teams who need to dynamically visualize the results of object detection models. It often isn’t obvious which set of boxes you want to visualize until your training run completes. With traditional tools, you’re stuck with the visualization decisions you made before training. If you set your accuracy cutoff too low, you’ll end up with an overwhelming number of boxes, making it hard to see which ones matter. Too high, and you won’t get enough boxes to see where your model goes wrong.

With our new interactive bounding box feature, we’re flipping that problem on its head. Instead of deciding how to filter your boxes before your run, simply log them all! Afterwards, use our rich set of filters and toggles to hide and show boxes based on dynamic criteria of your choice, exploring different aspects of your model with ease.
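To make the "log everything, filter later" workflow concrete, here is a minimal sketch of preparing per-box data in the format `wandb.Image` accepts for overlays: each box carries a normalized position, a class id, and a per-box score that the UI can filter on. The detection tuples, the `to_box_data` helper, and the score name `confidence` are illustrative, not part of the W&B API.

```python
def to_box_data(detections, class_labels):
    """Convert (x_min, y_min, x_max, y_max, class_id, confidence) tuples
    into the box_data dicts that wandb.Image expects for overlays.
    Coordinates are assumed to already be normalized to [0, 1]."""
    return [
        {
            "position": {"minX": x0, "minY": y0, "maxX": x1, "maxY": y1},
            "class_id": cls,
            "box_caption": f"{class_labels[cls]} {conf:.2f}",
            # Log every box with its raw score; filter in the UI later
            # instead of thresholding here.
            "scores": {"confidence": conf},
        }
        for (x0, y0, x1, y1, cls, conf) in detections
    ]

# Hypothetical model output: two detections, one high- and one low-confidence.
class_labels = {0: "car", 1: "pedestrian"}
detections = [(0.10, 0.20, 0.45, 0.60, 0, 0.91),
              (0.55, 0.30, 0.70, 0.80, 1, 0.12)]
box_data = to_box_data(detections, class_labels)

# Logging side (requires an active wandb run; shown for context):
# import wandb
# wandb.log({"predictions": wandb.Image(image, boxes={
#     "predictions": {"box_data": box_data, "class_labels": class_labels},
# })})
```

Because every box is logged, regardless of confidence, the run page can later slide the confidence cutoff up or down without retraining or re-logging anything.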

See a live example →

Documentation →

Join our mailing list to get the latest machine learning updates.