W&B for Autonomous Vehicles
A collection of W&B Reports for AV use cases
Narrative Overview
In-depth examples of exploratory projects in this domain, including semantic segmentation of driving scenes and depth perception from video, with supporting charts, concrete examples, interactive visualizations, and analysis in W&B.
Video to 3D: Depth Perception for Self-Driving Cars
Unsupervised learning of depth perception from dashboard cameras.
Semantic Segmentation: The View from the Driver's Seat
This article explores semantic segmentation for scene parsing on Berkeley DeepDrive 100K (BDD100K), including how to distinguish people from vehicles.
Visualization Tools
Tutorials on how to use custom visualizations for analyzing model predictions in the autonomous vehicle space:
- 3D point clouds for 3D object detection and LIDAR data
- image masks for semantic segmentation and scene understanding in 2D
- bounding boxes for object detection in 2D
Exploring Bounding Boxes for Object Detection With Weights & Biases
In this article, we take a look at how to log and explore bounding boxes with Weights & Biases.
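Logging boxes takes a single `wandb.Image` call with a `boxes` dictionary. Here is a minimal sketch; the project name, image array, and box coordinates are illustrative placeholders, not values from the article. By default, W&B interprets the coordinates as fractions of the image dimensions.

```python
import numpy as np
import wandb

run = wandb.init(project="av-bbox-demo")  # hypothetical project name

# Placeholder driving-scene image (H x W x 3); substitute a real frame.
image = np.random.randint(0, 255, size=(480, 640, 3), dtype=np.uint8)

run.log({
    "driving_scene": wandb.Image(
        image,
        boxes={
            "predictions": {
                "box_data": [
                    {
                        # Coordinates are fractions of image width/height by default.
                        "position": {"minX": 0.1, "minY": 0.4, "maxX": 0.35, "maxY": 0.7},
                        "class_id": 2,
                        "box_caption": "car (0.91)",
                        "scores": {"confidence": 0.91},
                    }
                ],
                "class_labels": {0: "person", 1: "bicycle", 2: "car"},
            }
        },
    )
})
run.finish()
```

In the W&B UI you can then toggle classes on and off and filter the rendered boxes by their logged scores.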
Image Masks for Semantic Segmentation Using Weights & Biases
This article explains how to log and explore semantic segmentation masks, and how to interactively visualize models' predictions with Weights & Biases.
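Masks follow the same pattern: pass a `masks` dictionary to `wandb.Image`, with one entry per overlay (for example, predictions and ground truth). A minimal sketch, with placeholder arrays standing in for a real image and model output:

```python
import numpy as np
import wandb

run = wandb.init(project="av-segmentation-demo")  # hypothetical project name

class_labels = {0: "road", 1: "car", 2: "person"}

# Placeholder image and per-pixel class-id maps; substitute real data.
image = np.random.randint(0, 255, size=(480, 640, 3), dtype=np.uint8)
pred_mask = np.random.randint(0, 3, size=(480, 640))
gt_mask = np.random.randint(0, 3, size=(480, 640))

run.log({
    "segmentation": wandb.Image(
        image,
        masks={
            # Each key becomes a separately toggleable overlay in the UI.
            "predictions": {"mask_data": pred_mask, "class_labels": class_labels},
            "ground_truth": {"mask_data": gt_mask, "class_labels": class_labels},
        },
    )
})
run.finish()
```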
LIDAR Point Clouds of Driving Scenes
Visualize LIDAR point clouds from the Lyft dataset, annotate with 3D bounding boxes, and explore interactively!
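Point clouds are logged with `wandb.Object3D`. A bare numpy array of shape (N, 4) renders points colored by the fourth column, while the `lidar/beta` dictionary format also accepts 3D bounding boxes specified by their eight corners. A minimal sketch with synthetic points in place of real lidar returns from the Lyft dataset:

```python
import numpy as np
import wandb

run = wandb.init(project="av-lidar-demo")  # hypothetical project name

# Synthetic stand-in for a lidar sweep: x, y, z plus a category
# column that the viewer maps to a color.
xyz = np.random.uniform(-5, 5, size=(1000, 3))
category = np.random.randint(0, 4, size=(1000, 1))
points = np.hstack([xyz, category])

# One axis-aligned 3D box, specified by its eight corners.
box = {
    "corners": [
        [0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0],
        [1, 1, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1],
    ],
    "label": "car",
    "color": [255, 0, 0],
}

run.log({
    "lidar_scene": wandb.Object3D({
        "type": "lidar/beta",
        "points": points,
        "boxes": np.array([box]),
    })
})
run.finish()
```

The logged scene is interactive: you can rotate and zoom the point cloud in the browser and inspect the labeled boxes in place.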