LIDAR Point Clouds of Driving Scenes

Visualize LIDAR point clouds from the Lyft dataset, annotate with 3D bounding boxes, and explore interactively!

Sample scenes with 3D bounding boxes

We log 6 samples from the Lyft dataset and show 3D bounding boxes around the vehicles. The initial code runs in a Kaggle notebook.

Interaction

Click on the arrows in the top right corner (visible when you hover over an image) to expand a scene and explore it.

You can use your mouse or the arrow keys to zoom, rotate, and move around in the scene.

Annotations

Green = ground truth in the Lyft dataset. Yellow = predictions made by this model.






Initial observations

  • Class-specific color labeling would greatly help in understanding this data (see the sketch after this list).
  • The model is most accurate close to the center of the LIDAR observations and misses more distant objects.
  • The model overfits car orientation to the subset of visible points; the ground-truth boxes are larger and correctly aligned with the street.
  • The model detects more cars oriented consistently with the ego vehicle than cars in an orthogonal orientation (e.g. cars on a crossing street at an intersection), though this could also be a function of distance from the LIDAR.
  • The model mistakes trees and other large background objects for vehicles.
  • The model sometimes mistakes several visible pieces of a larger vehicle (a truck or bus) for smaller separate vehicles.
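
On the first point, here is a minimal sketch of how class-specific coloring could be wired into the box dictionaries logged below. The CLASS_COLORS mapping is illustrative and mine, not from the original notebook; it assumes each Lyft SDK box exposes a name attribute (e.g. "car", "truck", "bus"):

# Hypothetical sketch: color each ground-truth box by class instead of uniform green.
# CLASS_COLORS is an illustrative mapping, not part of the original notebook.
CLASS_COLORS = {
    "car": [0, 255, 0],
    "truck": [0, 128, 255],
    "bus": [255, 0, 255],
    "pedestrian": [255, 128, 0],
}

def class_colored_box(box):
    # Build a W&B box dict whose color encodes the object class
    return {
        "corners": list(zip(*box.corners().tolist())),
        "label": box.name,  # assumes Lyft SDK boxes carry a `name` attribute
        "color": CLASS_COLORS.get(box.name, [128, 128, 128]),  # gray fallback
    }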

How to log point clouds

To log point clouds, pass a NumPy array of points to wandb.Object3D:
wandb.log({"points": wandb.Object3D(np.array(points))})
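
For context, here is a minimal end-to-end sketch using synthetic points (the project name is hypothetical). wandb.Object3D accepts arrays of shape (N, 3) for xyz points, (N, 4) for xyz plus a category, or (N, 6) for xyz plus RGB:

import numpy as np
import wandb

wandb.init(project="lidar-demo")  # hypothetical project name
# 1,000 random xyz points in a 10m cube
points = np.random.uniform(-5, 5, size=(1000, 3))
wandb.log({"points": wandb.Object3D(points)})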

Sample code
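
The snippet below runs in the context of the Kaggle notebook above: points and rgb hold the LIDAR points and their per-point colors, true_boxes is the list of ground-truth Lyft SDK boxes, and predicted_box_corners holds the model's predicted box corners.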

# Box Format in Lyft Dataset:
# Boxes: label: nan, score: nan, xyz: [2172.19, 989.22, -18.46],
# wlh: [2.08, 5.25, 1.97], rot axis: [0.00, 0.00, 1.00], ang(degrees): -31.70, ang(rad): -0.55,
# vel: nan, nan, nan, name: car, token: 0504e4480bc4e9aad8c6bd40f1f2311d57379f978201338bbf369f37f1e7b6d2
# print("---------\n\nBoxes Shape: ", act_boxes.shape)
import numpy as np
import wandb

boxes = []
# Fetch points (with associated colors) for logging in W&B later
points_rgb = np.array([[p[0], p[1], p[2], c[0], c[1], c[2]] for p, c in zip(points, rgb)])
# Loop through ground-truth boxes and convert each 3x8 corner matrix
# into a list of 8 xyz corners (green = ground truth)
for box in true_boxes:
    corners = box.corners().tolist()
    boxes_true_label = {
        "corners": list(zip(*corners)),
        # optionally customize each label
        # "label": "ground truth",
        "color": [0, 255, 0],
    }
    boxes.append(boxes_true_label)
# Same conversion for the model's predicted boxes (yellow = prediction)
for box in predicted_box_corners:
    boxes_guess_label = {
        "corners": list(zip(*box.T.tolist())),
        # "label": "prediction",
        "color": [255, 255, 0],
    }
    boxes.append(boxes_guess_label)
# wandb.Object3D expects the boxes as a numpy array of dicts
boxes = np.array(boxes)
# Log points and boxes in W&B
wandb.log(
    {
        "3d point cloud": wandb.Object3D(
            {
                "type": "lidar/beta",
                "points": points_rgb,
                "boxes": boxes,
            }
        )
    }
)
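
Each entry in boxes is a dictionary: "corners" is a list of the box's 8 xyz corner points, "label" is an optional string rendered next to the box, and "color" is an RGB triple. Note that box.corners() returns a 3x8 matrix, which is why the snippet transposes it with zip before logging.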