The PandaSet Dataset
PandaSet is a high-quality autonomous driving dataset that boasts the largest number of annotated objects among 3D scene understanding datasets.
Created on September 14 | Last edited on September 30
What Is the PandaSet Dataset?
PandaSet includes carefully planned routes and selected scenes that showcase complex urban driving scenarios, including steep hills, construction zones, dense traffic and pedestrians, and a variety of times of day and lighting conditions.
The PandaSet Dataset contains 3D bounding box annotations for 28 object classes and a rich set of class attributes related to activity, visibility, location, and pose. It also includes point cloud segmentation with 37 semantic labels, including smoke, car exhaust, vegetation, and drivable surface.
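A 3D bounding box annotation is typically stored as a center, dimensions, and a yaw angle; converting that parameterization into its eight corner points is a common first step when visualizing or evaluating boxes. Below is a minimal sketch of that conversion. The parameter layout here is illustrative only; consult the PandaSet devkit for the exact cuboid schema the dataset uses.

```python
import numpy as np

def cuboid_corners(cx, cy, cz, length, width, height, yaw):
    """Return the 8 corners (8x3 array) of a yaw-rotated 3D box.

    The (center, dims, yaw) layout is an illustrative assumption,
    not the exact PandaSet cuboid schema.
    """
    # Axis-aligned corner offsets around the origin
    x = np.array([1, 1, 1, 1, -1, -1, -1, -1]) * length / 2.0
    y = np.array([1, 1, -1, -1, 1, 1, -1, -1]) * width / 2.0
    z = np.array([1, -1, 1, -1, 1, -1, 1, -1]) * height / 2.0
    corners = np.stack([x, y, z], axis=1)
    # Rotate around the vertical (z) axis by yaw, then translate to the center
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return corners @ rot.T + np.array([cx, cy, cz])

# A 4m x 2m x 1.5m box centered at (10, 5, 1) with no rotation
corners = cuboid_corners(10.0, 5.0, 1.0, 4.0, 2.0, 1.5, 0.0)
```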
What We're Covering About the PandaSet Dataset
- What Is the PandaSet Dataset?
- What We're Covering About the PandaSet Dataset
- General Info About the PandaSet Dataset
- Supported Tasks of the PandaSet Dataset
- LiDAR-Only 3D Object Detection
- LiDAR-Camera Fusion 3D Object Detection
- LiDAR Point Cloud Segmentation
- Recommended Reading
General Info About the PandaSet Dataset
- Leaderboard: Coming Soon!
Supported Tasks of the PandaSet Dataset
Here are the tasks that are supported by the PandaSet Dataset:
LiDAR-Only 3D Object Detection
LiDAR-Only 3D object detection is the task of predicting 3D bounding boxes around objects in a point cloud given only LiDAR sensor data. PandaSet provides data for both mechanical spinning and forward-facing LiDAR configurations.
Models can be evaluated on the Average Precision (AP) metric for the following three classes:
1. Pedestrian
2. Vehicle
3. Cyclist
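The per-class Average Precision mentioned above is the area under the precision-recall curve built from score-ranked detections. Here is a minimal, generic sketch of all-point-interpolated AP; it assumes detections have already been matched to ground truth (for example by an IoU threshold), and it is not PandaSet's official evaluation code.

```python
import numpy as np

def average_precision(scores, is_true_positive, num_gt):
    """All-point-interpolated AP for one class.

    scores: confidence of each detection.
    is_true_positive: 1 if the detection matched a ground-truth box, else 0
      (matching, e.g. by IoU threshold, is assumed to have happened already).
    num_gt: total number of ground-truth boxes for this class.
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / num_gt
    precision = tp_cum / (tp_cum + fp_cum)
    # Make precision monotonically non-increasing, then integrate over recall
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    recall = np.concatenate(([0.0], recall))
    return float(np.sum((recall[1:] - recall[:-1]) * precision))
```

For example, a detector whose highest-scoring detection is a false positive but whose second detection recovers the single ground-truth box gets an AP of 0.5.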
LiDAR-Camera Fusion 3D Object Detection
LiDAR-Camera Fusion 3D Object Detection fuses annotated LiDAR and camera sensor data. For this task, annotations exist for 19 classes, including the three classes from LiDAR-only detection and one background class. Models can again be evaluated using Average Precision.
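Fusing LiDAR and camera data hinges on projecting 3D points into the image plane using the camera calibration. The sketch below is a generic pinhole-camera projection under assumed calibration inputs (a 4x4 extrinsic transform and a 3x3 intrinsic matrix); PandaSet's devkit ships its own geometry helpers and per-camera calibration, which should be used in practice.

```python
import numpy as np

def project_to_image(points_lidar, T_cam_from_lidar, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    T_cam_from_lidar: assumed 4x4 extrinsic transform (LiDAR -> camera frame).
    K: assumed 3x3 pinhole intrinsic matrix.
    Returns pixel coordinates for points in front of the camera,
    plus the boolean mask of which input points those were.
    """
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coords
    cam = (T_cam_from_lidar @ homo.T).T[:, :3]          # into camera frame
    in_front = cam[:, 2] > 0                            # drop points behind camera
    pix = (K @ cam[in_front].T).T
    pix = pix[:, :2] / pix[:, 2:3]                      # perspective divide
    return pix, in_front
```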
LiDAR Point Cloud Segmentation
The PandaSet Dataset also provides ground truth labels for LiDAR point cloud segmentation. The authors of the dataset establish a baseline for this task on the original 37 classes, which they merge into 14 classes for autonomous driving evaluation.
The commonly used intersection-over-union (IoU) metric is used to evaluate models on this dataset.
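For segmentation, per-class IoU is TP / (TP + FP + FN), usually computed from a confusion matrix over all labeled points and then averaged into a mean IoU. A minimal generic sketch (not PandaSet's official evaluation code):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Per-class IoU and mean IoU from flat integer label arrays.

    IoU_c = TP_c / (TP_c + FP_c + FN_c), computed via a confusion matrix;
    classes absent from both prediction and ground truth are ignored (NaN).
    """
    pred = np.asarray(pred)
    gt = np.asarray(gt)
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (gt, pred), 1)          # rows: ground truth, cols: prediction
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp              # predicted as c but actually other
    fn = cm.sum(axis=1) - tp              # actually c but predicted other
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    return iou, float(np.nanmean(iou))
```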
Recommended Reading
The Berkeley Deep Drive (BDD100K) Dataset
The BDD100K dataset is the largest and most diverse driving video dataset with 100,000 videos annotated for 10 different perception tasks in autonomous driving.
The nuScenes Dataset
nuScenes is a large-scale 3D perception dataset for autonomous driving provided by Motional. The dataset has 3D bounding boxes for 1,000 scenes.
The Semantic KITTI Dataset
SemanticKITTI is a large semantic segmentation and scene understanding dataset developed for LiDAR-based autonomous driving. But what is it, and what is it for?
The Waymo Open Dataset
The Waymo Open Dataset is a perception and motion planning video dataset for self-driving cars. It's composed of perception and motion planning datasets.
The Woven Planet (Lyft) Level 5 Dataset
In this article, we'll be exploring the Woven Planet (Lyft) Level 5 dataset. We'll look at what it is, as well as the autonomous vehicle tasks and techniques it supports.
The Many Datasets of Autonomous Driving
Below we'll explore the datasets used to train autonomous driving systems to perform the various tasks required of them.