The Woven Planet (Lyft) Level 5 Dataset
In this article, we'll be exploring the Woven Planet (Lyft) Level 5 dataset. We'll look at what it is, as well as the autonomous vehicle tasks and techniques it supports.
What Is The Woven Planet Level 5 Dataset?
The Woven Planet (Lyft) Level 5 dataset is the largest autonomous-driving dataset for motion planning and prediction tasks. It contains over 1,000 hours of data collected by a fleet of 20 self-driving cars and is accompanied by a high-definition semantic map and high-resolution aerial imagery.
The semantic map alone contains 15,242 labeled elements, and the dataset supports autonomous-driving machine learning tasks such as motion forecasting, motion planning, and simulation.
What We're Covering About The Level 5 Dataset
What Is The Woven Planet Level 5 Dataset?
What We're Covering About The Level 5 Dataset
General Info About The Level 5 Dataset
Dataset Structure
Supported Tasks Of The Level 5 Dataset
Motion Prediction
Motion Simulation
Motion Planning
Recommended Reading
General Info About The Level 5 Dataset
Dataset Structure
```
prediction-dataset/
  +- scenes/
  |  +- sample.zarr
  |  +- train.zarr
  |  +- train_full.zarr
  +- aerial_map/
  |  +- aerial_map.png
  +- semantic_map/
  |  +- semantic_map.pb
  +- meta.json
```
The dataset has three components:
- Scenes: 1,000 hours of traffic scenes collected by 20 self-driving vehicles driving over 26,000 km. There are over 170,000 scenes in the dataset, each lasting 25 seconds.
- High-definition semantic map: 15,242 labeled elements, including 8,500 lane segments.
- High-resolution satellite map: a resolution of 6 cm per pixel, covering over 74 sq. km.
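A quick way to sanity-check the scenes component is to open one of the zarr splits with the l5kit toolkit. A minimal sketch, assuming the dataset has been downloaded and the L5KIT_DATA_FOLDER environment variable points at the prediction-dataset/ folder:
```python
from l5kit.data import ChunkedDataset, LocalDataManager

# LocalDataManager(None) resolves relative paths against L5KIT_DATA_FOLDER.
dm = LocalDataManager(None)

# Open the small sample split and print its scene/frame/agent counts.
zarr_dataset = ChunkedDataset(dm.require("scenes/sample.zarr")).open()
print(zarr_dataset)
```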
Supported Tasks Of The Level 5 Dataset
Here are the tasks supported by the Woven Planet Level 5 dataset:
Motion Prediction
In motion prediction, the task is to predict the expected future (x, y) positions of the traffic participants (vehicles, pedestrians, and cyclists) surrounding the ego vehicle over a T = 5 second horizon, given their current (and sometimes also historical) positions.
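A minimal loading sketch with the l5kit toolkit shows what a motion prediction sample looks like. It again assumes L5KIT_DATA_FOLDER is set, and that a standard l5kit YAML config is available (the ./agent_motion_config.yaml path below is a placeholder):
```python
from l5kit.configs import load_config_data
from l5kit.data import ChunkedDataset, LocalDataManager
from l5kit.dataset import AgentDataset
from l5kit.rasterization import build_rasterizer

# LocalDataManager(None) resolves relative paths against L5KIT_DATA_FOLDER.
dm = LocalDataManager(None)
cfg = load_config_data("./agent_motion_config.yaml")  # placeholder config path

# Open the zarr containing the raw scenes.
zarr_dataset = ChunkedDataset(dm.require("scenes/sample.zarr")).open()

# The rasterizer renders map + agent history into a bird's-eye-view image.
rasterizer = build_rasterizer(cfg, dm)

# Each item is one agent at one frame: a BEV raster plus its future positions.
dataset = AgentDataset(cfg, zarr_dataset, rasterizer)
sample = dataset[0]
print(sample["image"].shape)              # model input
print(sample["target_positions"].shape)   # future (x, y) offsets to predict
```
Training a prediction model then reduces to regressing target_positions from the rasterized bird's-eye-view image.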
The following video by the creators of the Level 5 dataset explains the tasks and available models in further detail.
Motion Simulation
The released Woven Planet Level 5 dataset toolkit (l5kit) also contains a simulation environment to test the interaction between agents and the autonomous vehicle when both are controlled by an ML policy.
Check out this short tutorial that defines the task of simulation for autonomous driving.
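As a rough sketch of how that closed-loop simulation is driven with l5kit (the SimulationConfig values are illustrative, and eval_dataset, ego_model, and agents_model are assumptions: an EgoDataset and two trained policies built elsewhere):
```python
import torch
from l5kit.simulation.dataset import SimulationConfig
from l5kit.simulation.unroll import ClosedLoopSimulator

# Both the ego vehicle and the surrounding agents are rolled out from
# ML policies instead of replaying the recorded log.
sim_cfg = SimulationConfig(
    use_ego_gt=False,        # ego controlled by ego_model, not the log
    use_agents_gt=False,     # agents controlled by agents_model
    disable_new_agents=True,
    distance_th_far=500,
    distance_th_close=50,
    num_simulation_steps=50,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# eval_dataset, ego_model, and agents_model are assumed to exist, built
# along the lines of the loading snippet above plus a training run.
sim_loop = ClosedLoopSimulator(sim_cfg, eval_dataset, device,
                               model_ego=ego_model, model_agents=agents_model)
sim_outs = sim_loop.unroll([0, 1, 2])  # unroll a few scenes by index
```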
Motion Planning
Motion planning is a core aspect of autonomous driving: the motion planner is the algorithm that decides where the vehicle should go. The dataset provides two ways to evaluate motion planning algorithms, sketched in code after this list:
- Open-loop: In each frame, the model’s predictions are evaluated against the annotated ground truth.
- Closed-loop: The model’s predictions are unrolled, i.e., the AV is moved to its latest prediction before making the next one, and the final state is evaluated.
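The difference between the two evaluation modes is easiest to see as pseudocode. Here is a minimal sketch with hypothetical planner and sim interfaces (none of these names come from the l5kit API):
```python
import numpy as np

def open_loop_eval(planner, frames):
    """Open-loop: each prediction is scored against the logged ground truth;
    the AV never leaves the recorded trajectory, so errors don't compound."""
    errors = []
    for frame in frames:
        pred_xy = planner.predict(frame["observation"])  # hypothetical interface
        errors.append(np.linalg.norm(pred_xy - frame["gt_xy"]))
    return float(np.mean(errors))

def closed_loop_eval(planner, sim, num_steps):
    """Closed-loop: the AV is moved to its own latest prediction before the
    next prediction is made, so errors compound over the rollout."""
    state = sim.reset()
    for _ in range(num_steps):
        pred_xy = planner.predict(state)
        state = sim.step(pred_xy)  # the world advances from the predicted pose
    return sim.score(state)        # only the final state is evaluated
```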
Check out this report for further details related to the dataset, toolkit, and the motion prediction task.
Recommended Reading
The PandaSet Dataset
PandaSet is a high-quality autonomous driving dataset that boasts the largest number of annotated objects among 3D scene understanding datasets.
The Berkeley Deep Drive (BDD100K) Dataset
The BDD100K dataset is the largest and most diverse driving video dataset with 100,000 videos annotated for 10 different perception tasks in autonomous driving.
Object Detection for Autonomous Vehicles (A Step-by-Step Guide)
Digging into object detection and perception for autonomous vehicles using YOLOv5 and Weights & Biases
The Semantic KITTI Dataset
SemanticKITTI is a large semantic segmentation and scene understanding dataset developed for LiDAR-based autonomous driving. But what is it, and what is it for?