A-sh0ts's workspace
Runs: 20 (20 visualized)
Columns: Name, State, Notes, User, Tags, Created, Runtime, Sweep, format_version; model_params: future_num_frames, history_num_frames, model_architecture, render_ego_history, step_time; raster_params: dataset_meta_key, disable_traffic_light_faces, ego_center, filter_agents_threshold, map_type, pixel_size, raster_size, satellite_map_key, semantic_map_key, set_origin_to_bottom; val_data_loader: batch_size, key, num_workers, shuffle
Run 1: Finished · a-sh0ts · tags: data, upload · 7s · no config logged
Notes: This run uses the ./setup_data.sh script to
(1) download a sample of scene data,
(2) download a semantic map to overlay on the scene data,
(3) download an aerial map for a more detailed view of the scene data, and
(4) fetch the collection of configurations used to run the experiments associated with this data.
Read more at https://woven-planet.github.io/l5kit/
Run 2: Finished · a-sh0ts · tags: data, download, visualize · 5m 50s
Notes:
(1) We iterate over the frames to get a scatter plot of the AV locations.
(2) Agent-type distribution.
We will use two classes from the dataset package for this example. Both can be iterated and return multi-channel images from the rasterizer, along with future-trajectory offsets and other information:
* EgoDataset: iterates over the AV annotations
* AgentDataset: iterates over the other agents' annotations
(3) Visualizing the AV.
We can convert the dataset's target_positions (displacements in meters, in agent coordinates) into pixel coordinates in image space and call our utility function draw_trajectory (note that you can use this function for predicted trajectories as well).
(a) Bounding boxes on the semantic map
(b) Bounding boxes on the aerial map
(4) Visualizing the Agent.
We can simply replace the EgoDataset with an AgentDataset; now we iterate over agents rather than the AV.
(a) Bounding boxes on the semantic map
(b) Bounding boxes on the aerial map
(5) Visualizing the Scene.
Both EgoDataset and AgentDataset provide two methods for getting interesting indices:
get_frame_indices returns the indices for a given frame. For the EgoDataset this matches a single observation, while the AgentDataset may return more than one index, since a given frame can contain more than one valid agent.
get_scene_indices returns the indices for a given scene. For both datasets this may return more than one index.
(a) Matplotlib
(i) Bounding boxes on the semantic map
(ii) Bounding boxes on the aerial map
(b) Bokeh
Config: format_version=4; model_params: future_num_frames=50, history_num_frames=0, model_architecture=resnet50, render_ego_history=true, step_time=0.1; raster_params: dataset_meta_key=meta.json, disable_traffic_light_faces=false, ego_center=[0.25,0.5], filter_agents_threshold=0.5, map_type=py_semantic, pixel_size=[0.5,0.5], raster_size=[224,224], satellite_map_key=aerial_map/aerial_map.png, semantic_map_key=semantic_map/semantic_map.pb, set_origin_to_bottom=true; val_data_loader: batch_size=12, key=scenes/sample.zarr, num_workers=16, shuffle=false
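The meters-to-pixels conversion described in step (3) can be sketched with plain numpy. This is a minimal stand-in for l5kit's transform_points utility; the 3x3 raster_from_agent matrix and the sample points are made up for illustration, chosen to match the logged config (pixel_size 0.5 m/px, so 1 m = 2 px, and ego_center [0.25, 0.5] of a 224x224 raster, i.e. pixel (56, 112)):

```python
import numpy as np

def transform_points(points: np.ndarray, tf: np.ndarray) -> np.ndarray:
    """Apply a 3x3 homogeneous transform to an (N, 2) array of 2D points."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # (N, 3)
    return (homo @ tf.T)[:, :2]

# Hypothetical raster_from_agent matrix: scale meters to pixels (x2) and
# translate the agent's origin to its pixel location in the raster.
raster_from_agent = np.array([
    [2.0, 0.0, 56.0],
    [0.0, 2.0, 112.0],
    [0.0, 0.0, 1.0],
])

# target_positions: displacements in meters, in agent coordinates
target_positions = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 1.0]])

pixels = transform_points(target_positions, raster_from_agent)
print(pixels)  # rows: (56, 112), (58, 112), (60, 114)
```

The resulting pixel coordinates are what a drawing helper such as draw_trajectory would consume; the same conversion applies unchanged to predicted trajectories.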
Runs 3-10: a-sh0ts · tags: data, download, visualize · notes and config identical to the visualize run above
Run 3: Crashed · 5m
Run 4: Finished · 5m 31s
Run 5: Finished · 13m 13s
Run 6: Finished · 1h 14m 52s
Run 7: Finished · 3m 40s
Run 8: Finished · 11m 40s
Run 9: Finished · 8m 54s
Run 10: Finished · 10m
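Collected into one place, the flat keys that every visualize run logs correspond to an l5kit-style YAML config along these lines (a reconstruction from the logged values, not the actual file shipped with the data):

```yaml
format_version: 4
model_params:
  model_architecture: resnet50
  history_num_frames: 0
  future_num_frames: 50
  step_time: 0.1
  render_ego_history: true
raster_params:
  raster_size: [224, 224]
  pixel_size: [0.5, 0.5]
  ego_center: [0.25, 0.5]
  map_type: py_semantic
  satellite_map_key: aerial_map/aerial_map.png
  semantic_map_key: semantic_map/semantic_map.pb
  dataset_meta_key: meta.json
  filter_agents_threshold: 0.5
  disable_traffic_light_faces: false
  set_origin_to_bottom: true
val_data_loader:
  key: scenes/sample.zarr
  batch_size: 12
  shuffle: false
  num_workers: 16
```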
Runs 11-12: a-sh0ts · tags: data, upload · notes identical to the setup_data.sh run above · no config logged
Run 11: Finished · 7s
Run 12: Finished · 5s
Run 13: Finished · a-sh0ts · tags: data, download, visualize · 9m 32s · notes and config identical to the visualize runs above
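The get_frame_indices / get_scene_indices behavior described in step (5) of the visualize notes comes down to interval arithmetic over contiguous runs of frames and agents. A self-contained sketch of the idea (the scene and agent counts are invented, and this is not l5kit's actual implementation):

```python
import numpy as np

# Each scene owns a contiguous run of frames; each frame owns a run of agents.
# Hypothetical layout: 3 scenes with 2, 3, and 1 frames respectively.
frames_per_scene = np.array([2, 3, 1])
scene_frame_ends = np.cumsum(frames_per_scene)            # [2, 5, 6]
scene_frame_starts = scene_frame_ends - frames_per_scene  # [0, 2, 5]

# Valid agents per frame (AgentDataset indexes agents, EgoDataset frames).
agents_per_frame = np.array([1, 3, 2, 0, 2, 1])
frame_agent_ends = np.cumsum(agents_per_frame)
frame_agent_starts = frame_agent_ends - agents_per_frame

def get_scene_indices_ego(scene_idx: int) -> np.ndarray:
    """EgoDataset-style lookup: one index per frame of the scene."""
    return np.arange(scene_frame_starts[scene_idx], scene_frame_ends[scene_idx])

def get_frame_indices_agents(frame_idx: int) -> np.ndarray:
    """AgentDataset-style lookup: one index per valid agent in the frame."""
    return np.arange(frame_agent_starts[frame_idx], frame_agent_ends[frame_idx])

print(get_scene_indices_ego(1))     # [2 3 4]: the three frames of scene 1
print(get_frame_indices_agents(1))  # [1 2 3]: three valid agents in frame 1
```

This mirrors why an EgoDataset frame lookup yields a single observation while an AgentDataset frame lookup can yield several (or none, when no agent passes filter_agents_threshold).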
Run 14: Finished · a-sh0ts · tags: data, upload · 42s · notes identical to the setup_data.sh runs above · no config logged
Run 15: Finished · megatruong · tags: data, upload · 22s · notes identical to the setup_data.sh runs above · no config logged
Run 16: Finished · megatruong · tags: data, download, visualize · 19s · notes and config identical to the visualize runs above
Run 17: Finished · megatruong · tags: code, upload · 4s · no config logged
Notes: This run logs all the IPython notebooks in the current folder for org-wide bookkeeping.
Run 18: Finished · a-sh0ts · tags: code, upload · 11s · notes identical to Run 17 · no config logged
Run 19: Finished · a-sh0ts · tags: data, download, visualize · 10m 32s · notes and config identical to the visualize runs above
Run 20: Finished · a-sh0ts · tags: data, upload · 2m 25s · notes identical to the setup_data.sh runs above · no config logged
Showing runs 1-20 of 20.