We release the WayveScenes101 dataset to advance research in novel view synthesis and scene reconstruction for autonomous driving. The dataset features high-resolution images with corresponding camera poses, covering a wide range of locations, traffic conditions, and environmental conditions.
Unlike most existing datasets, WayveScenes101 specifically targets novel view synthesis for driving scenarios. A high frame rate of 10 frames per second for each camera allows for more accurate scene reconstruction, in particular for dynamic scenes. We provide an evaluation protocol with a held-out evaluation camera to specifically measure off-axis reconstruction quality, which is crucial for estimating the generalisation capabilities of novel view synthesis models. Furthermore, we provide metadata for each scene to allow a detailed breakdown of model performance in specific scenarios, such as nighttime or rain.
Key features:

- 101 highly diverse driving scenarios of 20 seconds each
- 101,000 images (101 scenes x 5 cameras x 20 seconds x 10 frames per second)
- Scene recording locations: US and UK
- 5 time-synchronised cameras
- Separate held-out evaluation camera for measuring off-axis reconstruction quality
- Scene-level attributes for fine-grained model evaluation
- Simple integration with the NerfStudio framework (see the sketch below)
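As a quick illustration of the NerfStudio integration, here is a minimal sketch that loads a scene through nerfstudio's COLMAP dataparser. It assumes nerfstudio is installed and that the scene follows the standard COLMAP layout; the scene path is hypothetical, and module paths or config fields may differ between nerfstudio versions:

```python
# Minimal sketch: load a WayveScenes101 scene via nerfstudio's COLMAP
# dataparser. The scene path is hypothetical; module paths and config
# fields may vary between nerfstudio versions.
from pathlib import Path

from nerfstudio.data.dataparsers.colmap_dataparser import ColmapDataParserConfig

scene_dir = Path("/path/to/wayve_scenes_101/scene_001")  # hypothetical path

parser = ColmapDataParserConfig(data=scene_dir).setup()
outputs = parser.get_dataparser_outputs(split="train")

print(f"{len(outputs.image_filenames)} training images")
print("first camera-to-world:\n", outputs.cameras.camera_to_worlds[0])
```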
Our scenes feature a wide range of conditions, including:
- Weather: Sunny, cloudy, overcast, rain, fog
- Road Types: Highway, urban, residential, rural, roadworks
- Time of Day: Daytime, low sun, nighttime
- Dynamic Agents: Vehicles, pedestrians, cyclists, animals
- Dynamic Illumination: Traffic lights, exposure changes, lens flare, reflections
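These conditions are exposed as scene-level attributes, which is what enables the per-scenario performance breakdowns mentioned above. As a minimal sketch of such a breakdown, assuming a hypothetical `scene_metadata.json` with one attribute record per scene (see the released metadata for the actual file name and schema):

```python
# Sketch: slice the scene list by a condition attribute. The metadata
# file name and schema shown here are hypothetical.
import json
from pathlib import Path

root = Path("/path/to/wayve_scenes_101")

# e.g. {"scene_001": {"weather": "rain", "time_of_day": "nighttime"}, ...}
metadata = json.loads((root / "scene_metadata.json").read_text())

night_scenes = [scene for scene, attrs in metadata.items()
                if attrs.get("time_of_day") == "nighttime"]
print(f"{len(night_scenes)} nighttime scenes:", night_scenes)
```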
The WayveScenes101 dataset is available for download on Google Drive. You can download all of the provided scenes or only a subset.
To download the full dataset in one go, run the following command:

```bash
bash download.sh /path/to/wayve_scenes_101
```
For instructions on how to view and inspect the dataset, please follow the tutorial in `tutorial/dataset_usage.ipynb`.
Note: This project is currently supported only on Linux operating systems.
This guide assumes that you have Anaconda or Miniconda already installed on your system. If not, please install Anaconda or Miniconda from the official installation pages first.
To set up your environment, follow these steps:
1. Clone the repository

First, clone the repository to your local machine using Git:

```bash
git clone https://github.com/wayveai/wayve_scenes.git
cd wayve_scenes
```
2. Prepare the Conda environment

Create a new Conda environment (defaults to Python 3.10). This step may take a while.

```bash
conda env create -f environment.yml
```

Activate the newly created environment:

```bash
conda activate wayve_scenes_env
```

Finally, install pytorch3d:

```bash
pip install git+https://github.com/facebookresearch/pytorch3d.git
```
3. Install the `wayve_scenes` package

Once your environment is created and activated, install the package by running:

```bash
cd src
python -m pip install -e .
```
This command installs the package in editable mode (`-e`), which means changes to the source files immediately affect the installed package without needing a reinstall.
After installation, you can verify that the package has been installed correctly by running:

```bash
python -c "import wayve_scenes;"
```

If no output is printed, the `wayve_scenes` package can be imported and you are good to go!
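If you also want to confirm where the editable install resolves to, a standard Python check that assumes nothing about the package API:

```bash
python -c "import wayve_scenes; print(wayve_scenes.__file__)"
```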
For references on how to evaluate novel view synthesis models trained on scenes from our dataset, please follow the tutorial in `tutorial/evaluate.ipynb`.
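At its core, the protocol compares images rendered from the held-out evaluation camera pose against the captured ground-truth frames. Below is a minimal sketch of one standard image metric (PSNR) used in such comparisons; the file paths are hypothetical and the authoritative evaluation code lives in the tutorial:

```python
# Sketch: PSNR between a rendered view and the held-out ground truth.
# File paths are hypothetical; see tutorial/evaluate.ipynb for the
# official evaluation protocol.
import numpy as np
from PIL import Image

def psnr(pred: np.ndarray, target: np.ndarray) -> float:
    """Peak signal-to-noise ratio for images scaled to [0, 1]."""
    mse = np.mean((pred - target) ** 2)
    return float("inf") if mse == 0 else -10.0 * np.log10(mse)

render = np.asarray(Image.open("renders/eval_cam_0001.png"), dtype=np.float32) / 255.0
gt = np.asarray(Image.open("images/eval_cam_0001.png"), dtype=np.float32) / 255.0

print(f"PSNR: {psnr(render, gt):.2f} dB")
```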
Our recording rig has 5 different time-synchronised cameras. The camera names are:
- `front-forward`
- `left-forward`
- `right-forward`
- `left-backward`
- `right-backward`
The camera arrangement on the vehicle is illustrated below.
- Shutter: Rolling Shutter
- Distortion model: Fisheye Distortion
- Resolution: 1920 x 1080 pixels
- Temporal resolution: 10 Hz
- Calibration: We provide both extrinsic and intrinsic camera calibration for all scenes as part of the COLMAP files. We also provide metric camera-to-camera distances in `data/baselines.json`.
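As a minimal sketch of reading this calibration, assuming `pycolmap` is installed; the scene path and COLMAP folder name are illustrative, and attribute names can vary across pycolmap versions:

```python
import json

import pycolmap

# Load a per-scene COLMAP reconstruction (path is illustrative).
recon = pycolmap.Reconstruction("/path/to/wayve_scenes_101/scene_001/colmap")
for cam_id, cam in recon.cameras.items():
    # Intrinsics for each of the five cameras.
    print(cam_id, cam.model, cam.width, cam.height, cam.params)

# Metric camera-to-camera distances shipped with the dataset.
with open("data/baselines.json") as f:
    baselines = json.load(f)
print(baselines)
```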
We provide binary masks for all images in the dataset, marking the regions that were blurred to anonymise license plates and faces. These masks also cover regions where the ego-vehicle is visible in the camera images.
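A minimal sketch of applying such a mask so that anonymised and ego-vehicle pixels are excluded from training losses or evaluation metrics; the file paths and mask naming convention shown here are illustrative:

```python
import numpy as np
from PIL import Image

# Paths are illustrative; see tutorial/dataset_usage.ipynb for the
# actual directory layout and naming.
image = np.asarray(Image.open("images/front-forward/000123.jpeg"), dtype=np.float32) / 255.0
mask = np.asarray(Image.open("masks/front-forward/000123.png")) > 0  # True = blurred or ego-vehicle

valid = ~mask                             # pixels that are safe to use
masked_image = image * valid[..., None]   # zero out masked regions
print(f"{valid.mean():.1%} of pixels are usable for training/evaluation")
```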
If you encounter issues or need support, please report them as an issue in this repository, or feel free to submit a pull request.
We're planning to host a public challenge with new scenes and an evaluation server soon. Stay tuned!
We release the first version of the dataset.
If you use WayveScenes101 in your research, please cite:

```bibtex
@article{zurn2024wayvescenes101,
  title={WayveScenes101: A Dataset and Benchmark for Novel View Synthesis in Autonomous Driving},
  author={Z{\"u}rn, Jannik and Gladkov, Paul and Dudas, Sof{\'\i}a and Cotter, Fergal and Toteva, Sofi and Shotton, Jamie and Simaiaki, Vasiliki and Mohan, Nikhil},
  journal={arXiv preprint arXiv:2407.08280},
  year={2024}
}
```
The following table is necessary for this dataset to be indexed by search engines such as Google Dataset Search.
| property | value |
| --- | --- |
| name | WayveScenes101 |
| alternateName | WayveScenes101 |
| url | https://wayve.ai/science/wayvescenes101/ |
| description | An autonomous driving dataset made for novel view synthesis. |
| provider | Wayve |
| license | |