Chapter 03: Introduction to Important Datasets and Benchmarks


nuScenes: A Multimodal Dataset for Autonomous Driving

https://www.youtube.com/watch?v=C6KbbndonGg

https://www.nuscenes.org/nuscenes#download

Visualization

Occ3D datasets

  • Occ3D
  • OpenOccupancy
  • SurroundOcc

3D Occupancy Prediction Challenge at CVPR 2023 (Server remains active)

Please refer to this link. If you wish to add new results to or modify existing results on the leaderboard, please email contact@opendrivelab.com

Top 10 at a glance as of June 10, 2023.

Summary of CVPR 2023 Occupancy Prediction Challenge Solutions

Task Definition

Given images from multiple cameras, the goal is to predict the current occupancy state and semantic class of each voxel in the scene. Each voxel is predicted to be either free or occupied; if a voxel is occupied, its semantic class must also be predicted. In addition, a binary observed/unobserved mask is provided for each frame. An unobserved voxel is one that is invisible in the current camera observation, and such voxels are ignored during evaluation.
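As a rough sketch of how the visibility mask enters the evaluation, the snippet below filters predicted and ground-truth voxel grids by a camera-visibility mask before any metric is computed. The grid shape (200x200x16) and class count (17 semantic classes plus one free class) follow the Occ3D-nuScenes setup; the array names and random data are purely illustrative, not the official evaluation code.

```python
import numpy as np

# Hypothetical 200x200x16 voxel grid with 18 classes
# (17 semantic classes + one "free" class), as in Occ3D-nuScenes.
rng = np.random.default_rng(0)
pred = rng.integers(0, 18, size=(200, 200, 16))   # predicted class per voxel
gt = rng.integers(0, 18, size=(200, 200, 16))     # ground-truth class per voxel
mask_camera = rng.random((200, 200, 16)) > 0.5    # True = observed by the cameras

# Only voxels visible from the current camera views are scored;
# unobserved voxels are dropped before computing any metric.
pred_eval = pred[mask_camera]
gt_eval = gt[mask_camera]
```

Boolean indexing flattens the grid to a 1-D array containing only the observed voxels, which is then passed to the metric.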

Rules for Occupancy Challenge

  • We allow using annotations provided in the nuScenes dataset, and during inference, the input modality of the model should be camera only.
  • No future frame is allowed during inference.
  • To verify compliance, participants will be asked to submit a technical report to the challenge committee, and award winners will be asked to give a public talk about their method.
  • Every submission provides method information. We encourage publishing code, but do not make it a requirement.
  • Each team can have at most one account on the evaluation server. Users that create multiple accounts to circumvent the rules will be excluded from the challenge.
  • Each team can submit at most three results during the challenge.
  • Faulty submissions that return an error on Eval AI do not count towards the submission limit.
  • Any attempt to circumvent these rules will result in a permanent ban of the team or company from the challenge.


Evaluation Metrics

Leaderboard ranking for this challenge is based on the mean intersection-over-union (mIoU) over all classes.

mIoU

Let $C$ be the number of classes.

$$ mIoU=\frac{1}{C}\displaystyle \sum_{c=1}^{C}\frac{TP_c}{TP_c+FP_c+FN_c}, $$

where $TP_c$, $FP_c$, and $FN_c$ are the numbers of true positive, false positive, and false negative predictions for class $c$.
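The formula above can be sketched directly from confusion counts. This is a minimal illustration, not the official evaluation script; in particular, how classes absent from both prediction and ground truth are handled is a convention that may differ from the challenge's implementation (here such classes are simply skipped).

```python
import numpy as np

def miou(pred, gt, num_classes):
    """Mean IoU: (1/C) * sum_c TP_c / (TP_c + FP_c + FN_c).

    pred, gt: integer class labels per voxel (any shape, elementwise aligned).
    Classes absent from both pred and gt are skipped so an empty class
    does not drag the mean to zero (an assumed convention).
    """
    ious = []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        denom = tp + fp + fn
        if denom > 0:
            ious.append(tp / denom)
    return float(np.mean(ious))

# Toy check: a perfect prediction yields mIoU = 1.0
gt = np.array([0, 0, 1, 1, 2])
print(miou(gt, gt, num_classes=3))  # → 1.0
```

In practice the inputs would be the masked 1-D arrays of observed voxels described in the task definition, so unobserved voxels never contribute to $TP_c$, $FP_c$, or $FN_c$.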


Data

Figure 1. Semantic labels (left), visibility masks in the LiDAR (middle) and the camera (right) view. Grey voxels are unobserved in LiDAR view and white voxels are observed in the accumulative LiDAR view but unobserved in the current camera view.


Occupancy Datasets

  • Occ3D: A Large-Scale 3D Occupancy Prediction Benchmark for Autonomous Driving [paper] [Github]
  • OpenOccupancy: A Large Scale Benchmark for Surrounding Semantic Occupancy Perception [paper] [Github]
  • SurroundOcc [paper] [Github]
  • Occupancy Dataset for nuScenes [Github]
  • SSCBench: A Large-Scale 3D Semantic Scene Completion Benchmark for Autonomous Driving [paper] [Github]
  • Scene as Occupancy [paper] [Github]