
Model Zoo

Common settings

  • We use distributed training.
  • For fair comparison with other codebases, we report the GPU memory as the maximum value of torch.cuda.max_memory_allocated() for all 8 GPUs. Note that this value is usually less than what nvidia-smi shows.
  • We report the inference time as the total time of network forwarding and post-processing, excluding the data loading time. Results are obtained with the script benchmark.py which computes the average time on 2000 images.
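The timing protocol above (forward plus post-processing, averaged over many images, data loading excluded) can be sketched as below. This is a minimal illustration, not the actual benchmark.py: the model here is a hypothetical stand-in, and the warm-up count is an assumption.

```python
import time

def average_inference_time(model, images, num_warmup=5):
    """Average per-image time of network forwarding + post-processing.

    Data loading is excluded: `images` are assumed to be pre-loaded,
    matching the protocol described above. A few warm-up iterations
    are run first and excluded from the measurement.
    """
    for img in images[:num_warmup]:
        model(img)  # warm-up, not timed

    start = time.perf_counter()
    for img in images[num_warmup:]:
        model(img)  # forward + post-processing, timed
    elapsed = time.perf_counter() - start

    return elapsed / max(len(images) - num_warmup, 1)

# Hypothetical stand-in for a detector's forward + post-processing step.
dummy_model = lambda img: sum(img)
images = [[i, i + 1, i + 2] for i in range(100)]
avg_time = average_inference_time(dummy_model, images)
```

In the real benchmark the average is taken over 2000 images on GPU; the structure (warm-up, timed loop, division by the number of timed images) is the same.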

Baselines

SECOND

Please refer to SECOND for details. We provide SECOND baselines on KITTI and Waymo datasets.

PointPillars

Please refer to PointPillars for details. We provide PointPillars baselines on the KITTI, nuScenes, Lyft, and Waymo datasets.

Part-A2

Please refer to Part-A2 for details.

VoteNet

Please refer to VoteNet for details. We provide VoteNet baselines on ScanNet and SUNRGBD datasets.

Dynamic Voxelization

Please refer to Dynamic Voxelization for details.

MVXNet

Please refer to MVXNet for details.

RegNetX

Please refer to RegNet for details. We currently provide PointPillars baselines with RegNetX backbones on the nuScenes and Lyft datasets.

nuImages

We also support baseline models on the nuImages dataset. Please refer to nuImages for details. We currently report Mask R-CNN, Cascade Mask R-CNN, and HTC results.

H3DNet

Please refer to H3DNet for details.

3DSSD

Please refer to 3DSSD for details.

CenterPoint

Please refer to CenterPoint for details.

SSN

Please refer to SSN for details. We currently provide PointPillars with the shape-aware grouping heads used in SSN on the nuScenes and Lyft datasets.