
Super Fast and Accurate 3D Object Detection based on 3D LiDAR Point Clouds


Features

  • Super fast and accurate 3D object detection based on LiDAR
  • Fast training, fast inference
  • An Anchor-free approach
  • No Non-Max-Suppression
  • Support distributed data parallel training
  • Release pre-trained models

The technical details are described here

Update 2020.09.06: Added ROS source code. This great work was done by @AhmedARadwan; the implementation is here

Demonstration (on a single GTX 1080Ti)

[Demo GIF]

Youtube link

2. Getting Started

2.1. Requirements

The instructions for setting up a virtual environment are here.

git clone https://github.com/maudzung/SFA3D.git SFA3D
cd SFA3D/
pip install -r requirements.txt
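
As an optional sanity check (not part of the repo), you can verify that the installed PyTorch build can see your GPU before moving on:

import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))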

2.2. Data Preparation

Download the 3D KITTI detection dataset from here.

The downloaded data includes:

  • Velodyne point clouds (29 GB)
  • Training labels of object data set (5 MB)
  • Camera calibration matrices of object data set (16 MB)
  • Left color images of object data set (12 GB) (for visualization purposes only)

Please make sure that the source code and dataset directories are organized as shown in the Folder structure section below.
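
As a rough illustration, a small script like the following (not part of the repo; the paths mirror the Folder structure section at the end of this README) can confirm the layout before training:

# Hypothetical helper: checks the expected KITTI paths relative to the repo root.
from pathlib import Path

ROOT = Path(".")  # adjust to wherever you cloned SFA3D
expected = [
    "dataset/kitti/ImageSets/train.txt",
    "dataset/kitti/ImageSets/val.txt",
    "dataset/kitti/training/velodyne",
    "dataset/kitti/training/label_2",
    "dataset/kitti/training/calib",
    "dataset/kitti/training/image_2",
]
for rel in expected:
    print(("ok      " if (ROOT / rel).exists() else "MISSING ") + rel)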

2.3. How to run

2.3.1. Visualize the dataset

To visualize 3D point clouds with their 3D boxes, run:

cd sfa/data_process/
python kitti_dataset.py
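
For reference, each KITTI Velodyne scan is a flat binary file of float32 values, four per point (x, y, z, reflectance). A minimal loader, independent of the repo's own utilities:

import numpy as np

def load_velodyne_scan(bin_path):
    """Read a KITTI .bin scan into an (N, 4) array of x, y, z, reflectance."""
    return np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)

points = load_velodyne_scan("dataset/kitti/training/velodyne/000000.bin")
print(points.shape)  # (N, 4)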

2.3.2. Inference

The pre-trained model is included in this repo.

python test.py --gpu_idx 0 --peak_thresh 0.2
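
The --peak_thresh flag filters detections by center-heatmap confidence. Since the model follows the CenterNet ("Objects as Points") recipe listed in the References, NMS is replaced by keeping only local maxima of the heatmap; a sketch of that idea (not the repo's exact code):

import torch
import torch.nn.functional as F

def extract_peaks(heatmap, peak_thresh=0.2):
    """Keep local maxima of a (B, C, H, W) heatmap that exceed peak_thresh.

    A 3x3 max-pool acts as the NMS replacement: a cell survives only if it
    equals the maximum of its own neighborhood.
    """
    pooled = F.max_pool2d(heatmap, kernel_size=3, stride=1, padding=1)
    peaks = heatmap * (pooled == heatmap).float()
    return peaks * (peaks >= peak_thresh).float()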

2.3.3. Making demonstration

python demo_2_sides.py --gpu_idx 0 --peak_thresh 0.2

The demonstration data will be downloaded automatically when you run the above command.

2.3.4. Training

2.3.4.1. Single machine, single gpu
python train.py --gpu_idx 0
2.3.4.2. Distributed Data Parallel Training
  • Single machine (node), multiple GPUs
python train.py --multiprocessing-distributed --world-size 1 --rank 0 --batch_size 64 --num_workers 8
  • Two machines (two nodes), multiple GPUs

    • First machine
    python train.py --dist-url 'tcp://IP_OF_NODE1:FREEPORT' --multiprocessing-distributed --world-size 2 --rank 0 --batch_size 64 --num_workers 8
    
    • Second machine
    python train.py --dist-url 'tcp://IP_OF_NODE2:FREEPORT' --multiprocessing-distributed --world-size 2 --rank 1 --batch_size 64 --num_workers 8
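
Here --world-size counts the nodes and --rank identifies each node; with --multiprocessing-distributed, one process is spawned per GPU. Inside each worker, the setup boils down to the standard torch.distributed initialization, roughly like this sketch (not the repo's exact code):

import torch
import torch.distributed as dist

def setup_worker(gpu, ngpus_per_node, args):
    # Global rank = node rank * GPUs per node + local GPU index.
    rank = args.rank * ngpus_per_node + gpu
    dist.init_process_group(
        backend="nccl",
        init_method=args.dist_url,  # e.g. 'tcp://IP_OF_NODE1:FREEPORT'
        world_size=args.world_size * ngpus_per_node,
        rank=rank,
    )
    torch.cuda.set_device(gpu)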
    

2.3.5. Evaluation

  • Evaluate on the KITTI val set:
python train.py --evaluate --gpu_idx 0 --pretrained_path=../checkpoints/fpn_resnet_18/fpn_resnet_18_epoch_300.pth
  • Evaluation results (each AP line reports the easy, moderate, and hard KITTI difficulty levels):
Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:96.57, 89.17, 89.41
bev  AP:97.52, 89.62, 89.78
3d   AP:88.09, 87.86, 88.09
aos  AP:60.28, 55.63, 55.04
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:96.57, 89.17, 89.41
bev  AP:98.03, 89.94, 90.09
3d   AP:98.01, 89.94, 90.09
aos  AP:60.28, 55.63, 55.04
Pedestrian AP(Average Precision)@0.50, 0.50, 0.50:
bbox AP:65.38, 64.95, 63.96
bev  AP:68.14, 69.36, 65.41
3d   AP:66.44, 62.36, 62.77
aos  AP:32.13, 30.95, 30.21
Pedestrian AP(Average Precision)@0.50, 0.25, 0.25:
bbox AP:65.38, 64.95, 63.96
bev  AP:88.43, 88.61, 88.59
3d   AP:88.29, 88.45, 88.48
aos  AP:32.13, 30.95, 30.21
Cyclist AP(Average Precision)@0.50, 0.50, 0.50:
bbox AP:89.62, 87.64, 87.70
bev  AP:82.11, 75.41, 75.64
3d   AP:80.09, 74.31, 74.45
aos  AP:54.37, 52.01, 51.44
Cyclist AP(Average Precision)@0.50, 0.25, 0.25:
bbox AP:89.62, 87.64, 87.70
bev  AP:96.04, 88.67, 88.78
3d   AP:96.04, 88.67, 88.78
aos  AP:54.37, 52.01, 51.44

The original evaluation code is from here.

TensorBoard

  • To track training progress, go to the logs/ folder and launch TensorBoard:
cd logs/<saved_fn>/tensorboard/
tensorboard --logdir=./
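
The event files in that folder are ordinary TensorBoard logs written with torch.utils.tensorboard (the repo's logging utilities live under sfa/utils/). For reference, a minimal example of producing such logs:

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="logs/my_run/tensorboard")  # hypothetical run name
for step in range(100):
    writer.add_scalar("train/loss", 1.0 / (step + 1), step)  # dummy values
writer.close()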

Contact

If you think this work is useful, please give me a star!
If you find any errors or have any suggestions, please contact me (Email: nguyenmaudung93.kstn@gmail.com).
Thank you!

Citation

@misc{Super-Fast-Accurate-3D-Object-Detection-PyTorch,
  author =       {Nguyen Mau Dung},
  title =        {{Super-Fast-Accurate-3D-Object-Detection-PyTorch}},
  howpublished = {\url{https://github.com/maudzung/Super-Fast-Accurate-3D-Object-Detection}},
  year =         {2020}
}

References

[1] CenterNet: Objects as Points paper, PyTorch Implementation
[2] RTM3D: PyTorch Implementation
[3] Libra_R-CNN: PyTorch Implementation

The YOLO-based models with the same BEV maps input:
[4] Complex-YOLO: v4, v3, v2

3D LiDAR Point pre-processing:
[5] VoxelNet: PyTorch Implementation

Folder structure

${ROOT}
├── checkpoints/
│   └── fpn_resnet_18/
│       └── fpn_resnet_18_epoch_300.pth
├── dataset/
│   └── kitti/
│       ├── ImageSets/
│       │   ├── test.txt
│       │   ├── train.txt
│       │   └── val.txt
│       ├── training/
│       │   ├── image_2/ (left color camera)
│       │   ├── calib/
│       │   ├── label_2/
│       │   └── velodyne/
│       ├── testing/
│       │   ├── image_2/ (left color camera)
│       │   ├── calib/
│       │   └── velodyne/
│       └── classes_names.txt
├── sfa/
│   ├── config/
│   │   ├── train_config.py
│   │   └── kitti_config.py
│   ├── data_process/
│   │   ├── kitti_dataloader.py
│   │   ├── kitti_dataset.py
│   │   └── kitti_data_utils.py
│   ├── models/
│   │   ├── fpn_resnet.py
│   │   ├── resnet.py
│   │   └── model_utils.py
│   ├── utils/
│   │   ├── demo_utils.py
│   │   ├── evaluation_utils.py
│   │   ├── logger.py
│   │   ├── misc.py
│   │   ├── torch_utils.py
│   │   ├── train_utils.py
│   │   └── visualization_utils.py
│   ├── demo_2_sides.py
│   ├── demo_front.py
│   ├── test.py
│   └── train.py
├── README.md
└── requirements.txt
