
Greedy Grid Search: A 3D Registration Baseline



⚠️⚠️⚠️ An improved version of the baseline has been introduced here. Please use the new baseline instead of this one. ⚠️⚠️⚠️


This repository contains the code for the paper Challenging the Universal Representation of Deep Models for 3D Point Cloud Registration, presented at the BMVC 2022 Workshop on Universal Representations for Computer Vision (URCV 22).

TL;DR

We analyze the problem of 3D registration and highlight two main issues:

  1. Learning-based methods struggle to generalize to unseen data
  2. The current 3D registration benchmark datasets lack data variability

We address these problems by:

  1. Creating a simple baseline model that outperforms most state-of-the-art learning-based methods
  2. Creating a novel 3D registration benchmark FPv1 (called FAUST-partial in the paper) based on the FAUST dataset

Data

3DMatch

Download the testing examples from here under the title Geometric Registration Benchmark --> Downloads. Eight scenes are used for testing, giving 16 folders in total: two per scene, named {folder_name} and {folder_name}-evaluation:

7-scenes-redkitchen
7-scenes-redkitchen-evaluation
sun3d-home_at-home_at_scan1_2013_jan_1
sun3d-home_at-home_at_scan1_2013_jan_1-evaluation
sun3d-home_md-home_md_scan9_2012_sep_30
sun3d-home_md-home_md_scan9_2012_sep_30-evaluation
sun3d-hotel_uc-scan3
sun3d-hotel_uc-scan3-evaluation
sun3d-hotel_umd-maryland_hotel1
sun3d-hotel_umd-maryland_hotel1-evaluation
sun3d-hotel_umd-maryland_hotel3
sun3d-hotel_umd-maryland_hotel3-evaluation
sun3d-mit_76_studyroom-76-1studyroom2
sun3d-mit_76_studyroom-76-1studyroom2-evaluation
sun3d-mit_lab_hj-lab_hj_tea_nov_2_2012_scan1_erika
sun3d-mit_lab_hj-lab_hj_tea_nov_2_2012_scan1_erika-evaluation

We use the overlaps from PREDATOR [1] (found in data/overlaps) to filter the data, keeping only the pairs with overlap > 30%.
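For intuition, a minimal sketch of this filtering step, assuming each overlap file in data/overlaps lists one pair per line as "fragment_i fragment_j overlap" (the exact file format in the repository may differ):

# Hypothetical sketch: keep only pairs with overlap above a threshold.
# Assumes each line of the overlap file reads "fragment_i fragment_j overlap".
def load_valid_pairs(overlap_file, threshold=0.3):
    pairs = []
    with open(overlap_file) as f:
        for line in f:
            frag_i, frag_j, overlap = line.split()
            if float(overlap) > threshold:
                pairs.append((frag_i, frag_j))
    return pairs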


KITTI

Download the testing data from here under Download odometry data set (velodyne laser data, 80 GB). Three scenes are used for testing:

08
09
10

Download the test.pkl from GeoTransformer [3] here and put it in the same directory where the scenes are located.
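As a side note, KITTI velodyne scans are raw binary files of float32 quadruples (x, y, z, reflectance), so a minimal loader looks like the sketch below; the internal structure of test.pkl is an assumption here:

import pickle
import numpy as np

# Load one KITTI velodyne scan as an (N, 3) array of xyz coordinates.
def load_kitti_scan(bin_path):
    scan = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    return scan[:, :3]  # drop the reflectance channel

# test.pkl from GeoTransformer lists the test pairs (exact schema assumed).
with open("test.pkl", "rb") as f:
    test_pairs = pickle.load(f)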


ETH

Download the testing data from here. Four scenes are used for testing:

gazeebo_summer
gazeebo_winter
wood_autumn
wood_summer

We use the overlaps from Perfect Match [2] (found in data/overlaps) to filter the data, keeping only the pairs with overlap > 30%. The overlaps are obtained from the overlapMatrix.csv file in each scene.
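A short sketch of how such an overlap matrix can be turned into a list of valid pairs, assuming overlapMatrix.csv holds a plain square matrix of pairwise overlap ratios:

import numpy as np

# Read a per-scene overlap matrix and keep pairs with overlap > 30%.
overlap = np.loadtxt("overlapMatrix.csv", delimiter=",")
idx_i, idx_j = np.nonzero(overlap > 0.3)
pairs = [(i, j) for i, j in zip(idx_i, idx_j) if i < j]  # each pair once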


FPv1 (called FAUST-partial in the paper)

Download the FAUST scans from here. There are 100 scans in the training dataset, named tr_scan_xxx.ply, that are used for the registration benchmark. To use the same benchmark as in the paper, download the folder FPv1 from here. To create your own benchmark, we provide a toolbox at github.com/DavidBoja/FPv1.
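The scans are regular PLY files; for example, with Open3D a single scan can be loaded as follows (the file name below is just the first scan in the naming scheme above):

import open3d as o3d

# Read one FAUST training scan; read_point_cloud keeps only the vertices.
scan = o3d.io.read_point_cloud("tr_scan_000.ply")
print(scan)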


Running baseline

We provide a Dockerfile to facilitate running the code. Run in terminal:

cd docker
sh docker_build.sh
sh docker_run.sh CODE_PATH DATA_PATH

replacing CODE_PATH and DATA_PATH with your own paths. These are mounted as volumes in the container: CODE_PATH is the path to your clone of this repository, while DATA_PATH is the location of all the data from the Data section above.

You can attach to the container using

docker exec -it ggs-container /bin/bash

Next, set the DATASET-PATH for each dataset in config.yaml.

Once inside the container, you can run:

python register.py -D xxx

where xxx is one of 3DMatch, KITTI, ETH or FP (indicating FPv1). The script saves the registration results in results/timestamp, where timestamp is set to the time of execution.
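For intuition, the baseline behind register.py follows the idea described in the paper: voxelize both point clouds, search a discretized grid of rotations, and score each candidate rotation by FFT cross-correlation of the voxel grids, whose peak also locates the best translation. The sketch below is a simplified illustration of that idea, not the repository's implementation; the voxel size, grid dimension, and 45° angular step are placeholder values:

import numpy as np
from scipy.spatial.transform import Rotation

def voxelize(points, voxel_size=0.06, dim=64):
    # Binary occupancy grid. A real implementation would fix a common origin
    # for all rotations and pad the grids to avoid FFT wrap-around.
    idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
    idx = idx[(idx >= 0).all(axis=1) & (idx < dim).all(axis=1)]
    grid = np.zeros((dim, dim, dim), dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

def score(src_pts, tgt_grid, R):
    # Cross-correlate the rotated, voxelized source with the target grid via
    # FFT; the peak value scores the rotation, its position the translation.
    src_grid = voxelize(src_pts @ R.T)
    corr = np.fft.ifftn(np.fft.fftn(tgt_grid) * np.conj(np.fft.fftn(src_grid))).real
    return corr.max(), np.unravel_index(corr.argmax(), corr.shape)

# Toy data standing in for a real source/target pair.
rng = np.random.default_rng(0)
src_pts, tgt_pts = rng.random((5000, 3)), rng.random((5000, 3))
tgt_grid = voxelize(tgt_pts)

best_score, best_pose = -np.inf, None
for euler in [(a, b, c) for a in range(0, 360, 45)
              for b in range(0, 360, 45) for c in range(0, 360, 45)]:
    R = Rotation.from_euler("zyx", euler, degrees=True).as_matrix()
    s, shift = score(src_pts, tgt_grid, R)
    if s > best_score:
        best_score, best_pose = s, (R, shift)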


Running refinement

To refine the results from the baseline registration, we provide a script that runs one of three ICP variants:

  • p2point icp
  • p2plane icp
  • generalized icp

To choose between the three algorithms and set their parameters, adjust the REFINE option in config.yaml. Then run:

python refine.py -R results/timestamp

where results/timestamp should point to the baseline results you want to refine.
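For reference, the three variants correspond to standard ICP formulations; a minimal sketch using Open3D (whether the repository uses Open3D internally is an assumption, and the correspondence threshold is a placeholder):

import open3d as o3d

src = o3d.io.read_point_cloud("pc_source.ply")
tgt = o3d.io.read_point_cloud("pc_target.ply")
tgt.estimate_normals()  # point-to-plane ICP needs target normals
threshold = 0.05        # max correspondence distance (placeholder)

reg = o3d.pipelines.registration
p2point = reg.registration_icp(
    src, tgt, threshold,
    estimation_method=reg.TransformationEstimationPointToPoint())
p2plane = reg.registration_icp(
    src, tgt, threshold,
    estimation_method=reg.TransformationEstimationPointToPlane())
# Depending on the Open3D version, generalized ICP may estimate the needed
# per-point covariances internally.
gicp = reg.registration_generalized_icp(src, tgt, threshold)
print(p2point.transformation)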


Evaluate

Similarly to the refinement above, you can evaluate the registration by running:

python evaluate.py -R results/timestamp

where timestamp should be changed to indicate the results you want to evaluate.
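For intuition, pairwise registration is typically judged by the relative rotation error (RRE) and relative translation error (RTE) between the estimated and ground-truth transforms. A generic computation is sketched below; this is an illustration, not necessarily the exact metrics evaluate.py reports:

import numpy as np

def registration_errors(T_est, T_gt):
    # RRE in degrees, RTE in the units of the point clouds.
    R_est, t_est = T_est[:3, :3], T_est[:3, 3]
    R_gt, t_gt = T_gt[:3, :3], T_gt[:3, 3]
    cos_angle = np.clip((np.trace(R_est.T @ R_gt) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)), np.linalg.norm(t_est - t_gt)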


Running demo

We additionally provide a demo script demo.py for registering two arbitrary point clouds. First, adjust the parameters in config.yaml under the DEMO category. Next, you can run

python demo.py --pc_target_pth pc_target.ply --pc_source_pth pc_source.ply

where you need to specify the target and source point cloud paths.
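Assuming the demo produces a 4x4 homogeneous transformation that maps the source onto the target (the exact output format may differ), applying it with Open3D looks like this; the T_est.npy path is hypothetical:

import numpy as np
import open3d as o3d

src = o3d.io.read_point_cloud("pc_source.ply")
T = np.load("T_est.npy")  # hypothetical path to the estimated 4x4 transform
src.transform(T)          # align the source to the target in place
o3d.io.write_point_cloud("pc_source_aligned.ply", src)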


Citation

If you use our work, please reference our paper:

@inproceedings{Bojanić-BMVC22-workshop,
   title = {Challenging the Universal Representation of Deep Models for 3D Point Cloud Registration},
   author = {Bojani\'{c}, David and Bartol, Kristijan and Forest, Josep and Gumhold, Stefan and Petkovi\'{c}, Tomislav and Pribani\'{c}, Tomislav},
   booktitle = {BMVC 2022 Workshop Universal Representations for Computer Vision},
   year = {2022},
   url = {https://openreview.net/forum?id=tJ5jWBIAbT}
}

ToDo

  • Update documentation
  • Update data documentation
  • Add results section to documentation
  • Demo script

References

[1] PREDATOR: Huang et al., Predator: Registration of 3D Point Clouds with Low Overlap, CVPR 2021
[2] Perfect Match: Gojcic et al., The Perfect Match: 3D Point Cloud Matching with Smoothed Densities, CVPR 2019
[3] GeoTransformer: Qin et al., Geometric Transformer for Fast and Robust Point Cloud Registration, CVPR 2022