Code for "BiCo-Net: Regress Globally, Match Locally for Robust 6D Pose Estimation" [Arxiv][Paper]
This code has been tested with:
- Open3D==0.9.0.0
- Python==3.7.12
- OpenCV==4.1
- PyTorch==1.6.0
- CUDA==10.1
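A quick way to confirm your environment matches the tested versions above (a minimal sketch; nearby versions may also work, but these are the ones reported as tested):

```python
# Minimal environment check for the dependency versions listed above.
import open3d
import cv2
import torch

print("Open3D :", open3d.__version__)
print("OpenCV :", cv2.__version__)
print("PyTorch:", torch.__version__)
print("CUDA available      :", torch.cuda.is_available())
print("CUDA (PyTorch build):", torch.version.cuda)
```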
Download the following datasets and resources:
- YCB-Video dataset [link]
- Preprocessed LineMOD dataset provided by DenseFusion [link]
- Pretrained models of BiCo-Net [link]
- Predicted masks of PVN3D on the YCB-Video dataset [link]
- Predicted masks of HybridPose on the LineMOD Occlusion dataset [link]
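The pretrained models are ordinary PyTorch checkpoints, so they can be inspected before running evaluation. A minimal sketch, where the file name pose_model.pth is a hypothetical placeholder for whichever checkpoint you downloaded:

```python
import torch

# "pose_model.pth" is a placeholder name for a downloaded BiCo-Net checkpoint.
ckpt = torch.load("pose_model.pth", map_location="cpu")

# Checkpoints are usually either a raw state_dict or a dict that wraps one.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
for name, tensor in list(state_dict.items())[:10]:
    print(name, tuple(tensor.shape))
```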
Commands for training BiCo-Net:
python train.py --dataset ycbv --dataset_root path_to_ycbv_dataset
python train.py --dataset linemod --dataset_root path_to_lm_dataset
For the LineMOD Occlusion (lmo) dataset, download the VOC2012 dataset and run:
python train.py --dataset lmo --dataset_root path_to_lm_dataset --bg_img path_to_voc2012_dataset
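The --bg_img argument points at VOC2012, whose images are commonly used as random backgrounds when compositing synthetic LineMOD renders during training (as in DenseFusion-style pipelines). A minimal sketch of that idea, with illustrative function and path names rather than the actual ones in train.py:

```python
import glob
import random
import cv2
import numpy as np

def paste_on_random_background(render_rgb, mask, voc_root):
    """Composite a synthetic render onto a randomly chosen VOC2012 image.

    render_rgb: HxWx3 uint8 render of the object (same channel order as bg)
    mask:       HxW boolean array, True where the object is visible
    voc_root:   path to the VOC2012 dataset (contains JPEGImages/)
    """
    h, w = mask.shape
    bg_paths = glob.glob(f"{voc_root}/JPEGImages/*.jpg")
    bg = cv2.imread(random.choice(bg_paths))
    bg = cv2.resize(bg, (w, h))
    # Keep object pixels from the render, fill the rest with the background.
    return np.where(mask[..., None], render_rgb, bg)
```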
To reproduce the evaluation results of BiCo-Net reported in the paper, run:
python eval_ycbv.py --dataset_root path_to_ycbv_dataset --pred_mask path_to_pvn3d_pred_mask
python eval_lm.py --dataset_root path_to_lm_dataset
python eval_lmo.py --dataset_root path_to_lm_dataset --pred_mask path_to_hybridpose_pred_mask
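The evaluation scripts report the standard ADD(-S) metrics used on these benchmarks. For reference, a minimal NumPy sketch of the two distances for a single pose hypothesis (not the actual implementation in eval_ycbv.py / eval_lm.py / eval_lmo.py):

```python
import numpy as np

def add_metric(model_pts, R_gt, t_gt, R_pred, t_pred):
    """ADD: mean distance between corresponding transformed model points."""
    pts_gt = model_pts @ R_gt.T + t_gt
    pts_pred = model_pts @ R_pred.T + t_pred
    return np.mean(np.linalg.norm(pts_gt - pts_pred, axis=1))

def adds_metric(model_pts, R_gt, t_gt, R_pred, t_pred):
    """ADD-S: mean closest-point distance, used for symmetric objects."""
    pts_gt = model_pts @ R_gt.T + t_gt
    pts_pred = model_pts @ R_pred.T + t_pred
    # Brute-force nearest neighbour; adequate for the few thousand model points
    # typically used in evaluation.
    d = np.linalg.norm(pts_gt[:, None, :] - pts_pred[None, :, :], axis=2)
    return np.mean(d.min(axis=1))
```

On LineMOD / LineMOD Occlusion a pose is typically counted correct when this distance falls below 10% of the object diameter, while YCB-Video results are usually reported as the area under the accuracy-threshold curve (AUC).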
Our implementation leverages the code from DenseFusion.
Our code is released under the MIT License (see the LICENSE file for details).
If you find our work useful in your research, please consider citing:
@inproceedings{Xu2022BiCoNetRG,
  title={BiCo-Net: Regress Globally, Match Locally for Robust 6D Pose Estimation},
  author={Zelin Xu and Yichen Zhang and Ke Chen and Kui Jia},
  booktitle={IJCAI},
  year={2022}
}