Robust Collaborative 3D Object Detection in Presence of Pose Errors
Paper | Video | Readme in Feishu
HEAL is accepted at ICLR 2024. We implement a unified and integrated multi-agent collaborative perception framework for LiDAR-based, camera-based, and heterogeneous settings! See HEAL GitHub.
Camera-based collaborative perception support!
We release the multi-agent camera-based detection code, built on Lift-Splat-Shoot. It supports the OPV2V, V2XSet, and DAIR-V2X-C datasets.
The LiDAR feature-map fusion methods seamlessly adapt to camera BEV features, including CoAlign's multiscale fusion, V2XViT, V2VNet, Self-Att, FCooper, and DiscoNet (w/o KD). Please feel free to browse our repo. Example yamls are listed in this folder: CoAlign/opencood/hypes_yaml/opv2v/camera_no_noise
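To give a feel for how an attention-style fusion method consumes multi-agent BEV features (whether they come from LiDAR or from a camera LSS backbone), here is a minimal per-location scaled-dot-product self-attention fusion in NumPy. This is a rough sketch in the spirit of Self-Att-style fusion; the shapes and function names are illustrative, not CoAlign's actual API.

```python
import numpy as np

def self_att_fuse(feats):
    """Fuse per-agent BEV features with per-location self-attention.

    feats: array of shape (N, C, H, W) -- N agents' BEV feature maps,
    already warped into the ego frame. Returns the ego agent's fused
    (C, H, W) map. Illustrative sketch, not CoAlign's implementation.
    """
    n, c, h, w = feats.shape
    x = feats.reshape(n, c, h * w)                   # (N, C, HW)
    x = x.transpose(2, 0, 1)                         # (HW, N, C): per-location tokens
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(c)   # (HW, N, N) agent-to-agent scores
    scores -= scores.max(axis=-1, keepdims=True)     # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    fused = attn @ x                                 # (HW, N, C) weighted mix of agents
    ego = fused[:, 0, :]                             # keep the ego (agent 0) row
    return ego.T.reshape(c, h, w)

fused = self_att_fuse(np.random.rand(3, 8, 4, 5))
print(fused.shape)  # (8, 4, 5)
```

Because each output location is a convex combination of the agents' features at that location, the fused map stays within the range of its inputs.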
- Modality Support
  - LiDAR
  - Camera 🌟
- Dataset Support
  - OPV2V
  - V2X-Sim 2.0 🌟
  - DAIR-V2X 🌟
  - V2XSet
- SOTA collaborative perception method support
- Visualization support
  - BEV visualization
  - 3D visualization 🌟
- 1-round/2-round communication support
  - transform point cloud first (2-round communication)
  - warp feature map (1-round communication, default in this repo) 🌟
- Pose error simulation support
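The two communication schemes and the pose-error simulation can be sketched in a few lines: in the 2-round scheme the sender's point cloud is transformed into the ego frame using the relative pose, while pose errors are typically simulated by perturbing that pose with Gaussian noise. A minimal 2D (x, y, yaw) sketch, assuming a planar pose model; this is illustrative, not CoAlign's actual code:

```python
import numpy as np

def pose_to_tf(x, y, yaw):
    """2D homogeneous transform for a pose (x, y, yaw in radians)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def transform_points(points, tf):
    """Apply a 3x3 homogeneous transform to (N, 2) points."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # (N, 3)
    return (homo @ tf.T)[:, :2]

def add_pose_noise(x, y, yaw, pos_std=0.2, rot_std=0.02, rng=None):
    """Simulate pose error: Gaussian noise on position and heading."""
    if rng is None:
        rng = np.random.default_rng()
    return (x + rng.normal(0, pos_std),
            y + rng.normal(0, pos_std),
            yaw + rng.normal(0, rot_std))

# Sender at (10, 0) facing +y (yaw = pi/2) relative to the ego frame:
tf = pose_to_tf(10.0, 0.0, np.pi / 2)
pts_ego = transform_points(np.array([[1.0, 0.0]]), tf)
print(pts_ego)  # point (1, 0) in the sender frame -> (10, 1) in the ego frame
```

In the 1-round scheme the same relative transform is instead applied as a spatial warp to the sender's BEV feature map, so no second exchange of raw points is needed.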
Please visit the Feishu doc CoAlign Installation Guide for details!
Alternatively, you can refer to the OpenCOOD data introduction and OpenCOOD installation guide to prepare the data and install CoAlign. The installation is exactly the same as OpenCOOD, except for a few additional packages required by CoAlign.
Create a `dataset` folder under CoAlign and put your OPV2V, V2X-Sim, V2XSet, and DAIR-V2X data in it. You only need to add the datasets you want to use.
```
CoAlign/dataset
.
├── my_dair_v2x
│   ├── v2x_c
│   ├── v2x_i
│   └── v2x_v
├── OPV2V
│   ├── additional
│   ├── test
│   ├── train
│   └── validate
├── V2XSET
│   ├── test
│   ├── train
│   └── validate
├── v2xsim2-complete
│   ├── lidarseg
│   ├── maps
│   ├── sweeps
│   └── v1.0-mini
└── v2xsim2_info
    ├── v2xsim_infos_test.pkl
    ├── v2xsim_infos_train.pkl
    └── v2xsim_infos_val.pkl
```
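The layout above can be set up with plain symlinks, so the raw data can live anywhere on disk. The source paths below are placeholders for wherever you downloaded the datasets:

```shell
# Run from the CoAlign repo root. Only link the datasets you will use.
mkdir -p dataset
ln -sfn /data/OPV2V dataset/OPV2V
ln -sfn /data/v2xsim2-complete dataset/v2xsim2-complete
```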
Note that:
- The `*.pkl` files in `v2xsim2_info` can be found on Google Drive.
- Use our complemented annotations for DAIR-V2X in `my_dair_v2x`. Originally, DAIR-V2X only annotates 3D boxes within the camera's field of view on the vehicle side. We supplement the missing 3D box annotations to enable 360-degree detection. With the fully complemented vehicle-side labels, we regenerate the cooperative labels for users, following the original cooperative label format.
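The `v2xsim2_info` files are ordinary pickled Python objects. A minimal load sketch follows; the snippet writes a dummy info list first so it is self-contained, and the dict keys are purely illustrative (the real schema is specific to this repo's data pipeline):

```python
import pickle

# Dummy info list, just so this snippet runs standalone;
# the real files come from the Google Drive link above.
dummy_infos = [{"token": "frame_0000"}]
with open("v2xsim_infos_demo.pkl", "wb") as f:
    pickle.dump(dummy_infos, f)

with open("v2xsim_infos_demo.pkl", "rb") as f:
    infos = pickle.load(f)
print(len(infos), infos[0]["token"])  # 1 frame_0000
```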
| Original Annotations | Complemented Annotations |
| --- | --- |
| Website | Download: Google Drive |
Download `coalign_precalc` and save it to `opencood/logs`.
Download the checkpoints and save them to `opencood/logs`.
@inproceedings{lu2023robust,
  title={Robust Collaborative {3D} Object Detection in Presence of Pose Errors},
author={Lu, Yifan and Li, Quanhao and Liu, Baoan and Dianati, Mehrdad and Feng, Chen and Chen, Siheng and Wang, Yanfeng},
booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
pages={4812--4818},
year={2023},
organization={IEEE}
}
This project would not be possible without the code of OpenCOOD, g2opy, and d3d!
Thanks again to @DerrickXuNu for the great code framework.