Official PyTorch implementation of the paper: "MonoGround: Detecting Monocular 3D Objects from the Ground".
For installation instructions, please see INSTALL.md.
To verify the results of the trained model, please run:

```bash
python tools/plain_train_net.py --batch_size 8 --config runs/monoground.yaml --ckpt /path/to/model --eval --output ./tmp
```
To train the model yourself, please run:

```bash
python tools/plain_train_net.py --batch_size 8 --config runs/monoground.yaml --output ./tmp
```
We provide the model trained on KITTI along with the corresponding training logs.

Model | Log | AP Easy | AP Moderate | AP Hard |
---|---|---|---|---|
Google/Baidu | Google/Baidu | 25.24 | 18.69 | 15.58 |
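
If you want to sanity-check a downloaded checkpoint before running evaluation, a minimal sketch is below. It assumes the file is a standard PyTorch checkpoint; the `"model"` wrapper key is an assumption, not something guaranteed by this repo, so adjust it to the actual file layout.

```python
import torch

# Load the downloaded checkpoint on CPU; the file path is a placeholder.
ckpt = torch.load("/path/to/model", map_location="cpu")

# Training scripts often wrap the weights in a dict alongside optimizer
# state; the "model" key here is an assumption for this repo's checkpoints.
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt

# Print a few parameter names and shapes to confirm the file loaded correctly.
for name, value in list(state_dict.items())[:5]:
    if hasattr(value, "shape"):
        print(name, tuple(value.shape))
```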
We also tested our method on the NuScenes dataset. Please see NuScenes.md for details.
If you find our work useful in your research, please consider citing:
```bibtex
@inproceedings{qin2022monoground,
  title={MonoGround: Detecting Monocular 3D Objects From the Ground},
  author={Qin, Zequn and Li, Xi},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={3793--3802},
  year={2022}
}
```
The code is heavily borrowed from MonoFlex and SMOKE; thanks for their contributions.