This is an official PyTorch implementation of *Open Vocabulary Object Detection with Pseudo Bounding-Box Labels*.
The code has been tested with the following environment:
UBUNTU="18.04"
CUDA="11.0"
CUDNN="8"
conda create --name ovd
conda activate ovd
cd $INSTALL_DIR
bash ovd_install.sh
git clone https://github.com/NVIDIA/apex.git
cd apex
python setup.py install --cuda_ext --cpp_ext
cd ../
# AT_CHECK was removed in newer PyTorch releases; replace it with TORCH_CHECK before building
cuda_dir="maskrcnn_benchmark/csrc/cuda"
perl -i -pe 's/AT_CHECK/TORCH_CHECK/g' $cuda_dir/deform_pool_cuda.cu $cuda_dir/deform_conv_cuda.cu
python setup.py build develop
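To verify the build, a quick import check like the following can be run from the repo root. This is a minimal sketch; it assumes the compiled extension is exposed as `maskrcnn_benchmark._C`, as in upstream maskrcnn-benchmark.

```python
# Minimal sanity check after building; assumes the compiled CUDA/C++ extension
# is exposed as maskrcnn_benchmark._C (as in upstream maskrcnn-benchmark).
import torch
from maskrcnn_benchmark import _C  # raises ImportError if the custom ops did not build

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```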
- Follow the steps in datasets/README.md for data preparation
- Download our pre-trained and fine-tuned models
- Run evaluation with the fine-tuned model
python -m torch.distributed.launch --nproc_per_node=8 tools/test_net.py \
--config-file configs/eval.yaml \
MODEL.WEIGHT $PATH_TO_FINAL_MODEL \
OUTPUT_DIR $OUTPUT_DIR
- For LVIS, use the official LVIS API to get the evaluation numbers
python evaluate_lvis_official.py --coco_anno_path datasets/lvis_v0.5_val_all_clipemb.json \
--result_dir $OUTPUT_DIR/inference/lvis_v0.5_val_all_cocostyle/
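For reference, the official LVIS API (`pip install lvis`) evaluates COCO-style detection results roughly as follows. This is a minimal sketch; the result file name `bbox.json` and the output directory are assumptions, not necessarily what the inference step writes.

```python
# Minimal sketch of LVIS official evaluation; paths and the result file name are assumptions.
from lvis import LVISEval

ann_path = "datasets/lvis_v0.5_val_all_clipemb.json"                    # ground-truth annotations
dt_path = "output/inference/lvis_v0.5_val_all_cocostyle/bbox.json"      # COCO-style detections

lvis_eval = LVISEval(ann_path, dt_path, iou_type="bbox")
lvis_eval.run()
lvis_eval.print_results()  # reports AP, APr / APc / APf, etc.
```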
- Pre-train the model
python -m torch.distributed.launch --nproc_per_node=16 tools/train_net.py --distributed \
--config-file configs/pretrain_1m.yaml \
OUTPUT_DIR $OUTPUT_DIR
- Fine-tune from the pre-trained model
python -m torch.distributed.launch --nproc_per_node=8 tools/train_net.py --distributed \
--config-file configs/finetune.yaml \
MODEL.WEIGHT $PATH_TO_PRETRAIN_MODEL \
OUTPUT_DIR $OUTPUT_DIR
To generate pseudo bounding-box labels, set up a separate environment:
conda create --name gen_plabels
conda activate gen_plabels
bash gen_plabel_install.sh
- Refer to examples/README.md for data preparation
- Generate pseudo labels based on ALBEF
python pseudo_bbox_generation.py
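As a rough illustration of the idea (not the exact algorithm in `pseudo_bbox_generation.py`), a per-word activation map from the vision-language model can be turned into a pseudo box by thresholding the map and taking the extent of the activated region. The threshold ratio below is an assumed hyper-parameter.

```python
# Illustrative sketch only: convert one object word's activation map
# (e.g. from ALBEF's image-text Grad-CAM) into a single pseudo bounding box.
# The 0.5 threshold ratio is an assumption, not the repository's setting.
import numpy as np

def activation_map_to_box(act_map: np.ndarray, thresh_ratio: float = 0.5):
    """act_map: HxW non-negative activation map for one object word."""
    mask = act_map >= thresh_ratio * act_map.max()
    ys, xs = np.where(mask)
    if len(xs) == 0:
        return None
    x1, y1, x2, y2 = xs.min(), ys.min(), xs.max(), ys.max()
    return [int(x1), int(y1), int(x2 - x1 + 1), int(y2 - y1 + 1)]  # COCO [x, y, w, h]

# Toy example: a 3x3 activated region inside a 7x7 map
toy = np.zeros((7, 7))
toy[2:5, 3:6] = 1.0
print(activation_map_to_box(toy))  # -> [3, 2, 3, 3]
```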
- Organize the dataset in COCO format
python prepare_coco_dataset.py
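For reference, a COCO-style detection annotation file has the skeleton below. Field values are placeholders and `pseudo_labels_coco_style.json` is an illustrative file name, not necessarily what the script produces.

```python
# Skeleton of a COCO-style annotation file; all values here are placeholders.
import json

coco_style = {
    "images": [
        {"id": 1, "file_name": "000000000001.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        # bbox is [x, y, width, height]; area and iscrowd are expected by most COCO loaders
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [100, 120, 80, 60], "area": 4800, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "dog"},
    ],
}

with open("pseudo_labels_coco_style.json", "w") as f:
    json.dump(coco_style, f)
```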
- Extract text embeddings using CLIP
# pip install git+https://github.com/openai/CLIP.git
python prepare_clip_embedding_for_open_vocab.py
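The text embeddings themselves can be computed with the CLIP package along these lines. This is a minimal sketch; the prompt template, model variant, and category list are illustrative, not necessarily what `prepare_clip_embedding_for_open_vocab.py` uses.

```python
# Sketch of computing CLIP text embeddings for category names.
# Assumes: pip install git+https://github.com/openai/CLIP.git
# The prompt template, "ViT-B/32" variant, and class names below are illustrative.
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

categories = ["dog", "umbrella", "traffic light"]        # placeholder class names
prompts = [f"a photo of a {c}" for c in categories]

with torch.no_grad():
    tokens = clip.tokenize(prompts).to(device)
    text_emb = model.encode_text(tokens)                 # (num_classes, 512) for ViT-B/32
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)  # L2-normalize

print(text_emb.shape)
```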
- Check your final pseudo labels by visualization
python visualize_coco_style_dataset.py
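A quick way to eyeball the pseudo boxes is with `pycocotools` and `matplotlib`, as in the minimal sketch below; the annotation path and image root are placeholders.

```python
# Illustrative visualization of a COCO-style annotation file.
# Assumes: pip install pycocotools matplotlib pillow
# The annotation file name and image directory are placeholders.
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from PIL import Image
from pycocotools.coco import COCO

coco = COCO("pseudo_labels_coco_style.json")           # placeholder annotation file
img_info = coco.loadImgs(coco.getImgIds()[0])[0]
image = Image.open(f"images/{img_info['file_name']}")  # placeholder image root

fig, ax = plt.subplots(1)
ax.imshow(image)
for ann in coco.loadAnns(coco.getAnnIds(imgIds=img_info["id"])):
    x, y, w, h = ann["bbox"]
    ax.add_patch(patches.Rectangle((x, y), w, h, fill=False, edgecolor="red", linewidth=2))
    ax.text(x, y, coco.loadCats(ann["category_id"])[0]["name"], color="red")
plt.show()
```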
- If you find this code helpful, please cite our paper:
@article{gao2021towards,
  title={Open Vocabulary Object Detection with Pseudo Bounding-Box Labels},
  author={Gao, Mingfei and Xing, Chen and Niebles, Juan Carlos and Li, Junnan and Xu, Ran and Liu, Wenhao and Xiong, Caiming},
  journal={arXiv preprint arXiv:2111.09452},
  year={2021}
}
- Please send an email to mingfei.gao@salesforce.com or cxing@salesforce.com if you have questions.
- Files obtained from maskrcnn_benchmark are covered under the MIT license.