
Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition

"Scaling up prompt learning on ImageNet-21K achieves SOTA on 21 downstream datasets."

Shuhuai Ren, Aston Zhang, Yi Zhu, Shuai Zhang, Shuai Zheng, Mu Li, Alex Smola, Xu Sun

[Paper] [Colab demo]


🚀 News

  • (Jul 11, 2023)
    • Inference demo for object detection in Jupyter.
  • (May 31, 2023)
    • Inference demo for image classification in Google Colab.
  • (Mar 22, 2023)
    • Code for prompt pre-training (POMP) on ImageNet-21K, plus cross-dataset and cross-task evaluation.
    • Checkpoints of pre-trained POMP prompts, segmentation backbones, and detection backbones.

Highlights

[Main figure]

Main Contributions

  1. We introduce POMP, a prompt pre-training method that, for the first time, enables prompt learning on large-scale datasets such as ImageNet-21K with over twenty thousand classes.
  2. POMP is memory- and computation-efficient. Compared with previous methods such as CoOp, it achieves comparable accuracy on ImageNet-1K with only 19% of the GPU memory and 50% of the training time (a minimal sketch of the class-sampling idea behind this follows the list).
  3. POMP achieves new state-of-the-art results on a wide range of open-vocabulary visual recognition datasets and tasks.
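
To make the efficiency claim concrete, below is a minimal, self-contained PyTorch sketch of prompt learning with per-step class sampling, the kind of trick that keeps memory bounded when the label space has 20K+ classes. It is not the repository's implementation: the stand-in frozen encoders, the constants (NUM_CLASSES, EMBED_DIM, CTX_LEN, K_SAMPLED), and the uniform negative sampling are illustrative assumptions; please refer to the paper and code for POMP's actual sampling strategy.

# Hedged sketch: CoOp-style prompt learning with per-step class sampling.
# The "encoders" here are random stand-ins for CLIP's frozen towers, and the
# constants are illustrative, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, EMBED_DIM, CTX_LEN, K_SAMPLED = 21_000, 512, 16, 1_000

class PromptLearner(nn.Module):
    """Learnable context vectors shared by all classes (CoOp-style)."""
    def __init__(self, num_classes, ctx_len, dim):
        super().__init__()
        self.ctx = nn.Parameter(torch.randn(ctx_len, dim) * 0.02)  # shared, learnable prompt
        self.cls_emb = nn.Embedding(num_classes, dim)              # stand-in for frozen class-name tokens
        self.cls_emb.weight.requires_grad_(False)

    def forward(self, class_ids):
        # Prepend the shared context to each sampled class's name embedding.
        ctx = self.ctx.unsqueeze(0).expand(class_ids.numel(), -1, -1)
        names = self.cls_emb(class_ids).unsqueeze(1)
        return torch.cat([ctx, names], dim=1)                      # (K, ctx_len + 1, dim)

# Frozen stand-in for CLIP's text tower: maps a prompt sequence to one text feature.
text_encoder = nn.Sequential(nn.Flatten(), nn.Linear((CTX_LEN + 1) * EMBED_DIM, EMBED_DIM))
for p in text_encoder.parameters():
    p.requires_grad_(False)

prompts = PromptLearner(NUM_CLASSES, CTX_LEN, EMBED_DIM)
optim = torch.optim.SGD([prompts.ctx], lr=1e-3)                    # only the prompt is trained

def training_step(image_feats, labels):
    """image_feats: (B, EMBED_DIM) from a frozen image encoder; labels: (B,) in [0, NUM_CLASSES)."""
    # Sample a small class subset (gold classes plus random negatives) so the
    # softmax never spans all 21K classes at once -- this bounds memory/compute.
    negatives = torch.randint(0, NUM_CLASSES, (K_SAMPLED,))
    class_ids = torch.unique(torch.cat([labels, negatives]))       # sorted, unique subset
    local_labels = torch.searchsorted(class_ids, labels)           # remap gold labels into the subset

    text_feats = F.normalize(text_encoder(prompts(class_ids)), dim=-1)
    image_feats = F.normalize(image_feats, dim=-1)
    logits = 100.0 * image_feats @ text_feats.t()                  # temperature-scaled similarities
    loss = F.cross_entropy(logits, local_labels)

    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()

# Toy usage with random features.
print(training_step(torch.randn(8, EMBED_DIM), torch.randint(0, NUM_CLASSES, (8,))))

Because each step only materializes text features for the sampled subset rather than for all 21K classes, both the softmax and the backward pass stay small, which is where the memory and training-time savings come from.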

Installation

For installation and other package requirements, please follow the instructions detailed in INSTALL.md.

Data preparation

Please follow the instructions at DATASETS.md to prepare all datasets.

Pre-trained Models

Please follow the instructions at MODELS.md to prepare all pre-trained models.
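
For a rough idea of how the released prompt vectors could be plugged into CLIP at inference time, here is a hedged, CoOp-style sketch that builds a zero-shot classifier. The checkpoint name pomp_prompt.pth and its "ctx" key are hypothetical placeholders, and the ViT-B/16 backbone is an assumption; MODELS.md and the official Colab demo are the authoritative references for the real checkpoint layout and usage.

# Hedged sketch: zero-shot classification with pre-trained prompt vectors.
# "pomp_prompt.pth" and its "ctx" key are hypothetical placeholders; the
# ViT-B/16 backbone is an assumption -- see MODELS.md / the Colab demo.
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)
model.eval()

ctx = torch.load("pomp_prompt.pth", map_location=device)["ctx"]    # (n_ctx, dim), hypothetical format

@torch.no_grad()
def build_classifier(classnames):
    """Return an (n_classes, embed_dim) matrix of normalized text features with learned context."""
    n_ctx = ctx.shape[0]
    # Tokenize "X X ... X classname" so the sequence has slots for the learned context.
    prompts = [" ".join(["X"] * n_ctx) + " " + name for name in classnames]
    tokens = clip.tokenize(prompts).to(device)                     # (n_cls, 77)
    embed = model.token_embedding(tokens).type(model.dtype)        # (n_cls, 77, dim)
    # Overwrite the placeholder "X" slots (positions 1..n_ctx, after SOT) with the learned vectors.
    embed[:, 1 : 1 + n_ctx, :] = ctx.to(model.dtype)

    # Run CLIP's text transformer manually, mirroring CLIP.encode_text.
    x = embed + model.positional_embedding.type(model.dtype)
    x = model.transformer(x.permute(1, 0, 2)).permute(1, 0, 2)
    x = model.ln_final(x).type(model.dtype)
    x = x[torch.arange(x.shape[0]), tokens.argmax(dim=-1)] @ model.text_projection
    return x / x.norm(dim=-1, keepdim=True)

@torch.no_grad()
def classify(image, classnames):
    """image: a PIL image; returns a {classname: probability} dict."""
    text_feats = build_classifier(classnames)
    image_feats = model.encode_image(preprocess(image).unsqueeze(0).to(device))
    image_feats = image_feats / image_feats.norm(dim=-1, keepdim=True)
    probs = (model.logit_scale.exp() * image_feats @ text_feats.t()).softmax(dim=-1)
    return {c: p.item() for c, p in zip(classnames, probs[0])}

The sketch mirrors CLIP's own encode_text pass, except that the embeddings of the placeholder "X" tokens are overwritten with the learned context vectors before the text transformer runs.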

Training and Evaluation

Please refer to RUN.md for detailed instructions on training, evaluation, and reproducing the results.


Contact

If you have any questions, please feel free to create an issue on this repository.

Citation

If you find this code useful for your research, please consider citing:

@article{ren2023pomp,
  title={Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition},
  author={Ren, Shuhuai and Zhang, Aston and Zhu, Yi and Zhang, Shuai and Zheng, Shuai and Li, Mu and Smola, Alex and Sun, Xu},
  journal={arXiv preprint arXiv:2304.04704},
  year={2023}
}

Acknowledgements

Our code is based on the CoOp, MaPLe, Dassl, Detic, and ZSSeg repositories. We thank the authors for releasing their code.
