
DINO with TransNeXt backbone on COCO

Model Zoo

COCO object detection results using the DINO method:

| Backbone | Pretrained | Scales | Epochs | Box mAP | #Params | Download | Config | Log |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TransNeXt-Tiny | ImageNet-1K | 4-scale | 12 | 55.1 | 47.8M | model | config | log |
| TransNeXt-Tiny | ImageNet-1K | 5-scale | 12 | 55.7 | 48.1M | model | config | log |
| TransNeXt-Small | ImageNet-1K | 5-scale | 12 | 56.6 | 69.6M | model | config | log |
| TransNeXt-Base | ImageNet-1K | 5-scale | 12 | 57.1 | 110M | model | config | log |

Requirements

pip install -r requirements.txt

Data preparation

cd /path/to/current_folder
ln -s /your/path/to/coco-dataset ./data
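After creating the symlink, it can help to verify that the dataset follows the layout MMDetection expects. The sketch below is an assumption based on standard COCO/MMDetection conventions (`data/coco/annotations`, `train2017`, `val2017`); adjust the paths if your config points elsewhere:

```python
from pathlib import Path

# Expected COCO layout under ./data (assumed standard MMDetection
# convention; adjust if your config uses different paths).
EXPECTED = [
    "data/coco/annotations/instances_train2017.json",
    "data/coco/annotations/instances_val2017.json",
    "data/coco/train2017",
    "data/coco/val2017",
]

def missing_paths(root=".", expected=EXPECTED):
    """Return the expected dataset paths that do not exist under root."""
    return [p for p in expected if not (Path(root) / p).exists()]

if __name__ == "__main__":
    for p in missing_paths():
        print(f"missing: {p}")
```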

Evaluation

To evaluate DINO models with a TransNeXt backbone on COCO val, use the following command:

bash dist_test.sh <config-file> <checkpoint-path> <gpu-num>

For example, to evaluate TransNeXt-Tiny under the 4-scale setting on a single GPU:

bash dist_test.sh ./configs/dino-4scale_transnext_tiny-12e_coco.py /path/to/checkpoint_file 1

For example, to evaluate TransNeXt-Tiny under the 4-scale setting on 8 GPUs:

bash dist_test.sh ./configs/dino-4scale_transnext_tiny-12e_coco.py /path/to/checkpoint_file 8
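For orientation, wrappers like `dist_test.sh` in MMDetection-style repos typically assemble a `torchrun` invocation of a test entry point across the requested GPUs. The sketch below only prints the command it would build; the entry-point name (`test.py`) and port default are assumptions for illustration, so rely on the repo's actual `dist_test.sh`:

```shell
# Hypothetical sketch of what an MMDetection-style dist_test.sh does:
# build a torchrun command that launches evaluation across GPUs.
CONFIG=./configs/dino-4scale_transnext_tiny-12e_coco.py
CHECKPOINT=/path/to/checkpoint_file
GPUS=8
PORT=${PORT:-29500}

# Print the command instead of running it, for illustration.
CMD="torchrun --nproc_per_node=$GPUS --master_port=$PORT test.py $CONFIG $CHECKPOINT --launcher pytorch"
echo "$CMD"
```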

Training

To train DINO models with a TransNeXt backbone on the COCO dataset, first fill in the path to your downloaded pretrained checkpoint in ./configs/<config-file>. Specifically, set:

pretrained=<path-to-checkpoint>, 
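In MMDetection-style configs this field usually sits inside the backbone dict of the model definition. The fragment below is illustrative only: the backbone type name and checkpoint filename are assumptions, so check the actual ./configs/<config-file> in this repo for the exact structure:

```python
# Illustrative MMDetection-style config fragment (backbone type name and
# checkpoint path are placeholders; edit the repo's real config file).
model = dict(
    backbone=dict(
        type='transnext_tiny',  # hypothetical registry name
        pretrained='/path/to/transnext_tiny_imagenet1k.pth',
    ),
)
```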

After setting this up, you can train TransNeXt on the COCO dataset with the following command:

bash dist_train.sh <config-file> <gpu-num> 

For example, to train TransNeXt-Tiny under the 4-scale setting on 8 GPUs, with a total batch size of 16:

bash dist_train.sh ./configs/dino-4scale_transnext_tiny-12e_coco.py 8
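The total batch size of 16 follows from the number of GPUs times the per-GPU batch size. A per-GPU batch size of 2 is assumed here as the common MMDetection default; verify it against the dataloader settings in the config:

```python
# Effective batch size = (number of GPUs) x (samples per GPU).
# samples_per_gpu = 2 is an assumption (common MMDetection default);
# check the dataloader batch size in the config file.
gpus = 8
samples_per_gpu = 2
total_batch_size = gpus * samples_per_gpu
print(total_batch_size)  # 16
```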

Acknowledgement

The released scripts for object detection with TransNeXt are built on the MMDetection framework and the timm library.

License

This project is released under the Apache 2.0 license. Please see the LICENSE file for more information.

Citation

If you find our work helpful, please consider citing the following BibTeX entry. We would also greatly appreciate a star for this project.

@InProceedings{shi2023transnext,
  author    = {Dai Shi},
  title     = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {17773-17783}
}