Deep Object Reid

Deep Object Reid is a library for deep-learning image classification and object re-identification, written in PyTorch. It is a part of OpenVINO™ Training Extensions.

To reproduce the results reported in K. Prokofiev, V. Sovrasov, "Combining Metric Learning and Attention Heads for Accurate and Efficient Multilabel Image Classification", switch to the multilabel branch.

The project is based on Kaiyang Zhou's Torchreid project.

Its features:

  • multi-GPU training
  • end-to-end training and evaluation
  • incredibly easy preparation of reid and classification datasets
  • multi-dataset training
  • cross-dataset evaluation
  • standard protocol used by most research papers
  • highly extensible (easy to add models, datasets, training methods, etc.)
  • implementations of state-of-the-art and lightweight reid/classification models
  • access to pretrained reid/classification models
  • advanced training techniques such as mutual learning, RSC, SAM, AugMix and many others
  • visualization tools (tensorboard, ranks, activation map, etc.)
  • automated learning-rate search and early exit from training (no need to choose the number of epochs)

How-to instructions: https://github.com/openvinotoolkit/deep-object-reid/blob/ote/docs/user_guide.rst

Original tech report by Kaiyang Zhou and Tao Xiang: https://arxiv.org/abs/1910.10093.

You can find other research projects built on top of Torchreid here: https://github.com/KaiyangZhou/deep-person-reid/tree/master/projects.

Also, if you are planning an image classification project, please refer to the OpenVINO™ Training Extensions Custom Image Classification Templates to get a strong baseline for your project. The paper is coming soon.

What's new

  • [June 2021] Added new algorithms, regularization techniques and models for the image classification task
  • [May 2020] Added the person attribute recognition code used in Omni-Scale Feature Learning for Person Re-Identification (ICCV'19). See projects/attribute_recognition/.
  • [May 2020] 1.2.1: Added a simple API for feature extraction. See the documentation for instructions.
  • [Apr 2020] Code for reproducing the experiments of deep mutual learning in the OSNet paper (Supp. B) has been released at projects/DML.
  • [Apr 2020] Upgraded to 1.2.0. The engine class has been made more model-agnostic to improve extensibility. See Engine and ImageSoftmaxEngine for more details. Credit to Dassl.pytorch.
  • [Dec 2019] Our OSNet paper has been updated, with additional experiments (in section B of the supplementary) showing some useful techniques for improving OSNet's performance in practice.
  • [Nov 2019] ImageDataManager can load training data from target datasets by setting load_train_targets=True; the target train loader is then accessible as train_loader_t = datamanager.train_loader_t (see the sketch below). This feature is useful for domain adaptation research.
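
For reference, a minimal sketch of this option, assuming DukeMTMC-reID and Market1501 are already prepared under reid-data:

import torchreid

# build loaders for the labeled source data and, additionally, for the
# unlabeled target training data
datamanager = torchreid.data.ImageDataManager(
    root='reid-data',
    sources='dukemtmcreid',
    targets='market1501',
    height=256,
    width=128,
    load_train_targets=True
)

train_loader = datamanager.train_loader      # source-domain training batches
train_loader_t = datamanager.train_loader_t  # target-domain training batches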

Installation


Make sure conda (https://www.anaconda.com/distribution/) is installed.

# cd to your preferred directory and clone this repo
git clone https://github.com/openvinotoolkit/deep-object-reid.git

# create environment
cd deep-object-reid/
conda create --name torchreid python=3.7
conda activate torchreid

# install dependencies
# make sure `which python` and `which pip` point to the correct path
pip install -r requirements.txt

# install torchreid (no need to re-build it when you modify the source code)
python setup.py develop
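
As a quick sanity check of the installation, you can list the models that ship with the library (a minimal sketch):

import torchreid

# a successful import and a printed list of model names indicate
# the installation works
torchreid.models.show_avai_models()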

Get started


You can use deep-object-reid as a library in your own project, or use this repository to train the proposed models (or your own model) through a configuration file.

Use deep-object-reid in your project

  1. Import torchreid
    import torchreid
  2. Load data manager
datamanager = torchreid.data.ImageDataManager(
    root='reid-data',
    sources='market1501',
    targets='market1501',
    height=256,
    width=128,
    batch_size_train=32,
    batch_size_test=100,
    transforms=['random_flip', 'random_crop']
)
  3. Build model, optimizer and lr_scheduler
model = torchreid.models.build_model(
    name='osnet_ain_x1_0',
    num_classes=datamanager.num_train_pids,
    loss='am_softmax',
    pretrained=True
)

model = model.cuda()

optimizer = torchreid.optim.build_optimizer(
    model,
    optim='adam',
    lr=0.001
)

scheduler = torchreid.optim.build_lr_scheduler(
    optimizer,
    lr_scheduler='single_step',
    stepsize=20
)
  4. Build engine
engine = torchreid.engine.ImageAMSoftmaxEngine(
    datamanager,
    model,
    optimizer=optimizer,
    scheduler=scheduler,
    label_smooth=True
)
  5. Run training and test
engine.run(
    save_dir='log/osnet_ain',
    max_epoch=60,
    eval_freq=10,
    print_freq=10,
    test_only=False
)
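
After training, the same engine can be reused for evaluation only. A minimal sketch, continuing from the steps above and assuming the upstream torchreid.utils.load_pretrained_weights helper; the checkpoint name is just an example:

# load previously trained weights and run the test phase only
torchreid.utils.load_pretrained_weights(model, 'log/osnet_ain/model.pth.tar-60')

engine.run(
    save_dir='log/osnet_ain',
    test_only=True
)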

Use deep-object-reid through a configuration file

Modify one of the config files and run:

python tools/main.py \
--config-file $PATH_TO_CONFIG \
--root $PATH_TO_DATA \
--gpu-num $NUM_GPU

See "tools/main.py" and "scripts/default_config.py" for more details.

Evaluation

Evaluation is automatically performed at the end of training. To run the test again using the trained model, do

python tools/eval.py \
--config-file $PATH_TO_CONFIG \
--root $PATH_TO_DATA \
model.load_weights log/osnet_x1_0_market1501_softmax_cosinelr/model.pth.tar-250 \
test.evaluate True

Cross-domain setting

Suppose you want to train OSNet on DukeMTMC-reID and test its performance on Market1501; you can do

python tools/main.py \
--config-file configs/im_osnet_x1_0_softmax_256x128_amsgrad.yaml \
-s dukemtmcreid \
-t market1501 \
--root $PATH_TO_DATA

Here we test only the cross-domain performance. However, if you also want to test the performance on the source dataset, i.e. DukeMTMC-reID, you can set -t dukemtmcreid market1501, which will evaluate the model on the two datasets separately.
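
The same setup can also be expressed through the Python API, since the targets argument accepts a list (a minimal sketch):

import torchreid

# train on DukeMTMC-reID, evaluate on both the source and Market1501
datamanager = torchreid.data.ImageDataManager(
    root='reid-data',
    sources='dukemtmcreid',
    targets=['dukemtmcreid', 'market1501']
)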

Datasets

Image-reid datasets

Classification dataset

Models

Classification models

ReID-specific models

Face Recognition specific models

Useful links

Citation

If you find this code useful to your research, please cite the following papers.

@article{torchreid,
    title={Torchreid: A Library for Deep Learning Person Re-Identification in Pytorch},
    author={Zhou, Kaiyang and Xiang, Tao},
    journal={arXiv preprint arXiv:1910.10093},
    year={2019}
}

@inproceedings{zhou2019osnet,
    title={Omni-Scale Feature Learning for Person Re-Identification},
    author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
    booktitle={ICCV},
    year={2019}
}

@article{zhou2019learning,
    title={Learning Generalisable Omni-Scale Representations for Person Re-Identification},
    author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
    journal={arXiv preprint arXiv:1910.06827},
    year={2019}
}