Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers

A PyTorch implementation of the "Tracking-by-Animation" (TBA) algorithm published at CVPR 2019.

NOTES:

  • The official DukeMTMC website was shut down in May 2019; it may become available again in the future.

1. Results

1.1 MNIST-MOT

a) Qualitative results


View on YouTube
Left: input. Middle: reconstruction. Right: memory (Row 1), attention (Row 2), and output (Row 3).

b) Quantitative results

| Configuration | IDF1↑ | IDP↑ | IDR↑ | MOTA↑ | MOTP↑ | FAF↓ | MT↑ | ML↓ | FP↓ | FN↓ | IDS↓ | Frag↓ |
|---------------|-------|------|------|-------|-------|------|-----|-----|-----|-----|------|-------|
| TBA           | 99.6  | 99.6 | 99.6 | 99.5  | 78.4  | 0    | 978 | 0   | 49  | 49  | 22   | 7     |

1.2 Sprites-MOT

a) Qualitative results


View on YouTube
Left: input. Middle: reconstruction. Right: memory (Row 1), attention (Row 2), and output (Row 3).

b) Quantitative results

| Configuration | IDF1↑ | IDP↑ | IDR↑ | MOTA↑ | MOTP↑ | FAF↓ | MT↑ | ML↓ | FP↓ | FN↓ | IDS↓ | Frag↓ |
|---------------|-------|------|------|-------|-------|------|-----|-----|-----|-----|------|-------|
| TBA           | 99.2  | 99.3 | 99.2 | 99.2  | 79.1  | 0.01 | 985 | 1   | 60  | 80  | 30   | 22    |

1.3 DukeMTMC

a) Qualitative results


View on YouTube
Rows 1 and 4: input. Rows 2 and 5: reconstruction. Rows 3 and 6: output.

b) Quantitative results

| Configuration | IDF1↑ | IDP↑ | IDR↑ | MOTA↑ | MOTP↑ | FAF↓ | MT↑   | ML↓ | FP↓    | FN↓     | IDS↓ | Frag↓ |
|---------------|-------|------|------|-------|-------|------|-------|-----|--------|---------|------|-------|
| TBA           | 82.4  | 86.1 | 79.0 | 79.6  | 80.4  | 0.09 | 1,026 | 46  | 64,002 | 151,483 | 875  | 1,481 |

The DukeMTMC quantitative results are also hosted at https://motchallenge.net/results/DukeMTMCT, where our TBA tracker is listed as 'MOT_TBA'.
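
The column abbreviations above follow the usual MOTChallenge conventions (↑ means higher is better, ↓ lower is better). Purely as a reference, and not as code from this repository, the small sketch below shows the textbook formulas behind the two headline scores, MOTA and IDF1, in terms of the raw error counts:

# Standard CLEAR MOT / identity-metric formulas, shown only to clarify the
# table columns above; this is not code from this repository.

def mota(fp, fn, ids, num_gt_boxes):
    # MOTA: 1 minus the total error rate over all ground-truth boxes
    # (false positives + misses + identity switches).
    return 1.0 - (fp + fn + ids) / num_gt_boxes

def idf1(idtp, idfp, idfn):
    # IDF1: F1 score computed over identity-consistent matches.
    return 2.0 * idtp / (2.0 * idtp + idfp + idfn)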

2. Requirements

  • Python 3.7
  • PyTorch 1.2/1.3/1.4
  • py-motmetrics (to evaluate tracking performance; it can be installed with pip install motmetrics)
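
Before running anything, a quick environment check such as the illustrative sketch below can confirm that the versions listed above are available:

# Illustrative environment check; the exact versions required are the ones
# listed in the requirements above.
import sys
import torch
import motmetrics  # importing it simply verifies that py-motmetrics is installed

print("Python :", sys.version.split()[0])      # expect 3.7.x
print("PyTorch:", torch.__version__)           # expect 1.2 / 1.3 / 1.4
print("CUDA   :", torch.cuda.is_available())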

3. Usage

3.1 Generate training data

First, enter the project root directory: cd path/to/tba.

For mnist and sprite:

python scripts/gen_mnist.py     # for mnist
python scripts/gen_sprite.py    # for sprite
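
To verify that generation succeeded, one hypothetical sanity check is to load a generated .pt file and inspect it. The data/mnist/pt directory below matches the path used for evaluation in Section 3.4 d); the *.pt glob and the content layout are assumptions, not guarantees about the scripts' output format.

# Hypothetical sanity check for the generated training data.
import glob
import torch

files = sorted(glob.glob("data/mnist/pt/*.pt"))
print(f"found {len(files)} .pt files")
if files:
    sample = torch.load(files[0], map_location="cpu")
    if torch.is_tensor(sample):
        print("first file:", files[0], "shape:", tuple(sample.shape))
    else:
        print("first file:", files[0], "type:", type(sample))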

For duke:

bash scripts/mts2jpg.sh 1                     # convert .mts files to .jpg files, please run over all cameras by setting the last argument to 1, 2, ..., 8
./scripts/build_imbs.sh                       # build imbs for background extraction
cd imbs/build
./imbs -c 1                                   # run imbs, please run over all cameras by setting c = 1, 2, ..., 8
cd ../..
python scripts/gen_duke_bb.py --c 1           # generate bounding box masks, please run over all cameras by setting c = 1, 2, ..., 8
python scripts/gen_duke_bb_bg.py --c 1        # refine background images, please run over all cameras by setting c = 1, 2, ..., 8
python scripts/gen_duke_roi.py                # generate roi masks
python scripts/gen_duke_processed.py --c 1    # resize images, please run over all cameras by setting c = 1, 2, ..., 8
python scripts/gen_duke.py                    # generate .pt files for training
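
Running the camera-indexed scripts eight times by hand is tedious, so the sketch below loops them with subprocess while preserving the order shown above. It is only a convenience sketch: it covers the Python steps that take a --c argument plus the two one-off scripts, assumes it is executed from the project root, and still requires the mts2jpg.sh and imbs steps to have been run for every camera beforehand.

# Convenience sketch: repeat the camera-indexed preprocessing for cameras 1..8.
import subprocess

def run(*cmd):
    # Thin wrapper that aborts on the first failing step.
    subprocess.run(list(cmd), check=True)

cams = [str(c) for c in range(1, 9)]

for c in cams:
    run("python", "scripts/gen_duke_bb.py", "--c", c)          # bounding box masks
for c in cams:
    run("python", "scripts/gen_duke_bb_bg.py", "--c", c)       # refined background images
run("python", "scripts/gen_duke_roi.py")                       # roi masks (once)
for c in cams:
    run("python", "scripts/gen_duke_processed.py", "--c", c)   # resized images
run("python", "scripts/gen_duke.py")                           # .pt files for training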

3.2 Train the model

python run.py --task mnist     # for mnist
python run.py --task sprite    # for sprite
python run.py --task duke      # for duke

Alternatively, you can skip this stage by using our pre-trained models (under the result/ directory).

3.3 Show training curves

python scripts/show_curve.py --task mnist     # for mnist
python scripts/show_curve.py --task sprite    # for sprite
python scripts/show_curve.py --task duke      # for duke

3.4 Evaluate tracking performance

a) Generate test data

python scripts/gen_mnist.py --metric 1         # for mnist
python scripts/gen_sprite.py --metric 1        # for sprite
python scripts/gen_duke.py --metric 1 --c 1    # for duke, please run over all cameras by setting c = 1, 2, ..., 8

b) Generate tracking results

python run.py --init sp_latest.pt --metric 1 --task mnist                     # for mnist
python run.py --init sp_latest.pt --metric 1 --task sprite                    # for sprite
python run.py --init sp_latest.pt --metric 1 --task duke --subtask camera1    # for duke, please run all subtasks from camera1 to camera8

c) Convert the results into .txt

python scripts/get_metric_txt.py --task mnist                     # for mnist
python scripts/get_metric_txt.py --task sprite                    # for sprite
python scripts/get_metric_txt.py --task duke --subtask camera1    # for duke, please run all subtasks from camera1 to camera8

d) Evaluate tracking performance

python -m motmetrics.apps.eval_motchallenge data/mnist/pt result/mnist/tba/default/metric --solver lap      # for mnist
python -m motmetrics.apps.eval_motchallenge data/sprite/pt result/sprite/tba/default/metric --solver lap    # for sprite
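
If you prefer to compute the metrics from Python instead of the command-line app, py-motmetrics also exposes an accumulator API. The sketch below is a generic illustration of that API with toy inputs; it is not tied to the TBA result file format, which is evaluated with the eval_motchallenge command above.

# Generic py-motmetrics usage with toy data.
import numpy as np
import motmetrics as mm

acc = mm.MOTAccumulator(auto_id=True)

# One toy frame: ground-truth objects 1 and 2, hypotheses 1 and 3,
# matched via squared-distance gating (NaN entries cannot be matched).
dists = mm.distances.norm2squared_matrix(
    np.array([[0.0, 0.0], [1.0, 1.0]]),   # ground-truth positions
    np.array([[0.1, 0.0], [5.0, 5.0]]),   # hypothesis positions
    max_d2=1.0,
)
acc.update([1, 2], [1, 3], dists)

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=["mota", "motp", "idf1", "num_switches"], name="toy")
print(mm.io.render_summary(summary, namemap=mm.io.motchallenge_metric_names))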

To evaluate duke, please upload the file duke.txt (under result/duke/tba/default/metric/) to https://motchallenge.net.
