This repository contains the implementation of our paper "Exploring the Potential of Synthetic Data for Pedestrian Analysis", developed for the "Computer Vision and Cognitive System" course @UNIMORE.
N.B.: installation is only available in win64 environments.
Create and activate an environment with all required packages:
conda create --name pedestrian_detector --file deps/win/conda_requirements.txt
conda activate pedestrian_detector
pip install -r deps/win/pip_requirements.txt
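As a quick sanity check that the environment works (assuming PyTorch is among the pinned packages — adjust if your requirements differ):

```bash
# Print the installed torch version and whether CUDA is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```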
- Download MOTSynth_1.
wget -P ./storage/MOTSynth https://motchallenge.net/data/MOTSynth_1.zip
unzip ./storage/MOTSynth/MOTSynth_1.zip -d ./storage/MOTSynth/
rm ./storage/MOTSynth/MOTSynth_1.zip
- Delete videos 123 through 256 (a sketch is shown below)
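A minimal sketch of this cleanup, assuming the archive extracts to `./storage/MOTSynth/MOTSynth_1/` with videos named by sequence id (e.g. `123.mp4` — adjust the path and extension to what the archive actually contains):

```bash
# Hypothetical cleanup: drop sequences 123 through 256 before frame extraction
for i in $(seq 123 256); do
  rm -f "./storage/MOTSynth/MOTSynth_1/${i}.mp4"
done
```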
- Extract frames from the videos
python tools/anns/to_frames.py --motsynth-root ./storage/MOTSynth
# now you can delete other videos
rm -r ./storage/MOTSynth/MOTSynth_1
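To confirm the extraction worked, you can count the extracted sequences (assuming `to_frames.py` writes them under `./storage/MOTSynth/frames`, as in the directory tree below):

```bash
# Number of sequence directories produced by the frame extraction
ls ./storage/MOTSynth/frames | wc -l
```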
- Download and extract annotations
wget -P ./storage/MOTSynth https://motchallenge.net/data/MOTSynth_coco_annotations.zip
unzip ./storage/MOTSynth/MOTSynth_coco_annotations.zip -d ./storage/MOTSynth/
rm ./storage/MOTSynth/MOTSynth_coco_annotations.zip
- Prepare combined annotations for MOTSynth from the original COCO annotations
python tools/anns/combine_anns.py --motsynth-path ./storage/MOTSynth --split motsynth_split3
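As a sanity check, the combined file should be a standard COCO-style JSON; a minimal peek, assuming the usual `images`/`annotations` keys:

```bash
python -c "import json; d = json.load(open('./storage/MOTSynth/comb_annotations/motsynth_split3.json')); print(len(d['images']), 'images,', len(d['annotations']), 'annotations')"
```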
- Prepare reid images
python tools/anns/store_reid_imgs.py --ann-path ./storage/MOTSynth/comb_annotations/motsynth_split3.json --frames-path ./storage/MOTSynth
- Prepare the motsynth output dir for training results
mkdir ./storage/motsynth_output
- Download MOT17
wget -P ./storage/MOTChallenge https://motchallenge.net/data/MOT17.zip
unzip ./storage/MOTChallenge/MOT17.zip -d ./storage/MOTChallenge
rm ./storage/MOTChallenge/MOT17.zip
- Generate COCO format annotations
python ./tools/anns/motcha_to_coco.py --data-root storage/MOTChallenge --dataset MOT17 --split train
- Generate reid images
python tools/anns/store_reid_imgs.py --ann-path ./storage/MOTChallenge/motcha_coco_annotations/MOT17-train.json
You can find all pretrained models for detection and reid here (download them and place the .pth files in the storage/pretrained_models directory, as sketched below).
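A minimal sketch of this step, assuming the downloaded .pth files sit in the current directory (the filenames are placeholders):

```bash
mkdir -p storage/pretrained_models
# Move whatever checkpoints you downloaded into place
mv ./*.pth storage/pretrained_models/
```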
After running these steps, your storage directory should look like this:
storage
├── MOTChallenge
│   ├── MOT17
│   └── motcha_coco_annotations
├── MOTSynth
│   ├── annotations
│   ├── comb_annotations
│   └── frames
├── motsynth_output
└── pretrained_models
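A quick check that the expected directories are in place:

```bash
ls storage storage/MOTSynth storage/MOTChallenge
```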
- Detection: see docs/DETECTOR.md
- Tracking: see docs/TRACKER.md
- Retrieval/Re-id: see docs/REID.md
- Distance violation detector: see docs/DISTANCE_VIOLATION_DETECTOR.md
Some images, videos, and plots are available here.
- Sirri Matteo
- email: 254179@studenti.unimore.it
- Manghi Ilaria
- email: 244770@studenti.unimore.it
- Riccardo Benini
- email: 244321@studenti.unimore.it
This codebase is built on top of several great works. Our detection and ReID code is based on the MOTSynth_baseline. For MOT, we use the BYTE algorithm. We thank all the authors of these codebases for their amazing work.