As an example of usage, please download a small dataset from here. Before running training, you first need to create LMDB files. The annotations should be stored in the <DATA_DIR>/annotation_train_cvt.json and <DATA_DIR>/annotation_val_cvt.json files.
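The exact JSON schema is defined by the toolbox scripts; purely as an illustration, and assuming a COCO-style layout with top-level "images" and "annotations" keys (an assumption, not the documented format), a quick sanity check of the annotation files might look like:

```python
import json

def check_annotation(path):
    """Hypothetical sanity check. The real schema is defined by the
    toolbox; a COCO-style layout with 'images' and 'annotations'
    keys is assumed here for illustration only."""
    with open(path) as f:
        data = json.load(f)
    for key in ("images", "annotations"):
        if key not in data:
            raise ValueError("%s: missing '%s' section" % (path, key))
    return len(data["images"]), len(data["annotations"])

# Example: check_annotation("/data/annotation_train_cvt.json")
```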
To create the LMDB files, go to the '$CAFFE_ROOT/python/lmdb_utils/' directory and run the following scripts:
- Run docker in an interactive session with the directory containing the WIDER dataset mounted:
nvidia-docker run --rm -it --user=$(id -u) -v <DATA_DIR>:/data ttcf bash
- Convert the original annotation to the Pascal VOC format for the training subset. This makes the annotation compatible with the Caffe SSD tools required for LMDB data generation.
python3 $CAFFE_ROOT/python/lmdb_utils/convert_to_voc_format.py /data/annotation_train_cvt.json /data/train.txt
- Run the bash script to create the LMDB:
bash $CAFFE_ROOT/python/lmdb_utils/create_cr_lmdb.sh
- Close the docker session with Ctrl+D and check that the LMDB files are present in <DATA_DIR>/lmdb.
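The actual conversion in the steps above is done by convert_to_voc_format.py. Purely as an illustration of what a Pascal VOC annotation record looks like (the input field names here are assumptions, not the script's real interface), a minimal sketch:

```python
import xml.etree.ElementTree as ET

def box_to_voc_xml(filename, width, height, objects):
    """Build a minimal Pascal VOC annotation string. Sketch only;
    the real conversion is performed by convert_to_voc_format.py."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    # One <object> entry per (class name, box) pair.
    for name, (xmin, ymin, xmax, ymax) in objects:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = name
        bb = ET.SubElement(obj, "bndbox")
        for tag, val in zip(("xmin", "ymin", "xmax", "ymax"),
                            (xmin, ymin, xmax, ymax)):
            ET.SubElement(bb, tag).text = str(val)
    return ET.tostring(root, encoding="unicode")

# box_to_voc_xml("000001.jpg", 640, 480,
#                [("pedestrian", (10, 20, 50, 120))])
```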
At the next stage, train the Person-vehicle-bike crossroad (four-class) detection model. To do this, follow the next steps:
cd ./models
python3 train.py --model crossroad \
--weights person-vehicle-bike-detection-crossroad-0078.caffemodel \
--data_dir <DATA_DIR> \
    --work_dir <WORK_DIR> \
--gpu <GPU_ID>
To evaluate the quality of the trained Person-vehicle-bike crossroad detection model on your test data, you can use the provided scripts:
python3 evaluate.py --type cr \
--dir <WORK_DIR>/crossroad/<EXPERIMENT_NUM> \
--data_dir <DATA_DIR> \
--annotation annotation_val_cvt.json \
--iter <ITERATION_NUM>
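Under the hood, detection metrics match predicted boxes to ground truth by intersection-over-union (IoU). As an illustration of that matching criterion (not the actual logic of evaluate.py), a minimal IoU function:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as
    (xmin, ymin, xmax, ymax). Illustration only; evaluate.py
    implements the full evaluation pipeline."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two unit-overlap boxes of area 4 each: IoU = 1 / 7
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))
```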
To convert the trained model for deployment, run:
python3 mo_convert.py --type cr \
--name crossroad \
--dir <WORK_DIR>/crossroad/<EXPERIMENT_NUM> \
--iter <ITERATION_NUM> \
--data_type FP32
You can use this demo to see how the resulting model performs.