VERI-Wild

VERI-Wild: A Large Dataset and a New Method for Vehicle Re-Identification in the Wild

Description of VERI-Wild Dataset

A large-scale vehicle ReID dataset in the wild (VERI-Wild) is captured from a large CCTV surveillance system consisting of 174 cameras over one month (30×24h) under unconstrained scenarios. The cameras are distributed across a large urban district of more than 200 km². YOLO-v2 [2] is used to detect vehicle bounding boxes. The raw vehicle image set contains 12 million vehicle images, and 11 volunteers spent one month cleaning the data. After data cleaning and annotation, 416,314 vehicle images of 40,671 identities are collected. The statistics of VERI-Wild are illustrated in the figure. For privacy reasons, the license plates are masked in the dataset. The distinctive features of VERI-Wild are summarized in the following aspects:

Unconstrained capture conditions in the wild. The VERI-Wild dataset is collected from a real CCTV camera system consisting of 174 surveillance cameras, whose unconstrained image capture conditions pose a variety of challenges.

Complex capture conditions. The 174 surveillance cameras are distributed across an urban district of over 200 km², presenting various backgrounds, resolutions, viewpoints, and occlusions in the wild. In extreme cases, one vehicle appears in more than 40 different cameras, which is challenging for ReID algorithms.

Large time span involving severe illumination and weather changes. VERI-Wild is collected from 125,280 (174×24×30) hours of video. Figure (b) gives the vehicle distributions in four time slots of the day, i.e., morning, noon, afternoon, and evening, across 30 days. VERI-Wild also contains poor weather conditions, such as rain and fog, which are not covered in previous datasets.

Rich Context Information. We provide rich context information such as camera IDs, timestamps, and track relations across cameras, which can facilitate research on behavior analysis in camera networks, such as vehicle behavior modeling, cross-camera tracking, and graph-based retrieval.
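As a rough illustration of how this context information can be consumed, the sketch below loads per-image metadata and groups it into per-camera tracks. The file name `vehicle_info.txt`, the semicolon delimiter, and the column names are assumptions made for this example and may differ from the released annotation files.

```python
# Minimal sketch: load hypothetical per-image context metadata and group it
# into per-camera tracks. File name, delimiter, and column names are assumed
# for illustration; the released annotation layout may differ.
import csv
from collections import defaultdict

def load_metadata(path="vehicle_info.txt"):
    """Yield dicts with image name, vehicle ID, camera ID, and timestamp."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f, delimiter=";")
        for row in reader:
            yield {
                "image": row["image"],
                "vehicle_id": row["vehicle_id"],
                "camera_id": row["camera_id"],
                "timestamp": row["timestamp"],
            }

def tracks_by_vehicle_and_camera(records):
    """Group images into per-camera tracks for each vehicle identity."""
    tracks = defaultdict(list)
    for r in records:
        tracks[(r["vehicle_id"], r["camera_id"])].append(r)
    for key in tracks:
        # Sort each track by timestamp so cross-camera transitions can be studied.
        tracks[key].sort(key=lambda r: r["timestamp"])
    return tracks
```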

Test

Important!

Note that for the VERI-Wild test set, given a query image, you need to remove from the gallery set all images that have both the same camera ID and the same vehicle ID as the query image. These images are not considered when computing mAP and CMC.
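The following is a minimal sketch of that evaluation rule, not an official evaluation script: for each query, gallery entries sharing both the query's vehicle ID and camera ID are dropped before CMC and mAP are computed. The distance matrix and array names are illustrative.

```python
# Minimal sketch (NumPy) of the evaluation rule above; not an official script.
# dist:                (num_query, num_gallery) distance matrix, smaller = more similar
# q_vids/q_cams:       vehicle and camera IDs of the queries (1-D arrays)
# g_vids/g_cams:       vehicle and camera IDs of the gallery (1-D arrays)
import numpy as np

def evaluate(dist, q_vids, q_cams, g_vids, g_cams, topk=5):
    aps, cmcs = [], []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])
        # Drop gallery images with the SAME vehicle ID and SAME camera ID as the query.
        junk = (g_vids[order] == q_vids[i]) & (g_cams[order] == q_cams[i])
        order = order[~junk]
        matches = (g_vids[order] == q_vids[i]).astype(np.int64)
        if matches.sum() == 0:
            continue  # no valid ground truth left for this query
        # CMC: 1 from the first correct rank onward (pad if the gallery is short).
        cmc = np.clip(matches.cumsum(), 0, 1)
        cmc = np.pad(cmc, (0, max(0, topk - len(cmc))), constant_values=cmc[-1])
        cmcs.append(cmc[:topk])
        # Average precision over the filtered ranking.
        precision = matches.cumsum() / np.arange(1, len(matches) + 1)
        aps.append((precision * matches).sum() / matches.sum())
    return float(np.mean(aps)), np.mean(np.stack(cmcs), axis=0)
```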

Citation

@inproceedings{lou2019large,
 title={VERI-Wild: A Large Dataset and a New Method for Vehicle Re-Identification in the Wild},
 author={Lou, Yihang and Bai, Yan and Liu, Jun and Wang, Shiqi and Duan, Ling-Yu},
 booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
 pages = {3235--3243},
 year={2019}
}
@article{bai2021disentangled,
 title={Disentangled Feature Learning Network and a Comprehensive Benchmark for Vehicle Re-Identification},
 author={Bai, Yan and Liu, Jun and Lou, Yihang and Wang, Ce and Duan, Ling-Yu},
 journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
 year={2021}
}

Contact

Yan Bai, Email: yanbai@pku.edu.cn

To encourage related research, we provide the dataset upon request. Please email your full name and affiliation to the contact person (yanbai at pku dot edu dot cn). We ask for your information only to make sure the dataset is used for non-commercial purposes; we will not give it to any third party or publish it publicly anywhere. Due to privacy concerns, we will not provide the license plates. By downloading our dataset, you agree to the terms of access stated in the email.

State-of-the-art Results on the VeRi-Wild Dataset

| Methods | Small mAP | Small Top-1 | Small Top-5 | Medium mAP | Medium Top-1 | Medium Top-5 | Large mAP | Large Top-1 | Large Top-5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GoogleNet [1] | 24.27 | 57.16 | 75.13 | 24.15 | 53.16 | 71.1 | 21.53 | 44.61 | 63.55 |
| FDA-Net (VGGM) [2] | 35.11 | 64.03 | 82.80 | 29.80 | 57.82 | 78.34 | 22.78 | 49.43 | 70.48 |
| MLSL [3] | 46.32 | - | - | 42.37 | - | - | 36.61 | - | - |
| Triplet (Resnet50) | 58.43 | 65.76 | 86.98 | 49.72 | 57.76 | 80.86 | 38.57 | 47.65 | 71.66 |
| FDA-Net (Resnet50) [2] | 61.57 | 73.62 | 91.23 | 52.69 | 64.29 | 85.39 | 45.78 | 58.76 | 80.97 |
| AAVER (Resnet50) [4] | 62.23 | 75.80 | 92.70 | 53.66 | 68.24 | 88.88 | 41.68 | 58.69 | 81.59 |
| DFLNet (Resnet50) [5] | 68.21 | 80.68 | 93.24 | 60.07 | 70.67 | 89.25 | 49.02 | 61.60 | 82.73 |
| BS (MobileNet) [6] | 70.54 | 84.17 | 95.30 | 62.83 | 78.22 | 93.06 | 51.63 | 69.99 | 88.45 |
| UMTS (Resnet50) [7] | 72.7 | 84.5 | - | 66.1 | 79.3 | - | 54.2 | 72.8 | - |
| Strong baseline (Resnet50) [8] | 76.61 | 90.83 | 97.29 | 70.11 | 87.45 | 95.24 | 61.3 | 82.58 | 92.73 |
| HPGN (Resnet50+PGN) [9] | 80.42 | 91.37 | - | 75.17 | 88.21 | - | 65.04 | 82.68 | - |
| GLAMOR (Resnet50+PGN) [10] | 77.15 | 92.13 | 97.43 | - | - | - | - | - | - |
| PVEN (Resnet50) [12] | 79.8 | 94.01 | 98.06 | 73.9 | 92.03 | 97.15 | 66.2 | 88.62 | 95.31 |
| SAVER (Resnet50) [11] | 80.9 | 93.78 | 97.93 | 75.3 | 92.7 | 97.48 | 67.7 | 89.5 | 95.8 |
| DFNet (Resnet50) [14] | 83.09 | 94.79 | 98.05 | 77.27 | 93.22 | 97.46 | 69.85 | 89.38 | 96.03 |

State-of-the-art Results on the VeRi-Wild 2.0 Dataset

| Methods | Test Set All mAP | Test Set All Top-1 | Test Set All Top-5 | Test Set A mAP | Test Set A Top-1 | Test Set A Top-5 | Test Set B mAP | Test Set B Top-1 | Test Set B Top-5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Strong Baseline (Resnet50) [8] | 34.71 | 54.37 | 63.99 | 32.75 | 40.12 | 52.18 | 42.25 | 82.72 | 90.67 |
| GSTE (Resnet50) (w/ bag-of-tricks) [13] | 32.57 | 59.25 | 64.48 | 33.01 | 47.54 | 50.81 | 41.82 | 86.08 | 91.43 |
| FDA-Net (Resnet50) (w/ bag-of-tricks) [2] | 34.21 | 57.32 | 64.90 | 34.63 | 45.53 | 52.77 | 3.93 | 84.78 | 92.47 |
| EVER (Resnet50) [41] | 36.8 | 59.1 | 67.6 | 36.8 | 48.7 | 57.3 | 45.4 | 86.1 | 94.3 |
| PVEN (Resnet50) [12] | 37.15 | 61.19 | 68.63 | 38.77 | 51.28 | 59.32 | 45.48 | 88.05 | 94.35 |
| SAVER (Resnet50) [11] | 38.0 | 62.1 | 69.50 | 39.2 | 52.3 | 60.2 | 45.1 | 88.1 | 94.1 |
| DFNet (Resnet50) [14] | 39.84 | 62.21 | 68.90 | 40.39 | 51.68 | 60.51 | 46.13 | 88.56 | 94.17 |

[1] Yang, L., Luo, P., Change Loy, C., Tang, X.: A large-scale car dataset for fine-grained categorization and verification. In: IEEE Conference on Computer Vision and Pattern Recognition (2015)

[2] Lou, Y., Bai, Y., Liu, J., Wang, S., Duan, L.: VERI-Wild: A large dataset and a new method for vehicle re-identification in the wild. In: IEEE Conference on Computer Vision and Pattern Recognition (2019)

[3] Alfasly, S., Hu, Y., Li, H., Liang, T., Jin, X., Liu, B., Zhao, Q.: Multi-label-based similarity learning for vehicle re-identification. IEEE Access 7 (2019)

[4] Khorramshahi, P., Kumar, A., Peri, N., et al.: A dual-path model with adaptive attention for vehicle re-identification. In: IEEE International Conference on Computer Vision (2019)

[5] Bai, Y., Lou, Y., Dai, Y., et al.: Disentangled feature learning network for vehicle re-identification. In: IJCAI (2020)

[6] Kumar, R., Weill, E., et al.: Vehicle re-identification: an efficient baseline using triplet embedding. In: IJCNN (2019)

[7] Jin, X., Lan, C., Zeng, W., Chen, Z.: Uncertainty-aware multi-shot knowledge distillation for image-based object re-identification. In: AAAI (2020)

[8] Luo, H., Gu, Y., et al.: Bag of tricks and a strong baseline for deep person re-identification. In: CVPR Workshops (2019)

[9] Shen, F., Zhu, J., et al.: Exploring spatial significance via hybrid pyramidal graph network for vehicle re-identification. arXiv preprint arXiv:2005.14684

[10] Suprem, A., Pu, C.: Looking GLAMORous: Vehicle re-id in heterogeneous cameras networks with global and local attention. arXiv preprint arXiv:2002.02256

[11] Khorramshahi, P., Peri, N., Chen, J.-C., Chellappa, R.: The devil is in the details: Self-supervised attention for vehicle re-identification. In: ECCV (2020)

[12] Meng, D., et al.: Parsing-based view-aware embedding network for vehicle re-identification. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (2020)

[13] Bai, Y., Lou, Y., Gao, F., Wang, S., Wu, Y., Duan, L.: Group-sensitive triplet embedding for vehicle re-identification. IEEE Transactions on Multimedia (2018)

[14] Bai, Y., Liu, J., Lou, Y., Wang, C., Duan, L.: Disentangled feature learning network and a comprehensive benchmark for vehicle re-identification. IEEE Transactions on Pattern Analysis and Machine Intelligence (2021)
