
DBMNet

The PyTorch implementation of "Dual-Branch Meta-Learning Network with Distribution Alignment for Face Anti-Spoofing" (IEEE Transactions on Information Forensics and Security).

The motivation of the proposed DBMNet method:

The network architecture of the proposed DBMNet method:

An overview of the proposed DBMNet method:

Configuration Environment

  • Python 3.6
  • PyTorch 0.4
  • torchvision 0.2
  • CUDA 8.0

Pre-training

Dataset.

Download the OULU-NPU, CASIA-FASD, Idiap Replay-Attack, MSU-MFSD, 3DMAD, and HKBU-MARs datasets.

Face Detection and Face Alignment.

The MTCNN algorithm is used for face detection and face alignment. All detected faces are normalized to 256×256×3, and only the RGB channels are used for training. The exact code that we used can be found here.
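As a rough illustration only, a comparable crop can be produced with the third-party facenet-pytorch package; this is our assumption, not the script the authors used:

```python
# Illustrative sketch only -- not the preprocessing script used by the authors.
# Assumes the third-party facenet-pytorch package (pip install facenet-pytorch).
from facenet_pytorch import MTCNN
from PIL import Image

# Detect, align, and crop one face per image, resized to 256x256 (RGB).
mtcnn = MTCNN(image_size=256, margin=0, post_process=False)

img = Image.open('frame_0001.jpg').convert('RGB')  # hypothetical frame path
face = mtcnn(img)  # 3x256x256 tensor, or None if no face is found
if face is not None:
    print(face.shape)  # torch.Size([3, 256, 256])
```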

Put the processed frames under the path $root/mtcnn_processed_data/.

Specifically, we use the MTCNN algorithm to process every frame of each video and then use the sample_frames function in utils/utils.py to sample frames during training; a rough sketch of such sampling follows.
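The repository's sample_frames in utils/utils.py is the authoritative version; the sketch below only illustrates the general idea of drawing a fixed number of frames spread across a processed video (the function body is entirely our assumption):

```python
# Illustrative sketch only -- see sample_frames in utils/utils.py for the
# repository's actual logic; everything below is our assumption.
import os
import random

def sample_frames_sketch(video_dir, num_frames):
    """Pick `num_frames` frames spread roughly uniformly across one video."""
    frames = sorted(os.listdir(video_dir))
    if len(frames) <= num_frames:
        return [os.path.join(video_dir, f) for f in frames]
    # Split the frame list into equal segments and draw one frame per segment.
    step = len(frames) / num_frames
    picks = [frames[int(i * step + random.random() * step)]
             for i in range(num_frames)]
    return [os.path.join(video_dir, f) for f in picks]
```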

Label Generation.

Take the DBMNet_ICM_to_O experiment as an example. Move to the folder $root/cross_database_testing/DBMNet_ICM_to_O/data_label and generate the data label list:

```
python generate_label.py
```

Training

Move to the folder $root/cross_database_testing/DBMNet_ICM_to_O and run:

```
python train_DBMNet.py
```

The file config.py contains the hyper-parameters used during training.
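As a purely hypothetical illustration of the kind of settings such a file holds (none of the names or values below come from the repository or the paper), a config.py might look like:

```python
# Hypothetical sketch -- the repository's config.py is authoritative;
# every name and value here is an assumption for illustration only.
class DefaultConfig:
    gpu = '0'                    # which GPU to use
    batch_size = 10              # per-domain batch size (assumed)
    meta_learning_rate = 1e-3    # outer-loop learning rate (assumed)
    inner_learning_rate = 1e-3   # inner-loop (meta-train) rate (assumed)
    max_iter = 4000              # number of training iterations (assumed)
    checkpoint_path = './checkpoint/'

config = DefaultConfig()
```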

Testing

Move to the folder $root/cross_database_testing/DBMNet_ICM_to_O and run:

```
python dg_test.py
```

Supplementary

In this section, we provide a more detailed supplement to the experiments in our original paper.

Experimental Setting

Since most methods focusing on cross-database face anti-spoofing do not specify exactly how the threshold is tuned and how the HTER is calculated, we provide two sets of comparisons: the idealized comparisons (results not shown in our paper) and the more realistic comparisons (the results shown in our paper).

For the idealized setting, we obtain the threshold by computing the EER directly on the target testing set and then compute the HTER based on that threshold. For the realistic setting, we first compute the EER on the source validation set to obtain the threshold, then use that threshold to compute the FAR and FRR on the target testing set; finally, the HTER is calculated as the mean of the FAR and FRR.
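As a minimal sketch of the realistic protocol (our own illustration with scikit-learn and toy scores; the variable and function names are not from the repository):

```python
# Minimal sketch of the "more realistic" evaluation protocol described above.
# Assumes scikit-learn; labels use 1 = live, 0 = attack, and higher scores
# mean "more likely live".
import numpy as np
from sklearn.metrics import roc_curve

def eer_threshold(labels, scores):
    """Threshold at the Equal Error Rate point (FAR == FRR) on this set."""
    far, tpr, thresholds = roc_curve(labels, scores)
    frr = 1.0 - tpr
    idx = np.nanargmin(np.abs(far - frr))  # point where FAR is closest to FRR
    return thresholds[idx]

def hter(labels, scores, threshold):
    """HTER = (FAR + FRR) / 2 at a fixed threshold."""
    labels = np.asarray(labels).astype(bool)
    preds = np.asarray(scores) >= threshold
    far = preds[~labels].mean()    # attacks accepted as live
    frr = (~preds[labels]).mean()  # live faces rejected
    return 0.5 * (far + frr)

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    # Toy stand-ins for detector scores on the two sets.
    val_labels = rng.integers(0, 2, 200)
    val_scores = val_labels + rng.normal(0.0, 0.6, 200)
    test_labels = rng.integers(0, 2, 200)
    test_scores = test_labels + rng.normal(0.0, 0.8, 200)

    thr = eer_threshold(val_labels, val_scores)  # source validation set
    print('HTER on target set: %.4f' % hter(test_labels, test_scores, thr))
```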

| Testing Task | Training Set | Validation Set | Testing Set |
| --- | --- | --- | --- |
| O&C&I to M (idealized comparisons) | The original training and testing sets of the OULU, CASIA, and Replay databases | None | The original training and testing sets of the MSU database |
| O&C&I to M (more realistic comparisons) | The original training and testing sets of the OULU and Replay databases, plus the original training set of the CASIA database | The original validation sets of the OULU and Replay databases, plus the original testing set of the CASIA database | The original training and testing sets of the MSU database |

Specifically, in the idealized comparisons we reproduce the state-of-the-art RFMetaFAS and SSDG methods, tuning the threshold directly on the target testing set; the first row of the table above lists the training, validation, and testing sets of the O&C&I to M testing task under this setting. In the more realistic comparisons, we re-run the same methods, tuning the threshold on the source validation set; the second row of the table above lists the corresponding sets for the O&C&I to M testing task.

In the subsections below, we provide the full comparison results for both the idealized setting and the more realistic setting.

Cross Database Testing.

The comparison results of the idealized setting:

| Method | O&C&I to M HTER(%) / AUC(%) | O&M&I to C HTER(%) / AUC(%) | O&C&M to I HTER(%) / AUC(%) | I&C&M to O HTER(%) / AUC(%) |
| --- | --- | --- | --- | --- |
| RFMetaFAS | 8.57 / 95.91 | 11.44 / 94.47 | 16.50 / 91.72 | 17.78 / 89.89 |
| SSDG-R | 7.38 / 97.17 | 10.44 / 95.94 | 11.71 / 96.59 | 15.61 / 91.54 |
| DBMNet | 4.52 / 98.78 | 8.67 / 96.52 | 10.00 / 96.28 | 11.42 / 95.14 |

The comparison results of the more realistic setting:

| Method | O&C&I to M HTER(%) / AUC(%) | O&M&I to C HTER(%) / AUC(%) | O&C&M to I HTER(%) / AUC(%) | I&C&M to O HTER(%) / AUC(%) |
| --- | --- | --- | --- | --- |
| RFMetaFAS | 10.24 / 95.20 | 18.67 / 93.69 | 22.64 / 75.43 | 19.39 / 88.75 |
| SSDG-R | 11.19 / 93.69 | 14.78 / 94.74 | 16.64 / 91.93 | 24.29 / 88.72 |
| DBMNet | 7.86 / 96.54 | 14.00 / 94.58 | 16.42 / 90.88 | 17.59 / 90.92 |

Cross Database Testing of Limited Source Domains.

The comparison results of the idealized setting:

| Method | M&I to C HTER(%) / AUC(%) | M&I to O HTER(%) / AUC(%) |
| --- | --- | --- |
| RFMetaFAS | 29.33 / 74.03 | 33.19 / 74.63 |
| SSDG-R | 18.67 / 86.67 | 23.19 / 84.73 |
| DBMNet | 16.78 / 89.60 | 20.56 / 88.33 |

The comparison results of the more realistic setting:

| Method | M&I to C HTER(%) / AUC(%) | M&I to O HTER(%) / AUC(%) |
| --- | --- | --- |
| RFMetaFAS | 22.56 / 84.37 | 35.73 / 77.65 |
| SSDG-R | 18.89 / 91.35 | 26.44 / 78.14 |
| DBMNet | 16.89 / 90.65 | 23.73 / 84.33 |

Cross Database Cross Type Testing.

The comparison results of the idealized setting:

| Method | O&C&I&M to D HTER(%) / AUC(%) | O&C&I&M to H HTER(%) / AUC(%) |
| --- | --- | --- |
| RFMetaFAS | 5.88 / 98.94 | 25.38 / 65.69 |
| SSDG-R | 5.88 / 98.54 | 24.77 / 66.50 |
| DBMNet | 0.88 / 99.97 | 10.46 / 96.74 |

The comparison results of the more realistic setting:

| Method | O&C&I&M to D HTER(%) / AUC(%) | O&C&I&M to H HTER(%) / AUC(%) |
| --- | --- | --- |
| RFMetaFAS | 5.88 / 98.35 | 41.67 / 81.54 |
| SSDG-R | 6.77 / 98.42 | 32.50 / 73.68 |
| DBMNet | 0.59 / 99.50 | 20.83 / 92.26 |

Intra Database Testing.

The comparison results of the idealized setting:

| Method | O&C&I&M HTER(%) / AUC(%) |
| --- | --- |
| RFMetaFAS | 2.82 / 99.45 |
| SSDG-R | 0.89 / 99.90 |
| DBMNet | 0.08 / 99.99 |

The comparison results of the more realistic setting:

| Method | O&C&I&M HTER(%) / AUC(%) |
| --- | --- |
| RFMetaFAS | 3.00 / 99.62 |
| SSDG-R | 1.57 / 99.78 |
| DBMNet | 1.48 / 99.83 |

Conclusion.

As can be seen, the results of RFMetaFAS, SSDG, and our DBMNet all degrade when the more realistic training and evaluation process is performed. The possible reasons lie in two aspects: first, the threshold is tuned on the source validation set instead of the target testing set; second, there are fewer training samples than before, as shown in the first table. Consequently, there is still a long way to go in cross-database face anti-spoofing research toward better generalization.

Note that we release the code and report the comparison results in our paper based only on the more realistic setting.

Citation

Please cite our paper if the code is helpful to your research.

```
@ARTICLE{9646915,
  author={Jia, Yunpei and Zhang, Jie and Shan, Shiguang},
  journal={IEEE Transactions on Information Forensics and Security},
  title={Dual-Branch Meta-Learning Network With Distribution Alignment for Face Anti-Spoofing},
  year={2022},
  volume={17},
  pages={138-151},
  doi={10.1109/TIFS.2021.3134869}}
```
