Preserving Fairness Generalization in Deepfake Detection

Li Lin, Xinan He, Yan Ju, Xin Wang, Feng Ding, and Shu Hu


This repository is the official implementation of our paper "Preserving Fairness Generalization in Deepfake Detection", which has been accepted by CVPR 2024.

1. Installation

You can run the following script to configure the necessary environment:

cd Fairness-Generalization
conda create -n FairnessDeepfake python=3.9.0
conda activate FairnessDeepfake
pip install -r requirements.txt

2. Dataset Preparation

We share the FF++, Celeb-DF, DFD, and DFDC datasets with demographic annotations from the paper, which can be downloaded through this link.

You can also obtain these four re-annotated datasets with prediction uncertainty scores through our AI-Face-FairnessBench.

Alternatively, you can download these datasets from their official websites and process them by following the steps below:

  • Download the FF++, Celeb-DF, DFD, and DFDC datasets.

  • Download the annotations for these four datasets according to the paper and its code, then extract the demographic information of all images in each dataset.

  • Extract, align, and crop faces using DLib, and save them to /path/to/cropped_images/

  • Split the cropped images in each dataset into train/val/test sets with a 60%/20%/20% ratio and no identity overlap.

  • Generate faketrain.csv, realtrain.csv, fakeval.csv, and realval.csv according to the following format (a short CSV-writing sketch follows this list):

      |- faketrain.csv
      	|_ img_path,label,ismale,isasian,iswhite,isblack,intersec_label,spe_label
      		/path/to/cropped_images/imgxx.png, 1(fake), 1(male)/-1(not male), 1(asian)/-1(not asian), 1(white)/-1(not white), 1(black)/-1(not black), 0(male-asian)/1(male-white)/2(male-black)/3(male-others)/4(female-asian)/5(female-white)/6(female-black)/7(female-others), 1(Deepfakes)/2(Face2Face)/3(FaceSwap)/4(NeuralTextures)/5(FaceShifter)
      		...
    
      |- realtrain.csv
      	|_ img_path,label,ismale,isasian,iswhite,isblack,intersec_label
      		/path/to/cropped_images/imgxx.png, 0(real), 1(male)/-1(not male), 1(asian)/-1(not asian), 1(white)/-1(not white), 1(black)/-1(not black), 0(male-asian)/1(male-white)/2(male-black)/3(male-others)/4(female-asian)/5(female-white)/6(female-black)/7(female-others)
      		...
    
      |- fakeval.csv
      	|_ img_path,label,ismale,isasian,iswhite,isblack,intersec_label,spe_label
      		/path/to/cropped_images/imgxx.png, 1(fake), 1(male)/-1(not male), 1(asian)/-1(not asian), 1(white)/-1(not white), 1(black)/-1(not black), 0(male-asian)/1(male-white)/2(male-black)/3(male-others)/4(female-asian)/5(female-white)/6(female-black)/7(female-others), 1(Deepfakes)/2(Face2Face)/3(FaceSwap)/4(NeuralTextures)/5(FaceShifter)
      		...
    
      |- realval.csv
      	|_ img_path,label,ismale,isasian,iswhite,isblack,intersec_label
      		/path/to/cropped_images/imgxx.png, 0(real), 1(male)/-1(not male), 1(asian)/-1(not asian), 1(white)/-1(not white), 1(black)/-1(not black), 0(male-asian)/1(male-white)/2(male-black)/3(male-others)/4(female-asian)/5(female-white)/6(female-black)/7(female-others)
      		...
    
  • Generate test.csv according to the following format:

      |- test.csv
      	|- img_path,label,ismale,isasian,iswhite,isblack,intersec_label
      		/path/to/cropped_images/imgxx.png, 1(fake)/0(real), 1(male)/-1(not male), 1(asian)/-1(not asian), 1(white)/-1(not white), 1(black)/-1(not black), 0(male-asian)/1(male-white)/2(male-black)/3(male-others)/4(female-asian)/5(female-white)/6(female-black)/7(female-others)
      		...
    
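Below is a minimal Python sketch of writing these CSV files with pandas, assuming you have already collected, for each cropped image, its path, real/fake label, demographic attributes, intersectional group code, and (for fake images) the manipulation-specific label. The example rows are illustrative placeholders, not the repository's own preprocessing code.

import pandas as pd

# Hypothetical annotation records -- replace with your real data.
# Column names and value encodings follow the format described above.
fake_rows = [{
    "img_path": "/path/to/cropped_images/img001.png",
    "label": 1,            # 1 = fake
    "ismale": 1,           # 1 = male, -1 = not male
    "isasian": -1,         # 1 = asian, -1 = not asian
    "iswhite": 1,          # 1 = white, -1 = not white
    "isblack": -1,         # 1 = black, -1 = not black
    "intersec_label": 1,   # 0-7 intersectional group code (see above)
    "spe_label": 1,        # 1-5 manipulation type (Deepfakes, Face2Face, ...)
}]
real_rows = [{
    "img_path": "/path/to/cropped_images/img002.png",
    "label": 0,            # 0 = real
    "ismale": -1,
    "isasian": 1,
    "iswhite": -1,
    "isblack": -1,
    "intersec_label": 4,
}]

pd.DataFrame(fake_rows).to_csv("faketrain.csv", index=False)
pd.DataFrame(real_rows).to_csv("realtrain.csv", index=False)
# Repeat with the corresponding splits for fakeval.csv, realval.csv, and test.csv.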

3. Load Pretrained Weights

Before running the training code, make sure you load the pre-trained weights. We provide pre-trained weights under ./training/pretrained. You can also download an Xception model trained on ImageNet (through this link) or use your own pretrained Xception.
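As an illustration only, loading ImageNet-pretrained Xception weights into a detector backbone in PyTorch typically looks like the sketch below; the weight file name and the key filtering are assumptions, not the repository's exact code, which handles weight loading inside its own training pipeline.

import torch

def load_xception_pretrained(model, weight_path="./training/pretrained/xception.pth"):
    # Load the ImageNet-pretrained checkpoint onto CPU first.
    state_dict = torch.load(weight_path, map_location="cpu")
    model_dict = model.state_dict()
    # Keep only weights that exist in the backbone with matching shapes
    # (this drops, e.g., the ImageNet classifier head).
    filtered = {k: v for k, v in state_dict.items()
                if k in model_dict and v.shape == model_dict[k].shape}
    model_dict.update(filtered)
    model.load_state_dict(model_dict)
    return model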

4. Train

To run the training code, first go to the ./training/ folder. You can then train our detector with the loss flattening strategy by running train.py, or without it by running train_noSAM.py:

cd training

python train.py 

You can adjust the arguments in train.py to specify the training dataset, batch size, learning rate, and so on:

--lr: learning rate, default is 0.0005.

--gpu: gpu ids for training.

--fake_datapath: /path/to/faketrain.csv, fakeval.csv

--real_datapath: /path/to/realtrain.csv, realval.csv

--batchsize: batch size, default is 16.

--dataname: training dataset name: ff++.

--model: detector name: fair_df_detector.
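For example, a typical training run on FF++ might look like the following, launched from the ./training/ folder. The exact form expected for --fake_datapath and --real_datapath (a directory versus individual CSV paths) follows how train.py parses them, so check the defaults in the script:

python train.py \
    --lr 0.0005 \
    --gpu 0 \
    --fake_datapath /path/to/fake_csvs/ \
    --real_datapath /path/to/real_csvs/ \
    --batchsize 16 \
    --dataname ff++ \
    --model fair_df_detector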

5. Test

  • For model testing, we provide a Python script; test our model by running python test.py with the following arguments.

    --test_path: /path/to/test.csv

    --test_data_name: testing dataset name: ff++, celebdf, dfd, dfdc.

    --inter_attribute: intersectional group names divided by '-': male,asian-male,white-male,black-male,others-nonmale,asian-nonmale,white-nonmale,black-nonmale,others

    --single_attribute: single attribute name divided by '-': male-nonmale-asian-white-black-others

    --checkpoints: /path/to/saved/model.pth

    --savepath: /where/to/save/predictions.npy(labels.npy)/results/

    --model_structure: detector name: fair_df_detector.

    --batch_size: testing batch size: default is 32.

  • After testing, for metric calculation, run python fairness_metrics.py to print all the metrics. Note that before running fairness_metrics.py, you should set its input to the path of your predictions(labels).npy files, i.e., the --savepath specified above. A full example invocation is shown after the note below.

📝 Note

Change --inter_attribute and --single_attribute for different testing datasets:

### ff++, dfdc
--inter_attribute male,asian-male,white-male,black-male,others-nonmale,asian-nonmale,white-nonmale,black-nonmale,others \
--single_attribute male-nonmale-asian-white-black-others \

### celebdf, dfd
--inter_attribute male,white-male,black-male,others-nonmale,white-nonmale,black-nonmale,others \
--single_attribute male-nonmale-white-black-others \
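As a sketch, a full testing and metric-calculation run on Celeb-DF using the arguments above might look like the following. Whether test.py and fairness_metrics.py are run from the repository root or from ./training/ is not specified here, so adjust the working directory to wherever the scripts live:

python test.py \
    --test_path /path/to/test.csv \
    --test_data_name celebdf \
    --inter_attribute male,white-male,black-male,others-nonmale,white-nonmale,black-nonmale,others \
    --single_attribute male-nonmale-white-black-others \
    --checkpoints /path/to/saved/model.pth \
    --savepath /where/to/save/results/ \
    --model_structure fair_df_detector \
    --batch_size 32

python fairness_metrics.py   # point its input path to the predictions/labels .npy files under --savepath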

📦 Provided Backbones

Backbone          File name           Paper
Xception          xception.py         Xception: Deep Learning with Depthwise Separable Convolutions
ResNet-50         resnet50.py         Deep Residual Learning for Image Recognition
EfficientNet-B3   efficientnetb3.py   EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
EfficientNet-B4   efficientnetb4.py   EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

Citation

Please kindly consider citing our paper in your publications.

@inproceedings{Li2024preserving,
    title={Preserving Fairness Generalization in Deepfake Detection},
    author={Lin, Li and He, Xinan and Ju, Yan and Wang, Xin and Ding, Feng and Hu, Shu},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2024},
}
