This repository is the official implementation of our paper AI-Face: A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark
The AI-Face Dataset is licensed under CC BY-NC-ND 4.0
If you would like to access the AI-Face Dataset, please download and sign the EULA, then upload the signed EULA to the Google Form and fill in the required details. Once the form is approved, the download link will be sent to you. If you have any questions, please email lin1785@purdue.edu or hu968@purdue.edu.
You can run the following script to configure the necessary environment:
```
cd AI-Face-FairnessBench
conda create -n FairnessBench python=3.9.0
conda activate FairnessBench
pip install -r requirements.txt
```
After getting our AI-Face dataset, put the provided `train.csv` and `test.csv` within the AI-Face dataset under `./dataset`.

`train.csv` and `test.csv` are formatted as follows:
| Column | Description |
|---|---|
| Image Path | Path to the image file |
| Uncertainty Score Gender | Uncertainty score for gender annotation |
| Uncertainty Score Age | Uncertainty score for age annotation |
| Uncertainty Score Race | Uncertainty score for race annotation |
| Ground Truth Gender | Gender label: 1 - Male, 0 - Female |
| Ground Truth Age | Age label: 0 - Young, 1 - Middle-aged, 2 - Senior, 3 - Others |
| Ground Truth Race | Race label: 0 - Asian, 1 - White, 2 - Black, 3 - Others |
| Intersection | 0 - (Male, Asian), 1 - (Male, White), 2 - (Male, Black), 3 - (Male, Others), 4 - (Female, Asian), 5 - (Female, White), 6 - (Female, Black), 7 - (Female, Others) |
| Target | Label indicating real (0) or fake (1) image |
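As a quick sanity check after loading either CSV, the label columns can be decoded with pandas. A minimal sketch, using a hypothetical in-memory miniature of `train.csv` with the column names and encodings from the table above:

```python
import pandas as pd

# Hypothetical miniature of train.csv, using the documented columns/encodings.
df = pd.DataFrame({
    "Image Path": ["a.png", "b.png", "c.png"],
    "Ground Truth Gender": [1, 0, 1],   # 1 = Male, 0 = Female
    "Ground Truth Race": [0, 1, 2],     # 0 = Asian, 1 = White, 2 = Black, 3 = Others
    "Target": [0, 1, 1],                # 0 = real, 1 = fake
})

GENDER = {1: "Male", 0: "Female"}
RACE = {0: "Asian", 1: "White", 2: "Black", 3: "Others"}

df["gender_name"] = df["Ground Truth Gender"].map(GENDER)
df["race_name"] = df["Ground Truth Race"].map(RACE)

# Count fake images per gender group, e.g. for a quick fairness-balance check.
fake_per_gender = df[df["Target"] == 1].groupby("gender_name").size()
print(fake_per_gender.to_dict())  # {'Female': 1, 'Male': 1}
```

For the real dataset, replace the inline DataFrame with `pd.read_csv("./dataset/train.csv")`.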
Our AI-Face dataset contains face images from four deepfake video datasets: FF++, Celeb-DF, DFD, and DFDC. You can access these datasets with demographic annotations through the link provided in our Fairness-Generalization repository. Please be aware that we re-annotated the demographic attributes of these four deepfake video datasets for our AI-Face dataset, and those annotations come with uncertainty scores in a CSV file formatted as described above. The annotations available through our Fairness-Generalization repository differ from those provided in our AI-Face dataset and are not accompanied by uncertainty scores.
After you get the download link for the AI-Face dataset, you will see `part1.tar` and `part2.tar`. Please download both parts if you are going to use the entire dataset. The dataset is uploaded in two parts because OneDrive limits individual files to 250 GB. Ensure your device has at least 300 GB of available space for this dataset.
- Download `part1.tar` and `part2.tar`.
- Untar both files.
- Organize the data as shown below:
```
AI-Face Dataset
├── AttGAN
├── Latent_Diffusion
├── Palette
├── ...
```
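If you prefer scripting the extraction step instead of running `tar` by hand, a small helper can unpack both archives into the dataset folder. A minimal sketch (the archive names come from the download instructions above; the destination path is a placeholder for wherever you keep the dataset):

```python
import tarfile
from pathlib import Path

def extract_parts(tar_paths, dest):
    """Extract each archive into `dest`, skipping any missing files."""
    dest = Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    for p in map(Path, tar_paths):
        if not p.exists():
            print(f"skipping missing archive: {p}")
            continue
        with tarfile.open(p) as tf:
            try:
                tf.extractall(dest, filter="data")  # safe extraction, Python >= 3.12
            except TypeError:
                tf.extractall(dest)  # older Pythons lack the filter argument
        print(f"extracted {p} -> {dest}")

# Paths below are placeholders for where you saved the downloads.
extract_parts(["part1.tar", "part2.tar"], "AI-Face Dataset")
```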
Before running the training code, make sure you load the pre-trained weights. We provide pre-trained weights under `./training/pretrained`. You can also download an Xception model trained on ImageNet (through this link) or use your own pretrained Xception.
To run the training code, first go to the `./training/` folder, then run `train_test.py`:

```
cd training
python train_test.py
```
You can adjust the parameters in `train_test.py`, e.g., model, batch size, learning rate, etc.:
- `--lr`: learning rate; default is 0.0005.
- `--train_batchsize`: batch size for training; default is 128.
- `--test_batchsize`: batch size for testing; default is 32.
- `--datapath`: `/path/to/dataset`.
- `--model`: detector name, one of ['xception', 'efficientnet', 'core', 'ucf', 'srm', 'f3net', 'spsl', 'daw_fdd', 'dag_fdd', 'fair_df_detector']; default is 'xception'.
- `--dataset_type`: dataset type loaded for detectors; default is 'no_pair'. For 'ucf' and 'fair_df_detector', it should be 'pair'.
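To illustrate how these flags fit together, here is a sketch of an `argparse` parser mirroring the documented options. This is only an illustration of the interface, not the actual `train_test.py` implementation, and the `--datapath` default here is a placeholder:

```python
import argparse

def build_parser():
    # Defaults mirror the documented options; the real script may differ.
    p = argparse.ArgumentParser(description="AI-Face fairness benchmark training (sketch)")
    p.add_argument("--lr", type=float, default=0.0005, help="learning rate")
    p.add_argument("--train_batchsize", type=int, default=128, help="training batch size")
    p.add_argument("--test_batchsize", type=int, default=32, help="testing batch size")
    p.add_argument("--datapath", type=str, default="./dataset",  # placeholder path
                   help="/path/to/dataset")
    p.add_argument("--model", type=str, default="xception",
                   choices=["xception", "efficientnet", "core", "ucf", "srm",
                            "f3net", "spsl", "daw_fdd", "dag_fdd", "fair_df_detector"])
    p.add_argument("--dataset_type", type=str, default="no_pair",
                   choices=["no_pair", "pair"],
                   help="use 'pair' for 'ucf' and 'fair_df_detector'")
    return p

# Example: 'ucf' needs paired data, so --dataset_type is set to 'pair'.
args = build_parser().parse_args(["--model", "ucf", "--dataset_type", "pair"])
print(args.model, args.dataset_type, args.lr)  # ucf pair 0.0005
```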
To train ViT-B/16 and UnivFD, please run `train_test_vit.py` and `train_test_clip.py`, respectively.
If you use the AI-Face dataset in your research, please cite our paper as:
```bibtex
@article{lin2024aiface,
  title={AI-Face: A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark},
  author={Li Lin and Santosh and Xin Wang and Shu Hu},
  journal={arXiv preprint arXiv:2406.00783},
  year={2024}
}
```