This is the codebase for *Harnessing Wavelet Transformations for Generalizable Deepfake Forgery Detection*.
This repository is based on SCLBD/DeepfakeBench. Using that framework, we replicate the existing benchmarks and introduce a new model, Wavelet-CLIP, built on self-supervised pre-training. Our model achieves state-of-the-art performance on the Celeb-DF-v1 (CDFv1), Celeb-DF-v2 (CDFv2), and FaceShifter (Fsh) datasets.
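For context, the core idea is a frozen CLIP image encoder whose features are decomposed with a discrete wavelet transform before classification. The snippet below is a minimal conceptual sketch, not the implementation in this repository: the HaarDWT1D and WaveletHead modules, the 768-dimensional feature size, and classifying on the low-frequency band are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HaarDWT1D(nn.Module):
    """Single-level 1-D Haar decomposition along the last (feature) dimension."""
    def forward(self, x):
        even, odd = x[..., 0::2], x[..., 1::2]   # pair up neighbouring channels
        low = (even + odd) / 2 ** 0.5            # approximation (low-pass) band
        high = (even - odd) / 2 ** 0.5           # detail (high-pass) band
        return low, high

class WaveletHead(nn.Module):
    """Hypothetical head: Haar DWT on CLIP features, then an MLP on the low band."""
    def __init__(self, feat_dim=768, num_classes=2):
        super().__init__()
        self.dwt = HaarDWT1D()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim // 2, 256), nn.ReLU(), nn.Linear(256, num_classes)
        )

    def forward(self, clip_features):
        low, _ = self.dwt(clip_features)         # keep the smoothed half of the spectrum
        return self.mlp(low)

# Toy usage: pretend `feats` came from a frozen CLIP ViT image encoder.
feats = torch.randn(4, 768)                      # [batch, CLIP feature dim]
logits = WaveletHead()(feats)                    # [4, 2] real / fake logits
```

The actual detector used in the commands below is selected through the config file passed via --detector_path.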
To install the required dependencies and set up the environment, run the following command in your terminal:
sh install.sh
All datasets are sourced from the DeepfakeBench repository, which in turn obtained them from the official websites. We release only the preprocessed test sets; to access and preprocess the training sets, please refer to the DeepfakeBench repository and follow the same procedure.
Dataset Name | Link |
---|---|
Celeb-DF-v1 | - |
Celeb-DF-v2 | - |
FaceShifter | - |
Cross-Data Performance: To reproduce the results, use the provided train.py script. For specific detectors, download the weights from the provided link and update the path in ./training/config/detector/detector.yaml. An example command to train on FaceForensics++ and evaluate on the CDFv1 & CDFv2 datasets might look like this:
python3 -m torch.distributed.launch --nproc_per_node=4 training/train.py --detector_path ./training/config/detector/detector.yaml --train_dataset "FaceForensics++" --test_dataset "Celeb-DF-v1" "Celeb-DF-v2" --task_target "clip_wavelet" --no-save_feat --ddp
To reproduce the results, use the provided test.py script. For specific detectors, download the weights from the provided link and update the path in ./training/config/detector/detector.yaml. An example command to evaluate on the CDFv1, CDFv2, and FaceShifter datasets might look like this:
python3 training/test.py --detector_path ./training/config/detector/clip_wavelet.yaml --test_dataset "Celeb-DF-v1" "Celeb-DF-v2" "FaceShifter" --weights_path ./training/weights/clip_wavelet_best.pth
Robustness to Unseen Deepfakes: To reproduce the results, use the provided gen_test.py script. For specific detectors, download the weights from the provided link and update the path in ./training/config/detector/clip_wavelet.yaml. An example command to evaluate on the DDPM, DDIM, and LDM datasets might look like this:
python3 training/gen_test.py --detector_path ./training/config/detector/clip_wavelet.yaml --test_dataset "DDIM" "DDPM" "LDM" --weights_path ./training/weights/clip_wavelet_best.pth
Model | Venue | Backbone | Protocol | CDFv1 (AUC) | CDFv2 (AUC) | Fsh (AUC) | Avg (AUC) |
---|---|---|---|---|---|---|---|
CLIP | CVPR-23 | ViT | Self-Supervised | 0.743 | 0.750 | 0.730 | 0.741 |
Wavelet-CLIP (ours) | - | ViT | Self-Supervised | 0.756 | 0.759 | 0.732 | 0.749 |
Model | DDPM AUC | DDPM EER | DDIM AUC | DDIM EER | LDM AUC | LDM EER | Avg. AUC | Avg. EER |
---|---|---|---|---|---|---|---|---|
Xception | 0.712 | 0.353 | 0.729 | 0.331 | 0.658 | 0.309 | 0.699 | 0.331 |
CapsuleNet | 0.746 | 0.314 | 0.780 | 0.288 | 0.777 | 0.289 | 0.768 | 0.297 |
Core | 0.584 | 0.453 | 0.630 | 0.417 | 0.540 | 0.479 | 0.585 | 0.450 |
F3-Net | 0.388 | 0.592 | 0.423 | 0.570 | 0.348 | 0.624 | 0.386 | 0.595 |
MesoNet | 0.618 | 0.416 | 0.563 | 0.465 | 0.666 | 0.377 | 0.615 | 0.419 |
RECCE | 0.549 | 0.471 | 0.570 | 0.463 | 0.421 | 0.564 | 0.513 | 0.499 |
SRM | 0.650 | 0.393 | 0.667 | 0.385 | 0.637 | 0.397 | 0.651 | 0.392 |
FFD | 0.697 | 0.359 | 0.703 | 0.354 | 0.539 | 0.466 | 0.646 | 0.393 |
MesoInception | 0.664 | 0.372 | 0.709 | 0.339 | 0.684 | 0.353 | 0.686 | 0.355 |
SPSL | 0.735 | 0.320 | 0.748 | 0.314 | 0.550 | 0.481 | 0.677 | 0.372 |
CLIP | 0.781 | 0.292 | 0.879 | 0.203 | 0.876 | 0.210 | 0.845 | 0.235 |
Wavelet-CLIP | 0.897 | 0.190 | 0.886 | 0.197 | 0.897 | 0.190 | 0.893 | 0.192 |
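The tables report AUC and, for the diffusion-generated sets, EER. As a reference only, the sketch below shows one common way to compute these two metrics from per-sample labels and fake-probability scores with scikit-learn; the repository's own evaluation code may differ.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def auc_and_eer(labels, scores):
    """labels: 1 = fake, 0 = real; scores: predicted probability of 'fake'."""
    auc = roc_auc_score(labels, scores)
    fpr, tpr, _ = roc_curve(labels, scores)
    # EER: the operating point where the false-positive rate equals the miss rate (1 - TPR).
    eer = fpr[np.nanargmin(np.abs(fpr - (1 - tpr)))]
    return auc, eer

# Toy example with synthetic scores.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = np.clip(labels * 0.6 + rng.normal(0.2, 0.3, size=1000), 0, 1)
print(auc_and_eer(labels, scores))
```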
@article{baru2024harnessing,
title={Harnessing Wavelet Transformations for Generalizable Deepfake Forgery Detection},
author={Baru, Lalith Bharadwaj and Patel, Shilhora Akshay and Boddeda, Rohit},
journal={arXiv preprint arXiv:2409.18301},
year={2024}
}