Image Forgery Detection using Deep Learning, implemented in PyTorch.
The overall framework: an RGB image is first divided into overlapping 64x64 patches. The RGB patches are then converted to the YCrCb color space before being scored by the network. Finally, a post-processing stage refines the network's patch-level predictions and reaches a final decision on the authenticity of the image.
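A minimal sketch of this pipeline is shown below, assuming an OpenCV-based implementation; the patch stride and the thresholding rule in the post-processing step are illustrative assumptions, not the exact values used in this repository.

```python
import cv2
import numpy as np
import torch

def extract_patches(image_rgb, patch_size=64, stride=32):
    """Slide a window over the image and collect overlapping 64x64 patches."""
    h, w, _ = image_rgb.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image_rgb[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

def to_ycrcb(patches_rgb):
    """Convert each RGB patch to the YCrCb color space."""
    return np.stack([cv2.cvtColor(p, cv2.COLOR_RGB2YCrCb) for p in patches_rgb])

def score_image(model, image_rgb, threshold=0.5):
    """Score every patch and fuse the patch scores into an image-level decision."""
    patches = to_ycrcb(extract_patches(image_rgb)).astype(np.float32) / 255.0
    batch = torch.from_numpy(patches).permute(0, 3, 1, 2)  # NHWC -> NCHW
    with torch.no_grad():
        # Assumes a two-class (authentic vs. forged) output head.
        scores = torch.softmax(model(batch), dim=1)[:, 1]
    # Placeholder post-processing: flag the image if any patch looks forged.
    return bool((scores > threshold).any())
```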
The deep neural network is adapted from MobileNetV2, which we modify to better suit our problem. The figure below depicts the architectural modification.
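As a starting point, here is a hedged sketch of loading the stock torchvision MobileNetV2 backbone for binary (authentic vs. forged) patch classification; the specific modification depicted in the figure below is not reproduced here, and the two-class head and 64x64 input are assumptions drawn from the patch-level setup described above.

```python
import torch
from torchvision.models import mobilenet_v2

# MobileNetV2 trained from scratch with a binary (authentic vs. forged) head.
model_scratch = mobilenet_v2(num_classes=2)

# MobileNetV2 initialized with ImageNet weights (torchvision >= 0.13 API),
# with the classifier head replaced for two classes.
model_pretrained = mobilenet_v2(weights="IMAGENET1K_V1")
model_pretrained.classifier[1] = torch.nn.Linear(model_pretrained.last_channel, 2)

# Sanity check on a batch of 64x64 patches (N, C, H, W).
dummy = torch.randn(4, 3, 64, 64)
print(model_scratch(dummy).shape)  # torch.Size([4, 2])
```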
We have conducted a comprehensive evaluation of model configurations to determine which factors improve the final performance of the model. To this end, we define six configurations, each built around MobileNetV2 (denoted MBN2) as the core network. Two color spaces are considered, namely RGB and YCrCb, and three MobileNetV2 variants are compared: MobileNetV2 trained from scratch, MobileNetV2 initialized with pre-trained ImageNet weights, and the modified MobileNetV2 trained from scratch.
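The six configurations are simply the cross product of the two color spaces and the three architecture variants; the sketch below enumerates them with illustrative labels that are not identifiers from this repository.

```python
from itertools import product

COLOR_SPACES = ["RGB", "YCrCb"]
ARCHITECTURES = [
    "MBN2 (scratch)",           # MobileNetV2 trained from scratch
    "MBN2 (ImageNet init)",     # MobileNetV2 initialized with ImageNet weights
    "modified MBN2 (scratch)",  # modified MobileNetV2 trained from scratch
]

# 2 color spaces x 3 architectures = 6 configurations.
CONFIGS = [{"color_space": c, "architecture": a}
           for c, a in product(COLOR_SPACES, ARCHITECTURES)]
for cfg in CONFIGS:
    print(cfg)
```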
If you find this work useful, please cite:
@inproceedings{phan2019preserving,
  title={Preserving Spatial Information to Enhance Performance of Image Forgery Classification},
  author={Hanh Phan-Xuan and Thuong Le-Tien and Thuy Nguyen-Chinh and Thien Do-Tieu and Qui Nguyen-Van and Tuan Nguyen-Thanh},
  booktitle={International Conference on Advanced Technologies for Communications (ATC)},
  year={2019}
}