In recent months, the proliferation of freely available deep learning tools has made it easy to create highly convincing deepfake videos, in which faces are swapped to produce misleading content. While video manipulation has been around for some time, advances in deep learning have greatly increased the realism of such fakes and lowered the effort needed to generate them. Detecting these manipulated videos, however, remains a formidable challenge.
To tackle this problem, we devised an approach that combines Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). Our system uses a CNN to extract frame-level features, which are then used to train an RNN that classifies a video as real or manipulated by picking up the temporal inconsistencies introduced by deepfake generation tools. Despite its relatively simple architecture, our method achieves competitive performance on a broad range of fake videos drawn from standard datasets.
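A minimal PyTorch sketch of this CNN-plus-RNN pipeline is shown below. The specific backbone (ResNet-18), the LSTM size, and the clip length are illustrative assumptions, not the exact configuration used in this project.

```python
import torch
import torch.nn as nn
from torchvision import models


class DeepfakeDetector(nn.Module):
    """Frame-level CNN features aggregated by an LSTM into a real/fake decision."""

    def __init__(self, hidden_dim=256, num_classes=2):
        super().__init__()
        # CNN backbone extracts one feature vector per frame (ResNet-18 assumed here).
        backbone = models.resnet18(weights=None)
        feature_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()  # drop the ImageNet classification head
        self.cnn = backbone
        # The LSTM aggregates per-frame features over time to spot
        # temporal inconsistencies between consecutive frames.
        self.rnn = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, clips):
        # clips: (batch, frames, channels, height, width)
        b, t, c, h, w = clips.shape
        frames = clips.view(b * t, c, h, w)
        feats = self.cnn(frames).view(b, t, -1)   # per-frame CNN features
        _, (h_n, _) = self.rnn(feats)             # last hidden state summarizes the clip
        return self.classifier(h_n[-1])           # real-vs-fake logits


if __name__ == "__main__":
    model = DeepfakeDetector()
    dummy = torch.randn(2, 16, 3, 224, 224)       # 2 clips of 16 cropped face frames
    print(model(dummy).shape)                     # torch.Size([2, 2])
```

In practice, the input clips would be built from face crops detected in consecutive video frames before being fed to the model.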
Deepfake video detection is essential for curbing the spread of misinformation and preserving the integrity of visual content in domains such as journalism, entertainment, and social media. By applying neural network architectures such as CNNs and RNNs, researchers and developers continue to improve detection techniques against the growing threat posed by deepfake technology.
- PyTorch
- NumPy
- Pandas
- Matplotlib
- face_recognition
- OpenCV
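The dependencies above can typically be installed with pip; the PyPI package names below (for example `face-recognition` and `opencv-python`) are assumptions based on the list and may differ from the project's own requirements file.

```bash
pip install torch numpy pandas matplotlib face-recognition opencv-python
```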
GNU General Public License v3.0
Project.demo.mp4
A project completed as part of coursework at Rutgers University, New Jersey.