Image forensics has witnessed significant growth in recent years, driven by advancements in computer vision and the surge of digital data. Ensuring the authenticity of images has become a top priority, as sophisticated manipulation techniques continue to emerge. We propose a multi-modal approach to gain insight into the image's authenticity.
You can try the live application here.
Live Demo
Clone the repo
git clone https://github.com/jayant1211/Image-Tampering-Detection-using-ELA-and-Metadata-Analysis.git
cd Image-Tampering-Detection-using-ELA-and-Metadata-Analysis/
To install all dependencies, create a new virtualenv and install the required packages:
pip install -r requirements.txt
Usage:
Keep the trained model in the ELA_Training folder, then run
streamlit run app.py
for local inference.
We use Error Level Analysis (ELA) and metadata analysis to gain insight into the authenticity of an image.
When a lossy algorithm like JPEG compresses an image, the compression process introduces artifacts or discrepancies. These can appear as blocks or regions within the image whose pixel values differ from those of the surrounding areas. When an image undergoes manipulation, the compression artifacts in the tampered region are disrupted.
In ELA, we re-compress the image at a known quality level and compute the absolute difference between the original and the re-compressed version:
(Figure: ELA computation)
By doing this, we essentially amplify the variations caused by compression artifacts.

(Figure: ELA of a tampered image)
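As a concrete illustration, below is a minimal ELA sketch in Python using Pillow. This is an assumed implementation for illustration, not necessarily the exact code in this repo: it re-saves the image as JPEG at a fixed quality and amplifies the pixel-wise absolute difference.

```python
from PIL import Image, ImageChops, ImageEnhance

def compute_ela(image_path, quality=90, resaved_path="resaved_temp.jpg"):
    # Open the original image and re-save it as JPEG at a fixed quality
    original = Image.open(image_path).convert("RGB")
    original.save(resaved_path, "JPEG", quality=quality)
    resaved = Image.open(resaved_path)

    # Pixel-wise absolute difference between original and re-compressed image
    ela = ImageChops.difference(original, resaved)

    # Amplify the difference so compression artifacts become visible;
    # tampered regions typically stand out with a different ELA intensity
    extrema = ela.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    return ImageEnhance.Brightness(ela).enhance(255.0 / max_diff)

# Example usage:
# compute_ela("photo.jpg").save("photo_ela.png")
```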
The CASIA2.0 dataset contains a set of real and tampered images. We used this dataset and pre-processed it to produce the ELA of every image (the optimal JPEG quality for computing the absolute difference was 90%). This pre-processed dataset was then used to train a DenseNet121 model.

An image carries a lot of metadata with it: camera model, date, time, location, etc. By 'weather validation' to gain insight into the authenticity of an image, we mean precisely validating the depicted weather. A trained weather CNN detects the weather depicted in an image (preferably outdoor), and its result is validated against historical weather data. To fetch weather data, all you need is a good open-source weather database, a place, a date, and a time. Using metadata analysis, we extract the longitude and latitude as well as the date and time; parsing this metadata, we send a request to a weather API to get the actual weather at that place on the given date and time and validate the weather CNN's result.
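For the metadata side, here is a rough sketch of how the capture time and GPS coordinates might be pulled from EXIF with Pillow; the field names follow the standard EXIF tags, and the extracted values would then be sent to whichever historical weather API you choose (no specific service is assumed here).

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def extract_capture_metadata(image_path):
    # Read raw EXIF and map numeric tag IDs to readable names
    exif = Image.open(image_path)._getexif() or {}
    labeled = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    timestamp = labeled.get("DateTimeOriginal")  # e.g. "2023:05:14 16:32:10"

    # GPSInfo is a nested dict keyed by GPS tag IDs
    gps = {GPSTAGS.get(k, k): v for k, v in labeled.get("GPSInfo", {}).items()}

    def to_decimal(dms, ref):
        # Convert (degrees, minutes, seconds) plus hemisphere ref to decimal degrees
        degrees, minutes, seconds = (float(x) for x in dms)
        value = degrees + minutes / 60 + seconds / 3600
        return -value if ref in ("S", "W") else value

    lat = lon = None
    if "GPSLatitude" in gps and "GPSLongitude" in gps:
        lat = to_decimal(gps["GPSLatitude"], gps.get("GPSLatitudeRef", "N"))
        lon = to_decimal(gps["GPSLongitude"], gps.get("GPSLongitudeRef", "E"))

    # timestamp, lat, and lon are what get passed to the historical weather
    # lookup and compared against the weather CNN's prediction.
    return timestamp, lat, lon
```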
The dataset for training the weather CNN was collected from various sources. We collected a total of 1,804 training images and 451 validation images, and the categories we narrowed down for classification are the following:
- Lightning
- Rainy
- Cloudy
- Sunny
In case you want to retrain the ELA model, download the CASIA2.0 dataset, put it inside ELA_Training, and run main.ipynb.

If you want access to the weather dataset, you can contact me.
For ELA with DenseNet121, using standard practices for training and optimizing the model, the accuracies achieved were:
Metric | Accuracy |
---|---|
Train Accuracy | 98.34% |
Validation Accuracy | 93.78% |
Test Accuracy | 87.24% |
For Weather CNN:
Metric | Accuracy |
---|---|
Train Accuracy | 91.2% |
Validation Accuracy | 81.6% |
Test Accuracy | 73.4% |
- Use a scene classification model to remove the need for the user to check whether the image is outdoor or not. (In progress)
- Integrate web traces and more modalities to improve upon the results.
If you use our study in your research, please consider citing us. Thanks:
@INPROCEEDINGS{10169948,
author={Madake, Jyoti and Meshram, Jayant and Mondhe, Ajinkya and Mashalkar, Pruthviraj},
booktitle={2023 4th International Conference for Emerging Technology (INCET)},
title={Image Tampering Detection Using Error Level Analysis and Metadata Analysis},
year={2023},
volume={},
number={},
pages={1-7},
doi={10.1109/INCET57972.2023.10169948}}