An interactively reinforced paradigm for joint infrared-visible image fusion and saliency object detection [Information Fusion]
By Di Wang, Jinyuan Liu, Risheng Liu, and Xin Fan*
[2023-05-17] Our paper is available online! [arXiv version]
- CUDA 10.1
- Python 3.6 (or later)
- PyTorch 1.6.0
- Torchvision 0.7.0
- OpenCV 3.4
- Kornia 0.5.11
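If you want to check that your installed environment matches these versions, a quick sanity check using only the standard version attributes of each package looks like this:

```python
# Optional environment check: print the versions of the key dependencies.
import sys
import torch
import torchvision
import cv2
import kornia

print("Python     :", sys.version.split()[0])
print("PyTorch    :", torch.__version__)
print("Torchvision:", torchvision.__version__)
print("OpenCV     :", cv2.__version__)
print("Kornia     :", kornia.__version__)
print("CUDA available:", torch.cuda.is_available())
```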
Please download the following datasets:
Infrared and Visible Image Fusion Datasets
RGBT SOD Saliency Datasets
- You can obtain the self-visual saliency maps used for training image fusion by running (a rough illustrative sketch follows the commands below):
```shell
cd ./data
python get_svm_map_softmax.py
```
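The exact procedure is defined in ./data/get_svm_map_softmax.py. As a purely illustrative sketch of how a softmax-normalized per-pixel weight map can be derived from an infrared-visible pair (the activity measure and temperature below are assumptions, not the script's actual logic):

```python
# Hypothetical illustration only: softmax-weighted per-pixel maps built from an
# infrared/visible pair. Consult ./data/get_svm_map_softmax.py for the real procedure.
import cv2
import numpy as np

def softmax_weight_maps(ir_path, vis_path, temperature=0.1):
    ir = cv2.imread(ir_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    vis = cv2.imread(vis_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

    # Treat pixel intensity as a simple per-source "activity" measure.
    activity = np.stack([ir, vis], axis=0)

    # Softmax across the two sources gives weights in [0, 1] that sum to 1 at
    # every pixel; a smaller temperature sharpens the weighting.
    scores = np.exp(activity / temperature)
    weights = scores / scores.sum(axis=0, keepdims=True)
    return weights[0], weights[1]  # (w_ir, w_vis)
```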
First, download the pretrained ResNet-34 model and put it into the folder './pretrained/'.
- You can run the interactive training of image fusion and SOD. Please check the dataset paths in train_Inter_IR_FSOD.py, and then run (a high-level sketch of the alternating scheme follows the commands below):
```shell
cd ./Trainer
python train_Inter_IR_FSOD.py
```
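At a high level, the interactive training alternates between the fusion network (FSFNet) and the SOD network (FGC^2Net) for 10 rounds (see the testing notes below). A minimal sketch of that outer loop, with hypothetical placeholder functions standing in for the real routines in train_Inter_IR_FSOD.py:

```python
# Sketch of the alternating (interactive) training scheme. The two functions are
# hypothetical placeholders; the real routines live in Trainer/train_Inter_IR_FSOD.py.

def train_fusion_round(round_idx):
    """Placeholder: train FSFNet for one round and return its checkpoint path."""
    return f"./checkpoint/fsfnet_round{round_idx}.pth"

def train_sod_round(round_idx, fusion_ckpt):
    """Placeholder: train FGC^2Net on images fused with `fusion_ckpt`."""
    return f"./checkpoint/fgccnet_round{round_idx}.pth"

NUM_ROUNDS = 10  # rounds are labeled 0 to 9, matching the released checkpoints

for round_idx in range(NUM_ROUNDS):
    fusion_ckpt = train_fusion_round(round_idx)          # fused images feed the SOD network
    sod_ckpt = train_sod_round(round_idx, fusion_ckpt)   # saliency predictions guide the next fusion round
```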
- You can also train image fusion or SOD separately. Please check the dataset paths in train_fsfnet.py and train_fgccnet.py, and then run:
```shell
## for image fusion
cd ./Trainer
python train_fsfnet.py

## for SOD
cd ./Trainer
python train_fgccnet.py
```
After training, the trained models will be saved in the folder './checkpoint/'.
- You can load the pretrained models to evaluate the performance of IRFS on the two tasks (i.e., image fusion and SOD) by running:
```shell
cd ./Test
python test_IR_FSOD.py
```
- You can also test image fusion and SOD separately by running (a generic checkpoint-loading sketch follows the commands below):
```shell
## for image fusion
cd ./Test
python test_fsfnet.py

## for SOD
cd ./Test
python test_fgccnet.py
```
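If you want to run a released fusion checkpoint outside the provided test scripts, the generic PyTorch loading pattern looks like the following; the FSFNet import path, checkpoint file name, and input signature are assumptions, so refer to Test/test_fsfnet.py for the actual ones:

```python
# Generic PyTorch checkpoint-loading sketch; the names below are illustrative.
import torch

from model.fsfnet import FSFNet  # hypothetical import path

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = FSFNet().to(device)

state = torch.load("./checkpoints/Fusion/fsfnet_round9.pth", map_location=device)
model.load_state_dict(state)
model.eval()

# Stand-in inputs; in practice these come from a registered infrared/visible pair.
ir = torch.rand(1, 1, 256, 256, device=device)
vis = torch.rand(1, 1, 256, 256, device=device)

with torch.no_grad():
    fused = model(ir, vis)  # assumed two-input forward; check the repository's model definition
```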
Note that we alternately train FSFNet and FGC^2Net for a total of 10 rounds, labeled 0 to 9; therefore, we provide the pretrained models from the 0th, 1st, 5th, and 9th rounds.
Please download the pretrained models (code: dxrb) of FSFNet and FGC^2Net, and put them into the folders './checkpoints/Fusion/' and './checkpoints/SOD/', respectively.
If you are not using Baidu Cloud Disk, you can also download the pretrained models from Google Drive.
- Quantitative evaluations of joint thermal infrared-visible image fusion and SOD on the VT5000 dataset (see the metric sketch after this list).
- Qualitative evaluations of joint infrared-visible image fusion and SOD on the VT5000 dataset.
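For reference, the SOD side of the quantitative comparison relies on standard saliency metrics; a minimal NumPy sketch of two common ones (MAE and adaptive-threshold F-measure) is shown below as a generic reference, not necessarily the exact evaluation code behind the reported numbers:

```python
# Generic reference implementations of two standard SOD metrics.
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and the ground truth,
    both given as float arrays scaled to [0, 1]."""
    return float(np.mean(np.abs(pred - gt)))

def f_measure(pred, gt, beta2=0.3):
    """F-measure with an adaptive threshold (twice the mean prediction value)."""
    thr = min(2.0 * float(pred.mean()), 1.0)
    binary = pred >= thr
    tp = np.logical_and(binary, gt > 0.5).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / ((gt > 0.5).sum() + 1e-8)
    return float((1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8))
```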
If you have any other questions about the code, please email: diwang1211@mail.dlut.edu.cn
If this work has been helpful to you, please feel free to cite our paper!
@article{Wang_2023_IF,
  author  = {Wang, Di and Liu, Jinyuan and Liu, Risheng and Fan, Xin},
  title   = {An interactively reinforced paradigm for joint infrared-visible image fusion and saliency object detection},
  journal = {Information Fusion},
  volume  = {98},
  pages   = {101828},
  year    = {2023}
}