IRFS


An interactively reinforced paradigm for joint infrared-visible image fusion and saliency object detection [Information Fusion]

By Di Wang, Jinyuan Liu, Risheng Liu, and Xin Fan*

Updates

[2023-05-17] Our paper is available online! [arXiv version]

Requirements

  • CUDA 10.1
  • Python 3.6 (or later)
  • PyTorch 1.6.0
  • Torchvision 0.7.0
  • OpenCV 3.4
  • Kornia 0.5.11
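
As a quick sanity check (a convenience sketch, not part of the repository), you can verify the pinned versions from Python:

    # Convenience sketch: verify the environment matches the versions above.
    import cv2
    import kornia
    import torch
    import torchvision

    print(torch.__version__)        # expect 1.6.0
    print(torchvision.__version__)  # expect 0.7.0
    print(kornia.__version__)       # expect 0.5.11
    print(cv2.__version__)          # expect 3.4.x
    assert torch.cuda.is_available(), "a CUDA 10.1 build of PyTorch is expected"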

Dataset

Please download the following datasets:

Infrared and Visible Image Fusion Datasets

RGBT SOD Saliency Datasets

Data preparation

  1. You can generate the self-visual saliency maps used for training image fusion by running (a conceptual sketch follows the commands):
       cd ./data
       python get_svm_map_softmax.py
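
For intuition only, the sketch below illustrates the general idea of softmax-weighted saliency maps for a registered infrared-visible pair. It is a hypothetical toy example (the saliency measure and file names are placeholders); get_svm_map_softmax.py is the authoritative implementation.

    # Hypothetical toy example of per-pixel softmax weighting between the
    # saliency maps of an infrared-visible pair; not the script's actual code.
    import cv2
    import numpy as np

    def coarse_saliency(gray):
        # Coarse per-pixel saliency: deviation from the mean intensity.
        gray = gray.astype(np.float32) / 255.0
        return np.abs(gray - gray.mean())

    ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)    # placeholder paths
    vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE)

    s_ir = coarse_saliency(ir)
    s_vis = coarse_saliency(vis)

    # Per-pixel softmax over the two maps yields fusion weights summing to 1.
    w_ir = np.exp(s_ir) / (np.exp(s_ir) + np.exp(s_vis))
    w_vis = 1.0 - w_ir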

Getting started

First, download the pretrained ResNet-34 model and put it into the folder './pretrained/'.

  1. You can run the interactive training of image fusion and SOD (a sketch of the alternating schedule follows this section). Check the dataset paths in train_Inter_IR_FSOD.py, and then run:
       cd ./Trainer
       python train_Inter_IR_FSOD.py
  2. You can also train image fusion or SOD separately. Check the dataset paths in train_fsfnet.py and train_fgccnet.py, and then run:
       ## for image fusion
       cd ./Trainer
       python train_fsfnet.py
       ## for SOD
       cd ./Trainer
       python train_fgccnet.py

After training, the trained models are saved in the folder './checkpoint/'.
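
For orientation, here is a hypothetical sketch of the alternating ("interactive") schedule; all names are stand-ins, and train_Inter_IR_FSOD.py is the reference:

    # Hypothetical sketch of the alternating schedule: FSFNet (fusion) and
    # FGC^2Net (SOD) are trained in turn for 10 rounds, each guided by the
    # other's latest weights. Stand-in modules replace the real networks.
    import os
    import torch

    def train_one_round(model, partner):
        # Placeholder for one training pass of `model`; `partner` is frozen
        # and used only as guidance.
        partner.eval()
        model.train()
        # ... optimizer steps over the training set would go here ...

    fsfnet = torch.nn.Identity()   # stand-in for the fusion network
    fgccnet = torch.nn.Identity()  # stand-in for the SOD network

    os.makedirs("./checkpoint", exist_ok=True)
    for r in range(10):                           # rounds labeled 0..9
        train_one_round(fsfnet, partner=fgccnet)  # fusion guided by SOD
        train_one_round(fgccnet, partner=fsfnet)  # SOD guided by fused images
        torch.save({"fusion": fsfnet.state_dict(),
                    "sod": fgccnet.state_dict()},
                   "./checkpoint/round_{}.pth".format(r))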

  1. You can load the pretrained models to evaluate IRFS on both tasks (i.e., image fusion and SOD) by running:
       cd ./Test
       python test_IR_FSOD.py
  2. You can also test image fusion and SOD separately by running:
       ## for image fusion
       cd ./Test
       python test_fsfnet.py
       ## for SOD
       cd ./Test
       python test_fgccnet.py

Note that we alternately train FSFNet and FGC^2Net for a total of 10 rounds, labeled 0 to 9; we therefore provide the pretrained models from rounds 0, 1, 5, and 9.

Please download the pretrained models (code: dxrb) of FSFNet and FGC^2Net, and put them into the folders './checkpoints/Fusion/' and './checkpoints/SOD/', respectively.
If you are not using Baidu Cloud Disk, you can also download the pretrained models from Google Drive.
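
Once downloaded, a round's weights can be loaded along these lines (a hypothetical sketch; the file names are placeholders and the Test scripts show the real loading logic):

    # Hypothetical loading sketch; substitute the actual checkpoint names
    # from the downloaded archive.
    import torch

    fusion_state = torch.load("./checkpoints/Fusion/fsfnet_round9.pth",
                              map_location="cpu")
    sod_state = torch.load("./checkpoints/SOD/fgccnet_round9.pth",
                           map_location="cpu")

    # fsfnet.load_state_dict(fusion_state)   # with the repository's FSFNet
    # fgccnet.load_state_dict(sod_state)     # with the repository's FGC^2Net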

Experimental Results

  1. Quantitative evaluations of joint thermal infrared-visible image fusion and SOD on the VT5000 dataset.
  2. Qualitative evaluations of joint infrared-visible image fusion and SOD on the VT5000 dataset.

Any Questions

If you have any questions about the code, please email diwang1211@mail.dlut.edu.cn.

Citation

If this work has been helpful to you, please feel free to cite our paper!

@article{Wang_2023_IF,
	author = {Wang, Di and Liu, Jinyuan and Liu, Risheng and Fan, Xin},
	title = {An interactively reinforced paradigm for joint infrared-visible image fusion and saliency object detection},
	journal = {Information Fusion},
	volume = {98},
	pages = {101828},
	year = {2023}
}
