Implementation of our work:
Jinyuan Liu*, Runjia Lin*, Guanyao Wu, Risheng Liu, Zhongxuan Luo, and Xin Fan📭, "CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion", International Journal of Computer Vision (IJCV), 2024.
- Check out our recent related works 🆕:
  - 🔥 ICCV'23 Oral: Multi-interactive Feature Learning and a Full-time Multi-modality Benchmark for Image Fusion and Segmentation [paper] [code]
  - 🔥 CVPR'22 Oral: Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [paper] [code]
  - 🔥 IJCAI'23: Bi-level Dynamic Learning for Jointly Multi-modality Image Fusion and Beyond [paper] [code]
Clone the repo:

```bash
git clone https://github.com/runjia0124/CoCoNet.git
cd CoCoNet
```
The code has been tested with Python 3.8, PyTorch 1.9.0, and CUDA 11.1 on an NVIDIA GeForce RTX 2080; you may use different versions depending on your GPU.
```bash
conda create -n coconet python=3.8
conda activate coconet
pip install -r requirements.txt
```
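To confirm the environment is set up correctly before running anything, a quick sanity check like the following can be used (a minimal sketch, not part of the repository):

```python
# Minimal sanity check (not part of CoCoNet): verify that PyTorch sees a CUDA GPU.
import torch

print(torch.__version__)          # the code is tested with 1.9.0
print(torch.version.cuda)         # the code is tested with CUDA 11.1
print(torch.cuda.is_available())  # should print True for --use_gpu to work
```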
Test on the provided TNO examples:

```bash
bash ./scripts/test.sh
```

or

```bash
python main.py \
    --test --use_gpu \
    --test_vis ./TNO/VIS \
    --test_ir ./TNO/IR
```
To test on your own data, give each infrared-visible image pair the same file name in the two directories; otherwise you will need to edit the code.
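If helpful, a small standalone script like the one below (hypothetical, not shipped with the repository; the file name `check_pairs.py` and its arguments are made up for illustration) can verify that every visible image has an infrared counterpart with the same file name before running the test command:

```python
# Hypothetical helper (not part of CoCoNet): check that the IR and VIS test
# folders contain matching file names for every infrared-visible pair.
import os
import sys

def check_pairs(vis_dir, ir_dir):
    vis_names = set(os.listdir(vis_dir))
    ir_names = set(os.listdir(ir_dir))
    missing_ir = sorted(vis_names - ir_names)   # visible images with no infrared match
    missing_vis = sorted(ir_names - vis_names)  # infrared images with no visible match
    if missing_ir or missing_vis:
        print("Unpaired visible images:", missing_ir)
        print("Unpaired infrared images:", missing_vis)
        sys.exit(1)
    print(f"OK: {len(vis_names)} matched infrared-visible pairs.")

if __name__ == "__main__":
    # Example usage: python check_pairs.py ./TNO/VIS ./TNO/IR
    check_pairs(sys.argv[1], sys.argv[2])
```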
Download the training data from [Google Drive].

Start a visdom server to monitor training:

```bash
python -m visdom.server
```

Train:

```bash
python main.py --train --c1 0.5 --c2 0.75 --epoch 30 --bs 30 \
    --logdir <checkpoint_path> --use_gpu
```

Fine-tune:

```bash
python main.py --finetune --c1 0.5 --c2 0.75 --epoch 2 --bs 30 \
    --logdir <checkpoint_path> --use_gpu
```
If you have any questions about the code, please email us or open an issue: Runjia Lin (linrunja@gmail.com) or Jinyuan Liu (atlantis918@hotmail.com).
If you find this paper/code helpful, please consider citing us:
```bibtex
@article{liu2023coconet,
  title={Coconet: Coupled contrastive learning network with multi-level feature ensemble for multi-modality image fusion},
  author={Liu, Jinyuan and Lin, Runjia and Wu, Guanyao and Liu, Risheng and Luo, Zhongxuan and Fan, Xin},
  journal={International Journal of Computer Vision},
  pages={1--28},
  year={2023},
  publisher={Springer}
}
```