H. Xie, Y. Zhang, J. Qiu, X. Zhai, X. Liu, Y. Yang, S. Zhao, Y. Luo, and J. Zhong, "Semantics lead all: Towards unified image registration and fusion from a semantic perspective," Information Fusion, p. 101835, 2023.
[07/10] We have fixed the bugs in the original code. Please download the current project and weights again for testing and training.
- Download the COCO dataset to `.\datasets\COCO\` (`path2COCO`).
- Download the IVS dataset to `.\datasets\IVS\` (`path2IVS`).
- Download the labels of the IVS dataset to `.\datasets\IVS_Label\` (`path2IVS_Label`).
- Generate a pseudo-infrared image for each image in the COCO dataset using CPSTN and store the results in `.\datasets\COCO_CPSTN\` (`path2COCO_CPSTN`).
- Generate a pseudo-infrared image for each image in the IVS dataset using CPSTN and store the results in `.\datasets\IVS_CPSTN\` (`path2IVS_CPSTN`), as checked in the sketch after this list.
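The expected dataset layout can be sanity-checked with a short script. This is a minimal sketch, assuming the default paths listed above; the script itself is not part of the repository:

```python
from pathlib import Path

# Expected dataset roots, matching the default paths listed above.
EXPECTED_DIRS = {
    "path2COCO": Path("datasets/COCO"),
    "path2IVS": Path("datasets/IVS"),
    "path2IVS_Label": Path("datasets/IVS_Label"),
    "path2COCO_CPSTN": Path("datasets/COCO_CPSTN"),
    "path2IVS_CPSTN": Path("datasets/IVS_CPSTN"),
}

if __name__ == "__main__":
    for name, path in EXPECTED_DIRS.items():
        # Count files so that empty folders are also flagged.
        n_files = sum(1 for p in path.rglob("*") if p.is_file()) if path.is_dir() else 0
        status = "OK" if n_files else "MISSING or EMPTY"
        print(f"{name:16s} -> {path}  [{status}, {n_files} files]")
```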
The code is implemented with `python=3.6`, `pytorch=1.9`, and `opencv-python=4.6.0.66`. Please follow the official PyTorch installation instructions to install the PyTorch dependencies. Installing PyTorch with CUDA support is strongly recommended.
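A quick environment check, using only standard version and CUDA queries (nothing repository-specific), can confirm that the setup matches the versions above:

```python
import cv2
import torch

# Installed versions should roughly match pytorch=1.9 and opencv-python=4.6.0.66.
print("PyTorch version:", torch.__version__)
print("OpenCV version:", cv2.__version__)

# CUDA support is strongly recommended for training.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```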
- Train stage 1: registration and semantic feature extraction. Run `cd train_stage1`, configure the dataset paths, then run `python train_stage1.py`.
- Train stage 2: training the CSC and SSR modules. Run `cd train_stage2`, configure the dataset paths, then run `python train_stage2.py`.
- Train stage 3: training the fusion module. Run `cd train_stage3`, configure the dataset paths, then run `python train_stage3.py`.
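If you prefer to run the three stages back to back, a small driver script can chain the commands above. This is a sketch only; each `train_stageN.py` still needs its dataset paths configured first:

```python
import subprocess

# Run the three training stages in order, equivalent to
# "cd train_stageN && python train_stageN.py" for N = 1, 2, 3.
STAGES = ["train_stage1", "train_stage2", "train_stage3"]

for stage in STAGES:
    print(f"=== {stage} ===")
    subprocess.run(["python", f"{stage}.py"], cwd=stage, check=True)
```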
Download the pre-trained models from Google Drive or Baidu Yun and configure the paths `reg_weight_path` and `fusion_weight_path`. We provide two matching modes: semantic object-oriented matching, set with `matchmode = "semantic"`, and global image-oriented matching, set with `matchmode = "scene"`.
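For reference, a minimal configuration sketch is shown below. The variable names come from the instructions above; the weight filenames are placeholders, not the actual names of the released checkpoints:

```python
# Paths to the downloaded pre-trained weights (filenames are placeholders).
reg_weight_path = "./weights/registration.pth"
fusion_weight_path = "./weights/fusion.pth"

# "semantic": semantic object-oriented matching
# "scene":    global image-oriented matching
matchmode = "semantic"
```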
Configure the dataset paths, then run `python test.py`.
Configure the image paths, then run `python inference_one_pair_images.py`.
If this code is useful for your research, please cite our paper.
@article{xie2023semantics,
title={Semantics lead all: Towards unified image registration and fusion from a semantic perspective},
author={Xie, Housheng and Zhang, Yukuan and Qiu, Junhui and Zhai, Xiangshuai and Liu, Xuedong and Yang, Yang and Zhao, Shan and Luo, Yongfang and Zhong, Jianbo},
journal={Information Fusion},
pages={101835},
year={2023},
publisher={Elsevier}
}