Image-Fusion-Transformer

Framework: PyTorch

Vibashan VS, Jeya Maria Jose, Poojan Oza, Vishal M Patel

[Personal Page] [ICIP] [pdf] [BibTeX]

Platform

Python 3.7
PyTorch >= 1.0
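
A quick way to confirm your environment matches these versions (illustrative snippet, not part of the repository):

import sys
import torch

# Expect Python >= 3.7 and PyTorch >= 1.0, as listed above.
print("Python", sys.version.split()[0])
print("PyTorch", torch.__version__)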

Training Dataset

MS-COCO 2014 (T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common Objects in Context," in ECCV, 2014) is used to train our auto-encoder network.

KAIST (S. Hwang, J. Park, N. Kim, Y. Choi, and I. S. Kweon, "Multispectral Pedestrian Detection: Benchmark Dataset and Baseline," in CVPR, 2015, pp. 1037–1045) is used to train the RFN modules.
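
For orientation, the sketch below illustrates this two-stage recipe: an auto-encoder is first trained to reconstruct natural images (MS-COCO), then it is frozen and a fusion block is trained on infrared/visible pairs (KAIST). The tiny conv layers, random tensors, and loss terms are placeholders for illustration only, not the modules or losses used in this repository.

import torch
import torch.nn as nn

# Stand-ins for the real networks: a tiny encoder/decoder pair and a fusion block.
encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
decoder = nn.Sequential(nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
fusion = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())

# Stage 1: train the auto-encoder to reconstruct single images (MS-COCO in the paper).
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
for _ in range(10):                                  # placeholder for epochs over real batches
    img = torch.rand(4, 1, 64, 64)                   # placeholder batch of grayscale crops
    loss = nn.functional.mse_loss(decoder(encoder(img)), img)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: freeze the auto-encoder and train only the fusion block on IR/visible pairs (KAIST).
for p in list(encoder.parameters()) + list(decoder.parameters()):
    p.requires_grad_(False)
opt = torch.optim.Adam(fusion.parameters(), lr=1e-4)
for _ in range(10):
    ir, vis = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
    fused = decoder(fusion(torch.cat([encoder(ir), encoder(vis)], dim=1)))
    loss = nn.functional.mse_loss(fused, ir) + nn.functional.mse_loss(fused, vis)  # stand-in objective
    opt.zero_grad()
    loss.backward()
    opt.step()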

The testing datasets are included in "analysis_MatLab".

Training Command:

python train_fusionnet_axial.py

Testing Command:

python test_21pairs_axial.py
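
For reference, here is a minimal inference sketch assuming a trained fusion network saved as a whole torch module; the checkpoint and image paths are hypothetical, and the actual loading and naming are handled by the test script above.

import torch
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

to_tensor = transforms.Compose([transforms.Grayscale(), transforms.ToTensor()])
ir = to_tensor(Image.open("ir.png")).unsqueeze(0)            # hypothetical input pair
vis = to_tensor(Image.open("vis.png")).unsqueeze(0)

model = torch.load("fusion_model.pth", map_location="cpu")   # hypothetical checkpoint of a full nn.Module
model.eval()
with torch.no_grad():
    fused = model(ir, vis)                                   # assumes the module takes both source images
save_image(fused.clamp(0, 1), "fused.png")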

The fusion results are included in "analysis_MatLab".

If you have any questions about the code, feel free to contact me at vvishnu2@jh.edu.

Acknowledgement

This codebase is built on top of RFN-Nest by Hui Li.

Citation

If you find IFT useful in your research, please consider starring ⭐ the repository on GitHub and citing 📚 our paper!

@inproceedings{vs2022image,
  title={Image fusion transformer},
  author={Vs, Vibashan and Valanarasu, Jeya Maria Jose and Oza, Poojan and Patel, Vishal M},
  booktitle={2022 IEEE International Conference on Image Processing (ICIP)},
  pages={3566--3570},
  year={2022},
  organization={IEEE}
}