AVIID dataset and code for representative image-to-image translation methods from our paper "Aerial Visible-to-Infrared Image Translation: Methods, Dataset and Baseline".
Our proposed dataset consists of three sub-datasets, named AVIID-1, AVIID-2, and AVIID-3. AVIID-1 contains 993 paired visible-infrared images with a resolution of 434.
In our paper, we evaluate ten representative image-to-image translation methods on our AVIID dataset: Pix2Pix, BicycleGAN, CycleGAN, GCGAN, CUT, DCLGAN, UNIT, MUNIT, DRIT, and MSGAN. The training and testing details for these methods in our paper are given below.
For Pix2Pix, BicycleGAN, CycleGAN, GCGAN, CUT, DCLGAN, UNIT, and MUNIT:
- Python 3.7 or higher
- PyTorch 1.8.0 or higher, torchvision 0.9.0 or higher
- TensorBoard, TensorboardX, PyYAML, Pillow, dominate, visdom
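The dependencies above can be installed with pip. The paper does not pin exact versions, so the version constraints below are only a sketch consistent with the list above:

```shell
# Dependencies for Pix2Pix, BicycleGAN, CycleGAN, GCGAN, CUT, DCLGAN, UNIT, and MUNIT
pip install "torch>=1.8.0" "torchvision>=0.9.0"
pip install tensorboard tensorboardX pyyaml Pillow dominate visdom
```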
For DRIT and MSGAN:
- Python 3.6
- PyTorch 0.4.0, torchvision 0.2.0
- TensorBoard, TensorboardX
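Because DRIT and MSGAN require a much older PyTorch, a separate environment is advisable. A minimal sketch using conda (the environment name is our own placeholder):

```shell
# Isolated environment for DRIT and MSGAN (Python 3.6, PyTorch 0.4.0)
conda create -n drit-msgan python=3.6 -y
conda activate drit-msgan
pip install torch==0.4.0 torchvision==0.2.0 tensorboard tensorboardX
```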
Training and testing follow the official implementations of each method:
- Pix2Pix: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
- CycleGAN: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
- BicycleGAN: https://github.com/junyanz/BicycleGAN
- GCGAN: https://github.com/hufu6371/GcGAN
- DCLGAN: https://github.com/JunlinHan/DCLGAN
- CUT: https://github.com/taesungp/contrastive-unpaired-translation
- UNIT: https://github.com/mingyuliutw/UNIT
- MUNIT: https://github.com/NVlabs/MUNIT
- DRIT: https://github.com/HsinYingLee/DRIT
- MSGAN: https://github.com/HelenMao/MSGAN
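As an illustration, the Pix2Pix baseline can be trained and tested with the standard scripts from the pytorch-CycleGAN-and-pix2pix repository. The dataset path and experiment name below are placeholders of our choosing, not names from the paper:

```shell
# Train Pix2Pix on aligned visible-to-infrared pairs (A: visible, B: infrared)
python train.py --dataroot ./datasets/AVIID --name aviid_pix2pix --model pix2pix --direction AtoB
# Test the trained model; results are saved under ./results/aviid_pix2pix/
python test.py --dataroot ./datasets/AVIID --name aviid_pix2pix --model pix2pix --direction AtoB
```

The other baselines are trained analogously, following the command-line conventions of their respective repositories linked above.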