Namhyuk Ahn, Byungkon Kang, Kyung-Ah Sohn. [arXiv]
- Python 3
- PyTorch (1.0.0), torchvision
- Numpy, Scipy
- Pillow, Scikit-image
- h5py
- importlib
- PerceptualSimilarity
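A quick, optional way to check that the dependencies above are importable (a convenience sketch, not part of the repo):

# Optional: check that the listed dependencies import and print their versions.
import importlib

for name in ("torch", "torchvision", "numpy", "scipy", "PIL", "skimage", "h5py"):
    module = importlib.import_module(name)
    print(name, getattr(module, "__version__", "unknown"))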
We use the same protocols as in CARN, our prior work. Please see that repo for details on dataset preparation.
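In short, the training images are packed into a single HDF5 file (hence the h5py requirement). The snippet below is only a rough sketch of such a conversion, assuming a hypothetical DIV2K-style layout with an HR folder and bicubic X2/X3/X4 folders; the paths and group names are illustrative, so follow the CARN repo for the exact script.

# Rough sketch of packing DIV2K-style images into HDF5 (paths and group names are hypothetical).
import glob, os
import h5py
import numpy as np
from PIL import Image

with h5py.File("DIV2K_train.h5", "w") as f:
    for subdir in ["HR", "X2", "X3", "X4"]:   # assumed folder layout
        group = f.create_group(subdir)
        paths = sorted(glob.glob(os.path.join("DIV2K", subdir, "*.png")))
        for idx, path in enumerate(paths):
            group.create_dataset(str(idx), data=np.array(Image.open(path)))

To test on a given image directory: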
$ python pcarn/inference_dir.py \
--model pcarn \
--ckpt ./checkpoints/<path>.pth \
--data_root <dataset_root> \
--scale [2|3|4] \
--save_root <sample_dir_root>
More argument details are in the section below.
We provide the pretrained models in the checkpoints directory. To test PCARN on the benchmark datasets:
# For PCARN and PCARN (L1)
$ python pcarn/inference.py \
--model pcarn \
--ckpt ./checkpoints/<path>.pth \
--data ./dataset/<dataset> \
--scale [2|3|4] \
--sample_dir <sample_dir>
# For PCARN-M and PCARN-M (L1)
$ python pcarn/inference.py \
--model pcarn \
--ckpt ./checkpoints/<path>.pth \
--data ./dataset/<dataset> \
--scale [2|3|4] \
--sample_dir <sample_dir> \
--mobile --group 4
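PCARN-M saves parameters mainly through group convolutions, which is what the --group 4 flag controls. As a stand-alone illustration (not repository code), the snippet below compares the parameter count of a standard and a grouped 3x3 convolution:

# Parameter count of a 3x3, 64-channel convolution with and without grouping (illustration only).
import torch.nn as nn

def num_params(module):
    return sum(p.numel() for p in module.parameters())

print(num_params(nn.Conv2d(64, 64, 3, padding=1)))            # standard conv: 36,928 params
print(num_params(nn.Conv2d(64, 64, 3, padding=1, groups=4)))  # grouped conv:   9,280 params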
We provide our results on four benchmark datasets (Set5, Set14, B100 and Urban100): Google Drive
Before training PCARN(-M), the models have to be pretrained with the L1 loss.
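Perceptual quality is evaluated with PerceptualSimilarity (LPIPS). Below is a minimal, stand-alone sketch of scoring an output image against its ground truth, assuming the pip-installable lpips package (the packaged form of PerceptualSimilarity) and hypothetical file names:

# Minimal LPIPS scoring sketch (file names are hypothetical).
import lpips
import torch
import numpy as np
from PIL import Image

def to_tensor(path):
    # HWC uint8 -> NCHW float in [-1, 1], the range LPIPS expects by default
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 127.5 - 1.0
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)

loss_fn = lpips.LPIPS(net="alex")
score = loss_fn(to_tensor("sr.png"), to_tensor("hr.png"))
print(score.item())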
# For PCARN (L1)
python pcarn/main.py \
--model pcarn \
--ckpt_dir ./checkpoints/<save_directory> \
--batch_size 64 --patch_size 48 \
--scale 0 --max_steps 600000 --decay 400000 \
--memo <message_shown_in_logfile>
# For PCARN-M (L1)
python pcarn/main.py \
--model pcarn \
--ckpt_dir ./checkpoints/<save_directory> \
--mobile --group 4 \
--batch_size 64 --patch_size 48 \
--scale 0 --max_steps 600000 --decay 400000 \
--memo <message_shown_in_logfile>
Train PCARN(-M) using the commands below. Note that PerceptualSimilarity has to be set up to evaluate model performance during training. A rough sketch of a perceptual loss is shown after the commands.
# For PCARN
python pcarn/main.py \
--model pcarn \
--ckpt_dir ./checkpoints/<save_directory> \
--perceptual --msd \
--pretrained_ckpt <pretrained_model_path> \
--batch_size 32 --patch_size 48 \
--scale 0 --max_steps 600000 --decay 400000 \
--memo <message_shown_in_logfile>
# For PCARN-M
python pcarn/main.py \
--model pcarn \
--ckpt_dir ./checkpoints/<save_directory> \
--perceptual --msd \
--pretrained_ckpt <pretrained_model_path> \
--mobile --group 4 \
--batch_size 32 --patch_size 48 \
--scale 0 --max_steps 600000 --decay 400000 \
--memo <message_shown_in_logfile>
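The --perceptual flag switches training from the plain L1 objective to the perceptual objective described in the paper. The exact loss lives in the training code; purely as an illustration, a VGG-feature perceptual loss in PyTorch typically looks like the sketch below, where the layer choice and the use of L1 on features are assumptions, not the repository's exact objective.

# Illustrative VGG-feature perceptual loss (layer index and feature criterion are assumptions,
# not the repository's exact objective; input normalization is omitted for brevity).
import torch.nn as nn
import torchvision

class VGGPerceptualLoss(nn.Module):
    def __init__(self, layer=36):  # indices 0..35 cover VGG19 up to relu5_4
        super().__init__()
        vgg = torchvision.models.vgg19(pretrained=True).features[:layer].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg
        self.criterion = nn.L1Loss()

    def forward(self, sr, hr):
        # Compare feature maps of the super-resolved and ground-truth images.
        return self.criterion(self.vgg(sr), self.vgg(hr))

# usage: loss = VGGPerceptualLoss()(sr_batch, hr_batch)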
@article{ahn2019efficient,
title={Efficient Deep Neural Network for Photo-realistic Image Super-Resolution},
author={Ahn, Namhyuk and Kang, Byungkon and Sohn, Kyung-Ah},
journal={arXiv preprint arXiv:1903.02240},
year={2019}
}