This repository contains the code for DRUDN, presented in the following paper:
Z. Li, Q. Li, W. Wu, et al., "Deep recursive up-down sampling networks for single image super-resolution", Neurocomputing, 2019 (Accepted). [Link]
The code is inspired by BasicSR and was tested on Ubuntu 16.04 (Python 3.6, PyTorch 0.4.0, CUDA 9.0, cuDNN 5.1) with an NVIDIA 1080Ti GPU.
Single image super-resolution (SISR) technology reconstructs a high-resolution (HR) image from the corresponding low-resolution (LR) image. In this paper, we propose deep recursive up-down sampling networks (DRUDN) for SISR. In DRUDN, the original LR image is fed in directly, without extra interpolation. Sophisticated recursive up-down sampling blocks (RUDBs) then learn the complex mapping between the LR image and the HR image. In the reconstruction part, the feature map is up-scaled to the target size by a de-convolutional layer. Extensive experiments demonstrate that DRUDN outperforms state-of-the-art methods in both subjective effects and objective evaluation.
- Prepare the test data. Download the test sets (e.g., Set5; other test sets are available from Google Drive), then run `eval/Prepare_TestData_HR_LR.m` in Matlab to generate HR/LR images at the different scales (i.e., 2, 3, 4).
- Conduct image SR following the test steps below.
- Run `eval/Evaluate_PSNR_SSIM.m` to obtain the PSNR/SSIM values reported in the paper.
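The PSNR computed by the evaluation script can be sketched in stand-alone Python as follows. This is a minimal illustration, not the Matlab script itself; the actual script also computes SSIM, and the border-shaving convention shown here is an assumption about the evaluation protocol:

```python
import numpy as np

def psnr(hr: np.ndarray, sr: np.ndarray, shave: int = 0) -> float:
    """PSNR (dB) between two uint8 images.

    `shave` optionally crops a border before comparison, a common
    convention in SR evaluation (often set to the scale factor).
    """
    hr = hr.astype(np.float64)
    sr = sr.astype(np.float64)
    if shave > 0:
        hr = hr[shave:-shave, shave:-shave]
        sr = sr[shave:-shave, shave:-shave]
    mse = np.mean((hr - sr) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

For identical images the function returns infinity; for a maximally wrong uint8 image (0 vs. 255 everywhere) it returns 0 dB.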
- Download the models for our paper and place them in the `release` folder. We release 3 models, for x2, x3, and x4, respectively. They can be downloaded from Google Drive and OneDrive.
- Modify `options/test/test_drudn.json` to specify your own test options. Pay particular attention to the following options:
  - `scale`: the up-scaling factor between the LR and HR images.
  - `dataroot_HR`: the path of the HR dataset.
  - `dataroot_LR`: the path of the LR dataset.

  Note that `scale` must match `dataroot_LR`.
- Cd to the root folder and run `python test.py -opt options/test/test_drudn.json`. You can then find the SR images in the `eval/SR/BI/DRRN` folder.
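A minimal sketch of the relevant part of `options/test/test_drudn.json` is shown below. The concrete paths are illustrative placeholders, and the real file contains additional keys; only `scale`, `dataroot_HR`, and `dataroot_LR` come from the options described above:

```json
{
  "scale": 2,
  "dataroot_HR": "./eval/HR/x2",
  "dataroot_LR": "./eval/LR/x2"
}
```

Whatever the surrounding structure, the key constraint is that `scale` agrees with the folder given in `dataroot_LR` (e.g., a x2 LR folder with `"scale": 2`).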
-
Download DIV2K training data (800 training images) from DIV2K dataset or SNU_CVLab.
-
Modify and run
scripts/extract_subimgs.py
to crop the training data into patches. The default setting of the patches size is480 * 480
and the stride is 240. After doing executing, a folder namedDIV2K_HR_sub
with 32202 patches can be obtained. -
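The cropping performed by `scripts/extract_subimgs.py` amounts to a sliding-window crop; a simplified stand-in (the actual script also handles file I/O and output naming) might look like:

```python
import numpy as np

def extract_patches(img: np.ndarray, patch: int = 480, stride: int = 240) -> list:
    """Crop `img` into overlapping patch x patch sub-images with the given stride."""
    h, w = img.shape[:2]
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(img[y:y + patch, x:x + patch])
    return patches
```

For example, a 960 x 1440 image yields 3 x 5 = 15 patches of 480 x 480 with the default settings.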
- Modify and run `scripts/generate_mod_LR_bic.m` to downsample `DIV2K_HR_sub` into the LR dataset. For `up-scale=2`, a folder named `DIV2K_HR_sub_LRx2` with images of size 240 x 240 is obtained. The same applies for `up-scale=3` and `up-scale=4`.
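Conceptually, the Matlab script first "mod-crops" each HR image so its dimensions are divisible by the scale, then downsamples it. A rough NumPy sketch of the idea follows; note it uses a simple box-filter average as a stand-in for Matlab's bicubic resize, so pixel values will differ from the script's output:

```python
import numpy as np

def modcrop(img: np.ndarray, scale: int) -> np.ndarray:
    """Crop so height and width are divisible by `scale`."""
    h, w = img.shape[:2]
    return img[: h - h % scale, : w - w % scale]

def downsample(img: np.ndarray, scale: int) -> np.ndarray:
    """Box-filter downsample of an (H, W, C) image by `scale`.

    Illustrative stand-in for the bicubic resize used by the Matlab script.
    """
    img = modcrop(img, scale).astype(np.float64)
    h, w, c = img.shape
    out = img.reshape(h // scale, scale, w // scale, scale, c).mean(axis=(1, 3))
    return out.astype(np.uint8)
```

A 481 x 480 HR image is first cropped to 480 x 480, then reduced to 240 x 240 for scale 2, matching the folder sizes described above.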
-
Modify
options/train/train_drudn.json
to specify your own train options. Pay particular attention to the follow options:scale
the up-scale between LR images and HR images.dataroot_HR
the path of HR dataset for train or validationdataroot_LR
: the path of HR dataset for train or validationPay particular attention, the
up-scale
must match thedataroot_LR
.More settings such as
n_workers
(threads),batch_size
and other hy-parameters are set to default. You may modify them for your own sack.Some tips:
exec_debug
option can let you debug your code freely. Since the huge train dataset and the deep network, debug becomes so difficult. Here, you can setexec_debug = true
, and the train data would be fed from thedataroot_HR_debug
path anddataroot_LR_debug
, in which you can put a small quantity of train data. (This do helps me a lot!).resume
option allows you to continue training your model even after the interruption. By setting setresume = true
, the program will read the last saved check-point located inresume_path
.- We set
'data_type':'npy_reset'
to speed up data reading during training. Since reading a numpy file is faster than reading an image file, we first transform PNG file to numpy file. This process is only performed once when the first time data is accessed.
- After performing above modifications, you can start training process by runing
python train.py -opt options/train/train_DRUDN.json
. Then, you can get you model inexperiment/*****/epoch/best_epoch.pth
. You can also visualize the train loss and validation loss inresults/results.csv
.
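The `npy_reset` behaviour amounts to a one-time PNG-to-npy caching step. A simplified sketch of the idea follows; the function name, and the exact place this hook lives in the data loader, are assumptions for illustration:

```python
import os
import numpy as np

def load_cached(img_path: str, read_image) -> np.ndarray:
    """Load an image array, caching it as .npy on first access.

    `read_image` is whatever decoder the pipeline uses (e.g. cv2.imread);
    every later load takes the faster np.load path instead.
    """
    npy_path = os.path.splitext(img_path)[0] + ".npy"
    if os.path.exists(npy_path):
        return np.load(npy_path)
    img = read_image(img_path)
    np.save(npy_path, img)
    return img
```

On the first epoch each PNG is decoded once and a sibling `.npy` file is written; subsequent epochs read only the `.npy` files.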
For more results, please refer to our main paper.
If you find the code helpful in your research or work, please cite our paper:
Z. Li, Q. Li, W. Wu, et al., Deep recursive up-down sampling networks for single image super-resolution, Neurocomputing, 2019 (Accepted).
This code is built on BasicSR and RCAN. We thank the authors for sharing their code.