Pytorch-LapSRN

An implementation of the paper "Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution", trained with a perceptual loss instead of MSE.
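
For reference, a minimal sketch of the Laplacian-pyramid idea from the paper: features are extracted from the low-resolution Y channel, upsampled 2x per level with transposed convolutions, and a residual is predicted and added to a coarsely upsampled image at each level. Layer counts and channel widths below are illustrative assumptions, not this repo's exact architecture.

    import torch
    import torch.nn as nn

    class TinyLapSRN(nn.Module):
        """Illustrative LapSRN-style network: progressive 2x upsampling with residuals."""
        def __init__(self, depth=3, channels=64):
            super().__init__()
            self.head = nn.Conv2d(1, channels, 3, padding=1)  # single Y-channel input
            body = []
            for _ in range(depth):
                body += [nn.Conv2d(channels, channels, 3, padding=1), nn.LeakyReLU(0.2)]
            self.body = nn.Sequential(*body)
            # Transposed convolutions upsample features and the image by 2x per level
            self.up_feat = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
            self.up_img = nn.ConvTranspose2d(1, 1, 4, stride=2, padding=1)
            self.to_residual = nn.Conv2d(channels, 1, 3, padding=1)

        def forward(self, x, levels=2):
            feat = self.head(x)
            outputs, img = [], x
            for _ in range(levels):                               # 2x per level -> 4x total
                feat = self.up_feat(self.body(feat))
                img = self.up_img(img) + self.to_residual(feat)   # coarse upsample + residual
                outputs.append(img)
            return outputs                                        # predictions at 2x, 4x, ...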

Perceptual loss: VGG-16 with a single input channel (the Y channel) and random weights. Random weights are sufficient according to "A Powerful Generative Model Using Random Weights for the Deep Image Representation". Using a pretrained model did not change the results (you can load it here).
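
A minimal sketch of this kind of perceptual loss, assuming a feature-space MSE on a randomly initialized, single-channel VGG-16 (the truncation depth and class name are illustrative, not the repo's exact code):

    import torch.nn as nn
    import torchvision.models as models

    class RandomVGGPerceptualLoss(nn.Module):
        """MSE between feature maps of a fixed, randomly initialized VGG-16."""
        def __init__(self, num_layers=16):
            super().__init__()
            vgg = models.vgg16()                       # no pretrained weights -> random init
            features = list(vgg.features.children())[:num_layers]
            # Replace the first conv so the network accepts the Y channel only
            features[0] = nn.Conv2d(1, 64, kernel_size=3, padding=1)
            self.features = nn.Sequential(*features).eval()
            for p in self.features.parameters():
                p.requires_grad = False                # the loss network stays fixed
            self.mse = nn.MSELoss()

        def forward(self, sr_y, hr_y):
            return self.mse(self.features(sr_y), self.features(hr_y))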

Based on https://github.com/BUPTLdy/Pytorch-LapSRN (with substantial refactoring and enhancements).

Prerequisites

  • Linux (preferably Ubuntu)
  • Python 2.7
  • NVIDIA GPU
  • PyTorch, torchvision

Usage

  • Train the model
python2.7 -u train.py --train_dir ~/real_photo_12k_resized --loss_type mse --lr 1e-5 --batchSize 32
python2.7 -u train.py --train_dir ~/real_photo_12k_resized --loss_type pl --lr 1e-2 --batchSize 20

To train the perceptual loss with a pretrained VGG: download the model here, put it into model/, and pass pretrained_vgg=1 to train.py.
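
A rough illustration of how pretrained RGB VGG-16 weights can be adapted to a single Y-channel input, e.g. by averaging the RGB filters of the first conv layer. The checkpoint filename and the state-dict key follow torchvision's VGG layout and are assumptions, not this repo's exact format:

    import torch

    state = torch.load('model/vgg16.pth', map_location='cpu')   # hypothetical filename
    w = state['features.0.weight']                              # (64, 3, 3, 3) RGB conv kernels
    state['features.0.weight'] = w.mean(dim=1, keepdim=True)    # -> (64, 1, 3, 3) for Y-channel input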

  • Test the model
python test.py
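
For orientation, a minimal sketch of the typical Y-channel inference flow for LapSRN-style models (illustrative only; the checkpoint name 'model.pth', the saved-module loading, and taking the last pyramid output as the 4x prediction are assumptions, not this repo's exact API):

    import numpy as np
    import torch
    from PIL import Image

    model = torch.load('model.pth', map_location='cpu').eval()     # assumes a saved nn.Module
    img = Image.open('input.jpg').convert('YCbCr')
    y, cb, cr = img.split()

    x = torch.from_numpy(np.asarray(y)).float().div(255).unsqueeze(0).unsqueeze(0)
    with torch.no_grad():
        sr_y = model(x)[-1]                                         # last pyramid level = 4x output
    sr_y = (sr_y.squeeze().clamp(0, 1).numpy() * 255).astype(np.uint8)

    # Super-resolve only Y; upsample Cb/Cr with bicubic and merge back to RGB
    size = (sr_y.shape[1], sr_y.shape[0])
    out = Image.merge('YCbCr', [Image.fromarray(sr_y),
                                cb.resize(size, Image.BICUBIC),
                                cr.resize(size, Image.BICUBIC)]).convert('RGB')
    out.save('output.jpg')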

P.S. Work on this repo is not complete (some values are hard-coded).
