Trying to reproduce your results #6
Hi.
Oh thanks, I missed the comment in the training code.
I referred to the eval code in the SelfExSR repo, which crops scale_factor border pixels.
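For reference, a minimal sketch of that border-cropping convention (hypothetical helper names; assumes uint8 H×W×C NumPy arrays and that `scale` pixels are shaved from every side before computing PSNR, as in the SelfExSR-style evaluation):

```python
import numpy as np

def shave(img: np.ndarray, border: int) -> np.ndarray:
    # Crop `border` pixels from every side of an H x W (x C) array.
    return img[border:-border, border:-border, ...]

def psnr_shaved(hr: np.ndarray, sr: np.ndarray, scale: int) -> float:
    # Shave `scale` pixels per side, then compute PSNR on the remainder.
    hr, sr = shave(hr, scale), shave(sr, scale)
    mse = np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```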
Perfect! Thanks a lot!
Hey @nmhkahn, thanks in advance!
@Auth0rM0rgan
Hey @nmhkahn, thanks!
We trained a few models for the full 600K steps and picked the best one, but you can pick the best step if you train a single model.
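That checkpoint selection can be sketched as follows (hypothetical helper, assuming you keep a dict of validation PSNRs keyed by training step):

```python
def pick_best_checkpoint(val_psnr_by_step: dict) -> tuple:
    # Return (step, psnr) for the checkpoint with the highest validation PSNR.
    best_step = max(val_psnr_by_step, key=val_psnr_by_step.get)
    return best_step, val_psnr_by_step[best_step]
```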
@nmhkahn hi, |
@zymize Hi. |
I'm also trying to reproduce the results. I noticed a difference between the Matlab and Python PSNR implementations. Using the Matlab code:

```matlab
pkg load image
im1 = imread('/datasets/Set5/image_SRF_4/img_001_SRF_4_bicubic.png');
im2 = imread('/datasets/Set5/image_SRF_4/img_001_SRF_4_HR.png');
d = compute_difference(im1, im2, 4)
```

returns:

Using Python:

```python
from piq import psnr
from torchvision.io import read_image
from torchvision.transforms import CenterCrop
from torchmetrics import PeakSignalNoiseRatio
import torch
from torch.nn.functional import mse_loss as mse
from color import rgb_to_ycbcr  # local helper

im1 = read_image('/datasets/Set5/image_SRF_4/img_001_SRF_4_bicubic.png')
im2 = read_image('/datasets/Set5/image_SRF_4/img_001_SRF_4_HR.png')

# Shave scale*2 border pixels (scale factor 4) from both images.
border = 4 * 2
border_removal = CenterCrop((int(im1.shape[1] - border), int(im1.shape[2] - border)))
im1 = border_removal(im1)
im2 = border_removal(im2)

# Using the piq module (converts to greyscale internally).
p = psnr(im1.float().unsqueeze(0), im2.float().unsqueeze(0),
         data_range=255., convert_to_greyscale=True, reduction='mean')
print(p)

# Using the torchmetrics module on the Y channel
# (data_range set explicitly so it is not inferred from the tensors).
psnr2 = PeakSignalNoiseRatio(data_range=255.0)
p = psnr2(rgb_to_ycbcr(im1.float())[0, :, :], rgb_to_ycbcr(im2.float())[0, :, :])
print(p)

# "Manually" computing PSNR on the Y channel.
max_val = 255.0
print(10.0 * torch.log10(max_val ** 2 / mse(rgb_to_ycbcr(im1.float())[0, :, :],
                                            rgb_to_ycbcr(im2.float())[0, :, :],
                                            reduction='mean')))
```

gives:

Not sure where the difference comes from... maybe the RGB-to-YCbCr conversion is different?
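One likely source of the gap (my guess, not confirmed by the authors): Matlab's `rgb2ycbcr` produces limited-range BT.601 luma, with Y in [16, 235] for uint8 input, while some Python `rgb_to_ycbcr` helpers use full-range coefficients. A small sketch (hypothetical function names) to check which convention your conversion matches:

```python
import numpy as np

def rgb2y_matlab(img: np.ndarray) -> np.ndarray:
    # Luma as Matlab's rgb2ycbcr computes it for uint8 input:
    # limited-range ITU-R BT.601, so Y lies in [16, 235].
    img = img.astype(np.float64)
    return 16.0 + (65.481 * img[..., 0] + 128.553 * img[..., 1]
                   + 24.966 * img[..., 2]) / 255.0

def rgb2y_fullrange(img: np.ndarray) -> np.ndarray:
    # Full-range luma (no offset or rescaling), as used by some
    # Python conversion helpers.
    img = img.astype(np.float64)
    return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
```

If the two functions disagree on your images, PSNR computed on the Y channel will differ as well; SR papers conventionally report PSNR on the Matlab-style Y channel.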
Hi there,
Thank you very much for releasing this code!
I'm trying to reproduce your results; however, I guess I'm missing something...
On DIV2K bicubic: did you use the bicubic downscaling or the unknown downgrading operators?
After 575K iterations on a single Titan X, I could only achieve the following results on Urban100:
which is kind of far from the paper's results :-(
Is it just bad luck with the initialization, or am I missing something important?
Btw, I noticed that I can fit batch 64 / patch 64 on a single Titan X. When I use two GPUs, the second one only loads about 600MB of memory. Is that normal behavior?
Thanks a lot for your help!