In the upsampling part, nn.ConvTranspose2d with kernel_size=3, stride=2 is used. With these parameters, at some output positions the 3x3 kernel covers only one input pixel, which seems likely to cause checkerboard artifacts. Is that true? Or is it perhaps compensated for elsewhere in the network, for example by the large-kernel convolution at the output? https://github.com/NVIDIA/pix2pixHD/blob/master/models/networks.py#L203
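The uneven-overlap concern can be checked with a quick 1-D count (a minimal sketch; overlap_counts is an illustrative helper, not part of pix2pixHD). With kernel_size=3 and stride=2, interior output positions alternate between receiving one and two kernel taps, which in 2-D produces exactly the checkerboard pattern:

```python
# Count how many kernel taps of a transposed convolution contribute to
# each output position (1-D analysis; the 2-D pattern is the outer product).
def overlap_counts(n_in, kernel=3, stride=2):
    n_out = (n_in - 1) * stride + kernel
    counts = [0] * n_out
    for i in range(n_in):
        # input pixel i writes to output positions [i*stride, i*stride + kernel)
        for k in range(kernel):
            counts[i * stride + k] += 1
    return counts

print(overlap_counts(5))  # [1, 1, 2, 1, 2, 1, 2, 1, 2, 1, 1]
```

The alternating 1/2 coverage in the interior means neighboring output pixels are computed from a different number of input pixels, which is the usual explanation for checkerboard artifacts.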
Hi, I am following up on this. Currently my images show a few gridding artifacts. Would these be fixed by training the local enhancer, or would I need to restart training of the global generator?
This animation illustrates the uneven kernel overlap of an odd-kernel, strided transposed convolution: https://github.com/vdumoulin/conv_arithmetic/blob/master/gif/padding_strides_odd_transposed.gif
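For comparison, a commonly suggested remedy is resize-convolution: nearest-neighbor upsampling followed by a stride-1 "same" convolution. Under the same 1-D simplification (resize_conv_counts is an illustrative helper, not code from this repo), its coverage is uniform everywhere except at the borders:

```python
# Tap counts for nearest-neighbor upsample (factor `scale`) followed by a
# stride-1 convolution with zero padding of kernel//2 ("same" convolution).
def resize_conv_counts(n_in, kernel=3, scale=2):
    n_up = n_in * scale
    pad = kernel // 2
    counts = [0] * n_up
    for o in range(n_up):
        for k in range(kernel):
            j = o + k - pad  # input index read for output position o
            if 0 <= j < n_up:
                counts[o] += 1
    return counts

print(resize_conv_counts(5))  # [2, 3, 3, 3, 3, 3, 3, 3, 3, 2]
```

Every interior output position receives exactly kernel taps, so this construction avoids the alternating coverage that a stride-2, kernel-3 transposed convolution exhibits.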