It seems that in the current version of `master`, in the `he_init` function, `gain` is being passed for the argument `a`. But per the [PyTorch code](https://github.com/pytorch/pytorch/blob/v0.4.1/torch/nn/init.py#L296) in both versions 1.0 and 0.4.1, `kaiming_normal_()` expects `a` to be the negative slope and takes the nonlinearity as a separate `nonlinearity` argument, so the following function call should do:

PyTorch source code link: https://github.com/pytorch/pytorch/blob/v0.4.1/torch/nn/init.py#L296
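The exact call from the original issue is not preserved here; a minimal sketch of what it likely looked like, assuming a ReLU network and a hypothetical `layer` for illustration:

```python
import math

import torch.nn as nn

# Hypothetical layer for illustration; in the repo this would be
# whatever weight tensor he_init receives.
layer = nn.Linear(256, 128)

# Pass the nonlinearity by name instead of passing the gain as `a`;
# `a` is only the negative slope used for 'leaky_relu'.
nn.init.kaiming_normal_(layer.weight, nonlinearity='relu')

# For fan_in mode (the default), the expected std is
# gain / sqrt(fan_in) = sqrt(2 / 256) ~= 0.088
expected_std = math.sqrt(2 / 256)
print(layer.weight.std().item(), expected_std)
```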
I have tested it on some layers; the call above gives the expected value for `std()` of `layer.weight`.

Your code might have been based on a different version of PyTorch that did expect the gain to be passed separately, but I thought I should give you a heads up just in case.