
Where are ScaleBias layers? #2

Closed
ducha-aiki opened this issue Mar 16, 2016 · 4 comments

Comments

@ducha-aiki

The original BN paper uses a scale-bias transform y = kx + b after BatchNorm. Are you omitting it intentionally?
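
For reference, a sketch of the transform from the BN paper, where the k and b above correspond to the learnable gamma (scale) and beta (shift):

```latex
% Batch normalization (Ioffe & Szegedy, 2015):
% normalize over the mini-batch, then apply the learned affine transform.
\hat{x}_i = \frac{x_i - \mu_\mathcal{B}}{\sqrt{\sigma_\mathcal{B}^2 + \epsilon}},
\qquad
y_i = \gamma \hat{x}_i + \beta
```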

@smichalowski
Owner

I've followed the MXNet implementation when creating this Caffe prototxt. Have you tried to train this network with a Scale layer?

@ducha-aiki
Author

Not this one, but other networks.
MXNet, like other frameworks and unlike Caffe, has the scale-bias (gamma/beta) inside the BatchNorm layer, while the Caffe maintainers decided to use a separate scale-bias layer:
BVLC/caffe#3591
BVLC/caffe#3161
BVLC/caffe#3229
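
For illustration, a minimal sketch of how this is typically expressed in a Caffe prototxt; the layer names are placeholders, not taken from this repository:

```
# Caffe's BatchNorm layer only normalizes to zero mean / unit variance.
# The learnable gamma/beta from the BN paper come from a separate Scale layer.
layer {
  name: "conv1/bn"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1/bn"
}
layer {
  name: "conv1/scale"
  type: "Scale"
  bottom: "conv1/bn"
  top: "conv1/bn"        # in-place is common here
  scale_param {
    bias_term: true      # adds the beta (shift) term alongside gamma
  }
}
```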

@smichalowski
Owner

It seems you are right. I'll check your prototxt and merge your PR. Thank you :)

@ducha-aiki
Author

@smichalowski you are welcome :)
P.S. It could be that the real performance difference will be quite small.
