Unmentioned but critical LayerNorm #3
Comments
I measured the performance of the models without the LayerNorm parts. For both resnet18 and wide-resnet50, AUROC was quite similar, sometimes even better than the original ones. DeiT also showed comparable performance (lower by about 0.03~0.05). However, with CaiT the loss was extremely high and AUROC was 0.5! I can't understand why these models behave so differently depending on Layer Normalization.
Use `x = x.flatten(2).transpose(1, 2)` to reshape the feature map from BCHW to (B, N, C); that way the LayerNorm doesn't depend on the input size.
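A minimal sketch of that reshape, assuming the feature map comes from the backbone; the shapes here are illustrative, not taken from the repo:

```python
import torch
import torch.nn as nn

B, C, H, W = 2, 256, 14, 14            # example shapes, not from the repo
feat = torch.randn(B, C, H, W)         # BCHW feature map from the backbone

tokens = feat.flatten(2).transpose(1, 2)   # (B, C, H*W) -> (B, N, C), N = H*W
norm = nn.LayerNorm(C)                     # normalized_shape depends only on C
tokens = norm(tokens)

# reshape back to BCHW if the following layers expect a 2D feature map
feat = tokens.transpose(1, 2).reshape(B, C, H, W)
```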
Maybe using BatchNorm after the conv2d will work.
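A small sketch of that alternative, assuming a simple conv projection before the flow; the layer and channel count are illustrative, not from the repo:

```python
import torch.nn as nn

proj = nn.Sequential(
    nn.Conv2d(256, 256, kernel_size=3, padding=1),
    nn.BatchNorm2d(256),   # normalizes per channel, independent of H and W
)
```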
Well, after learning more about transformers, I realize that adding LayerNorm to intermediate output feature maps is very common, for example when applying transformers as the backbone for semantic segmentation (https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation/blob/87e6f90577435c94f3e92c7db1d36edc234d91f6/mmseg/models/backbones/swin_transformer.py#L620). So I guess that's why the paper never mentioned it. And for resnet, LayerNorm may not be necessary, as pointed out by @cytotoxicity8.
To achieve results comparable to the original paper, LayerNorm is applied to the feature before the NF. This is never mentioned in the paper, and the usage is very tricky (but this is the only way that works for me):
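A hedged sketch of what this describes: a LayerNorm over the channel dimension of each backbone feature map, applied before the normalizing-flow (NF) head. The module name, `nf_head`, and the channel handling below are hypothetical, not taken from this repository:

```python
import torch
import torch.nn as nn

class FeatureNorm(nn.Module):
    """Apply LayerNorm over channels of a BCHW feature map (hypothetical helper)."""
    def __init__(self, channels):
        super().__init__()
        self.norm = nn.LayerNorm(channels)

    def forward(self, feat):                        # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)    # (B, H*W, C)
        tokens = self.norm(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)

# usage sketch: normalize each backbone feature before passing it to the flow
# z, log_jac = nf_head(FeatureNorm(c)(feature))    # nf_head is hypothetical
```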