efficientnet-b8 and AdvProp #115
Awesome, thanks for posting this. It's on the way.
@lukemelas maybe your current code in the “tf_to_pytorch” dir will work?
@lukemelas any update on that?
Apologies for the delay on this (I had final exams this past week). Coming soon.
Hi @lukemelas, just wondering if you had the chance to work on that one.
Sorry this took forever. It should be in now :) Let me know if you have any issues.
Closing this, but feel free to re-open it if you have any issues/questions.
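(For anyone finding this later: a minimal sketch of loading the AdvProp checkpoint discussed in this issue, assuming the `advprop` flag the repo exposes in `from_pretrained`; pair it with the AdvProp preprocessing discussed below, not the usual ImageNet normalization.)

from efficientnet_pytorch import EfficientNet

# Load AdvProp-trained weights; these expect inputs scaled to [-1, 1]
# rather than ImageNet mean/std normalization (see the comments below).
model = EfficientNet.from_pretrained('efficientnet-b8', advprop=True)
model.eval()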
Dear all, I am using https://colab.research.google.com/drive/1Jw28xZ1NJq4Cja4jLe6tJ6_F5lCzElb4 Suryadi
Did you use the advprop image preprocessing or the usual preprocessing? See https://github.com/lukemelas/EfficientNet-PyTorch/blob/master/examples/imagenet/main.py#L211. That's the reason advprop is not enabled by default. Let me know if it still doesn't work and I can look into it.
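(A minimal sketch of the two preprocessing variants referenced above, following the pattern in examples/imagenet/main.py; the exact transforms there may differ slightly.)

from torchvision import transforms

# Usual preprocessing: ImageNet mean/std normalization
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

# AdvProp preprocessing: scale inputs to [-1, 1] instead
advprop_normalize = transforms.Lambda(lambda img: img * 2.0 - 1.0)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    advprop_normalize,  # swap in `normalize` for non-advprop weights
])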
Hi @lukemelas, thanks for the repo. Do you know the reason for using a different normalization for advprop? If I am training a new model with advprop, why should I use it rather than the ImageNet mean and std?
One question: if I'm getting the gist of the paper right, it seems like AdvProp uses two batchnorm layers (one for standard data and the other for adversarial data). However, in the code I don't see where that second batchnorm layer is implemented. Am I misunderstanding the paper, or is the code not providing it?
@ooodragon94 I think that part is not in this repo. But it is easy to implement:

import torch.nn as nn

class EfficientNet(nn.Module):
    def __init__(self, advprop=False, **kwargs):
        super().__init__()
        self.somelayers = nn.Identity()  # stand-in for the conv blocks
        self.norm = nn.BatchNorm2d(32)
        if advprop:
            # auxiliary BN that only ever sees adversarial examples
            self.aux_norm = nn.BatchNorm2d(32)

    def forward(self, x, advprop=False):
        x = self.somelayers(x)
        # adversarial batches go through the auxiliary BN,
        # clean batches through the main BN
        x = self.aux_norm(x) if advprop else self.norm(x)
        return x
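(Building on that sketch, a hypothetical training step for the AdvProp scheme from the paper: clean and adversarial batches share weights but use separate BN statistics. `make_adversarial` stands in for a PGD-style attack and is not part of this repo.)

import torch.nn.functional as F

def advprop_step(model, optimizer, x, y, make_adversarial):
    adv_x = make_adversarial(model, x, y)  # e.g. a PGD attack
    # Clean batch uses the main BN statistics...
    loss = F.cross_entropy(model(x, advprop=False), y)
    # ...adversarial batch uses the auxiliary BN, per the AdvProp paper.
    loss = loss + F.cross_entropy(model(adv_x, advprop=True), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()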
When using the advprop pretrained weights and advprop normalization, training becomes very unstable, and accuracy also decreases.
@feiwofeifeixiaowo If you didn't actually implement advprop but only loaded its weights, then it will definitely suffer in training accuracy.
With AdvProp, EfficientNet got a higher score on ImageNet. Would you update to the new checkpoint?
the paper: https://arxiv.org/pdf/1911.09665.pdf
https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet