Unexpected output shape value for BatchNorm1dToQuantScaleBias #450
Issue solved: There were two small changes I made that made it work. First, the default parameter […]. Second, I was initially getting […]. If you would like, I can make a pull request with these changes.
Hi, thanks for looking into this!
Hi @green-cabbage, many thanks, I've applied the solution.

Hi @Giuseppe5, I was wondering why the BN1dScaleBias operation is a missing topic, given that BatchNorm1d is very widely used in non-quantized contexts. Is it because quantization is about low precision, and therefore BN logic is not supposed to be used?
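For context on why a quantized flow can replace BatchNorm1d with a plain per-channel scale and bias at all: at inference time, batch norm with frozen running statistics is just an affine transform per channel, so it folds into a single scale/bias pair that is straightforward to quantize. A minimal sketch of that folding in plain PyTorch (this is the standard BN-folding identity, not Brevitas internals):

```python
# Sketch of BN folding (standard math, not Brevitas internals): in eval
# mode, y = gamma * (x - mu) / sqrt(var + eps) + beta reduces to
# y = scale * x + bias with per-channel scale and bias.
import torch
import torch.nn as nn

C = 30
bn = nn.BatchNorm1d(C).eval()  # eval mode: uses frozen running stats

std = torch.sqrt(bn.running_var + bn.eps)
scale = bn.weight / std                       # gamma / sqrt(var + eps)
bias = bn.bias - bn.running_mean * scale      # beta - mu * scale

x = torch.randn(64, C)
# The folded affine transform reproduces the BN output.
assert torch.allclose(bn(x), x * scale + bias, atol=1e-6)
```

This is why a quantization library exposes BN1d as a "scale bias" layer rather than as a separate batch-norm op.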
Hey, I have been trying to implement brevitas' version of nn.BatchNorm1d(), and I receive this error:
ValueError: Using a target size (torch.Size([64, 1])) that is different to the input size (torch.Size([1, 30, 64, 1])) is deprecated. Please ensure they have the same size.
I have traced the error back to what I believe is the cause. When I run this:
the code prints:
torch.Size([1, 30, 64, 30])
when I would expect it to return something like torch.Size([64, 30]).
Do you guys know why the quant layer returns that specific shape of [1, input_dim, batch_dim, input_dim]? Is this behavior intentional? If so, why? Thanks in advance!
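The reported shape is consistent with a broadcasting mismatch: if a per-channel scale is stored with a 4D, conv-style runtime shape (1, C, 1, 1) but applied to a 2D activation (N, C), broadcasting inflates the result to (1, C, N, C). A hypothetical reproduction in plain PyTorch (this only mimics the shapes; it is not Brevitas code, and the 4D scale shape is an assumption):

```python
# Hypothetical sketch: reproducing the observed shape with pure PyTorch
# broadcasting. A scale stored in the conv-style shape (1, C, 1, 1)
# applied to a 2D input (N, C) broadcasts to (1, C, N, C).
import torch

N, C = 64, 30                      # batch_dim, input_dim from the report
x = torch.randn(N, C)              # 2D activation, e.g. after nn.Linear

scale_4d = torch.ones(1, C, 1, 1)  # conv-style per-channel shape (assumed)
out_bad = x * scale_4d             # (1, 1, N, C) * (1, C, 1, 1)
print(out_bad.shape)               # torch.Size([1, 30, 64, 30])

scale_2d = torch.ones(1, C)        # shape matching a 2D activation
out_ok = x * scale_2d
print(out_ok.shape)                # torch.Size([64, 30])
```

If this is what is happening, the fix the commenter alludes to (changing a default shape-related parameter so the scale/bias broadcast as (1, -1) rather than a 4D shape) would restore the expected [64, 30] output.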