Why do I get '2' as batch size? #168
Comments
Even if you configure the batch_size argument, the summary still feeds the network a testing tensor with a fixed batch of 2 (apparently so that layers like BatchNorm do not fail on a single sample); the argument only changes the printed table. This behavior may cause errors when the network requires the input batch to be a specific value. To fix this problem, I modified the code and let the testing tensor use the configured batch size instead. Actually, it seems that the author has not maintained this package for a long time, so I recommend you try one of the maintained alternatives.
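A minimal sketch of this behavior, assuming the commonly distributed torchsummary (pytorch-summary) package: the hypothetical Probe module below prints whatever shape summary() actually feeds it, and the leading dimension comes out as 2 even though a different batch_size is requested.

```python
import torch.nn as nn
from torchsummary import summary

class Probe(nn.Module):
    """Hypothetical module: layer sizes chosen only to make the shapes easy to read."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(2, 4)

    def forward(self, x):
        # Expected to print torch.Size([2, 2, 2]) when input_size=(2, 2):
        # the leading 2 is the probe batch added by summary(), not part of input_size.
        print("shape seen inside forward:", x.shape)
        return self.fc(x)

summary(Probe(), input_size=(2, 2), batch_size=8, device="cpu")
```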
Thanks a lot. I wanted to ask: why does it take '1' as the batch size when I input a shape similar to an image? I will definitely check out the alternatives.
I do not understand your question. In your previous posts, you have not mentioned any batch with a batch size of 1. By the way, I do not understand where that '1' comes from either, because you have mentioned that your output is

DECODER
torch.Size([2, 2, 2])

Why do you say you do not see the batch size of 2? Here is a tip: if you are using torchsummary.summary(..., input_size=...), you should not let your input_size include the batch dimension; summary() prepends the batch dimension to its test tensor by itself.
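To illustrate that tip, here is a hedged sketch with a hypothetical decoder whose per-sample latent shape is (2, 2); the layers are made up for readability and are not the poster's actual model.

```python
import torch.nn as nn
from torchsummary import summary

# Hypothetical decoder stand-in: accepts a per-sample latent of shape (2, 2)
# and maps it to an 8-dimensional vector.
decoder = nn.Sequential(
    nn.Flatten(),     # (batch, 2, 2) -> (batch, 4)
    nn.Linear(4, 8),  # (batch, 4)    -> (batch, 8)
)

# input_size is the per-sample shape only; summary() adds the batch dimension itself,
# so the shapes flowing through at probe time are (2, 2, 2), (2, 4), (2, 8).
summary(decoder, input_size=(2, 2), device="cpu")
```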
@cainmagi Thanks!
Hey,
This is a really great tool to visualize the model. However, I was trying to see how my decoder in the VAE is working, and the input to it is the latent space (dim = (2, 2)). When I get the output, I see an extra 2 there. Like this: summary(decoder, (2, 2))
Output is:

DECODER
torch.Size([2, 2, 2])
My decoder is initialized like this:
Do let me know.
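The decoder definition and the full output were not captured above; as a stand-in, this hedged sketch runs a hypothetical decoder directly on a hand-built latent batch, which is a quick way to confirm that the extra leading 2 comes from summary()'s probe tensor rather than from the model itself.

```python
import torch
import torch.nn as nn

# Hypothetical decoder (not the one from the original post), used only to
# contrast a direct forward pass against summary()'s fixed probe batch.
decoder = nn.Sequential(nn.Flatten(), nn.Linear(4, 8))

latent = torch.randn(5, 2, 2)  # any batch size you actually care about, e.g. 5
out = decoder(latent)
print(out.shape)  # torch.Size([5, 8]): the batch follows your input,
                  # whereas summary() always probes with a batch of 2
```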