test failed on v0.2 #11
Comments
Hi, I hope you could take a look at this; it seems like a pretty major problem.
Hi, thanks for your info! I will look at it ASAP.
@yifita what version of PyTorch are you using?
@gpleiss I'm using cuda 8.0, cudnn 5.1.5, python 3.6, pytorch 0.1.12
I ran this on cuda 8.0, cudnn 5.1.5, python 3.6, and pytorch 0.1.12_2, and all the tests pass for me...
Oh, I'm sorry, I actually have pytorch 0.2.0! I wonder how I got the previous version.
@yifita you might want to have a look at the release notes: https://github.com/pytorch/pytorch/releases/tag/v0.2.0. Basically, the APIs of some fundamental operations changed.
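As a rough illustration of the NumPy-style broadcasting rules that v0.2 adopted, here is a minimal pure-Python sketch of the shape rule (the `broadcast_shape` helper is hypothetical, not a PyTorch API):

```python
from itertools import zip_longest

def broadcast_shape(a, b):
    """Compute the result shape under NumPy-style broadcasting rules:
    align shapes from the trailing dimension; each pair of sizes must
    either match or one of them must be 1 (missing dims count as 1)."""
    out = []
    for x, y in zip_longest(reversed(a), reversed(b), fillvalue=1):
        if x != y and 1 not in (x, y):
            raise ValueError("shapes %r and %r are not broadcastable" % (a, b))
        out.append(max(x, y))
    return tuple(reversed(out))

print(broadcast_shape((4, 1), (3,)))  # (4, 3)
```

Code written against the old implicit-expansion semantics of 0.1.12 can silently change meaning under these rules, which is why the backcompat warnings below are useful.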
Had a look; it seems they focused on broadcasting and indexing while adding a few layers I wanted to experiment with in my model, so I'd prefer to stick with 0.2.0.
I think you could do this yourself. Adding these lines and cleaning up all the warnings should make this code work under v0.2:

```python
# insert this at the top of your scripts (usually main.py)
import sys, warnings, traceback, torch

def warn_with_traceback(message, category, filename, lineno, file=None, line=None):
    sys.stderr.write(warnings.formatwarning(message, category, filename, lineno, line))
    traceback.print_stack(sys._getframe(2))

warnings.showwarning = warn_with_traceback
warnings.simplefilter('always', UserWarning)

torch.utils.backcompat.broadcast_warning.enabled = True
torch.utils.backcompat.keepdim_warning.enabled = True
```
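To see what the warning hook above does without needing torch installed, here is a stdlib-only sketch of the same pattern; the warning message is a made-up example, and stderr is captured into a buffer just so the output can be inspected:

```python
import io
import sys
import warnings
import traceback
import contextlib

def warn_with_traceback(message, category, filename, lineno, file=None, line=None):
    # Print the warning text plus the stack that triggered it, so a
    # deprecated call can be located in user code rather than in a library.
    sys.stderr.write(warnings.formatwarning(message, category, filename, lineno, line))
    traceback.print_stack(sys._getframe(2), file=sys.stderr)

warnings.showwarning = warn_with_traceback
warnings.simplefilter('always', UserWarning)

buf = io.StringIO()
with contextlib.redirect_stderr(buf):
    warnings.warn("nn.Container is deprecated", UserWarning)
captured = buf.getvalue()
```

After this runs, `captured` contains both the formatted warning and the stack trace leading to the `warnings.warn` call.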
We will be catching this repo up soon! I'll try to get to it later today.
@taineleau it doesn't seem to be that straightforward. The only warning I got was that nn.Container is deprecated, but changing it to nn.Module didn't solve the issue.
@yifita hmmm... I've come to realize that they might have changed the backend a bit...
Yep :/ my guess is that it's related to double backpropagation.
Sorry for the late response. After looking at v0.2's backend, I guess convnd is broken mainly because they refactored the API a bit. They have removed the Python-level code calling conv.backward from the code base, so it's a little hard to learn the correct API from the CPP file. I have emailed them to ask about the change; hopefully we can get some hints from them.
Update: I emailed a PyTorch developer, and he said the API of … However, the good news is that I have run the models on PyTorch v0.2, and the performance (both final accuracy and speed) remains the same as on v0.1.12. So this issue shouldn't be a big concern for now.
For the torch._C.cudnn_convolution issue, I tried changing the invocation by adding an additional "False" parameter. It works on the master version of PyTorch (0.4).
@mingminzhen That flag controls whether the deterministic convolution is used. It's not this issue, though. Please check this topic.
With #28, this repo is now compatible with PyTorch 0.3.x. Closing this issue.
efficient_densenet_bottleneck_test.py fails in test_backward_computes_backward_pass. I uncommented the code in densenet_efficient.py, but the issue persists.