
Expected runtime for convnet_mnist.py? #7

Open
jrosebr1 opened this issue Feb 24, 2015 · 6 comments

Comments

@jrosebr1

Is there an expected (ballpark, rough estimate) runtime for convnet_mnist.py? I ran mlp_mnist.py and the script finished extremely quickly. But for convnet_mnist.py, I've been sitting at the same output for over 30 minutes, which seems extremely long given that the Caffe MNIST example finishes in a couple of minutes:

INFO SGD: Model contains 127242 parameters.
INFO SGD: 469 mini-batch gradient updates per epoch.
(no extra output after this)

@jrosebr1
Author

I was able to profile the GPU, and it turns out the CPU was being utilized the entire time (hence the long runtimes). I tried to compile the cudarray dependency with cuDNN support, but that led to compilation errors. Is it possible to use deeppy on the GPU without cuDNN?
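For anyone hitting the same symptom, a quick generic check (not specific to deeppy) is to watch utilization with nvidia-smi while the training script runs; a GPU-bound run should show sustained non-zero utilization:

```shell
# Poll GPU utilization and memory once per second while convnet_mnist.py trains.
# If utilization stays at 0% and memory.used stays flat, the work is on the CPU.
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1
```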

@lre
Collaborator

lre commented Feb 25, 2015

Hey @jrosebr1
Yes, it is possible to compile and run cudarray on the GPU without cuDNN; in that case the matmul-based convolution functions are used instead. This is controlled by the CUDNN_ENABLED environment variable.
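A rough build sketch, assuming the Makefile-based install described in the cudarray README (the exact targets and the effect of CUDNN_ENABLED=0 are assumptions; adjust to your checkout):

```shell
# Build cudarray without cuDNN; convolutions then fall back to
# matmul-based kernels that still run on the GPU.
export CUDNN_ENABLED=0
make                      # build libcudarray against CUDA
sudo make install         # install the shared library
python setup.py install   # install the Python package
```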

@jrosebr1
Author

@lre Thanks for the comment. Just to clarify: setting CUDNN_ENABLED=1 will compile cudarray with cuDNN support (and in my case, leads to a compilation error). Given this, I removed the CUDNN_ENABLED environment variable and compiled cudarray as is. Was I supposed to set CUDNN_ENABLED=0 to indicate that I still want GPU support?
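For what it's worth, under the common shell convention of reading such a flag with a default (e.g. `${CUDNN_ENABLED:-0}`), leaving the variable unset behaves the same as setting it to 0. The snippet below illustrates that convention; it is an assumption about how build flags like this are typically read, not a claim about cudarray's actual Makefile:

```shell
# Show how unset, =0, and =1 look to a Makefile-style shell check.
check() {
  if [ "${CUDNN_ENABLED:-0}" = "1" ]; then
    echo "cuDNN build"
  else
    echo "GPU build without cuDNN"
  fi
}
unset CUDNN_ENABLED; check   # -> GPU build without cuDNN
CUDNN_ENABLED=0; check       # -> GPU build without cuDNN
CUDNN_ENABLED=1; check       # -> cuDNN build
```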

@andersbll
Owner

@jrosebr1: Sorry about the lack of response on my part. I have been unable to work due to illness.

From your first message it sounds like an error is preventing you from using the GPU. When using the GPU, CUDArray/DeepPy is very competitive speed-wise.

Regarding CUDNN_ENABLED=0: In this case, CUDArray falls back to convolution by matrix multiplications on the GPU (Caffe style). While this is pretty fast compared to a CPU, I recommend using cuDNN.

Feel free to ignore this post as you have probably moved on since then! :)

@jrosebr1
Author

@andersbll Thanks for the reply! I'll be sure to give cuDNN another try. I'm still not exactly sure what the error was in this case. When I set CUDNN_ENABLED=1, errors were thrown; with CUDNN_ENABLED=0, only the GPU was being utilized.

@andersbll
Owner

Ok! Let me know if you run into any error messages.
