Expected runtime for convnet_mnist.py? #7
Comments
I was able to profile the GPU and it turns out the CPU was being utilized the entire time (hence the long runtimes). I tried to compile the cudarray dependency with cuDNN support, but that led to compilation errors. Is it possible to use deeppy on the GPU without cuDNN?
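One rough way to check which backend cudarray actually ended up on is to time the same matrix product through NumPy and through cudarray. The sketch below is only a sketch: it assumes the CUDARRAY_BACKEND environment variable and the NumPy-like ca.array/ca.dot calls described in the cudarray README, and it assumes that np.array(...) copies the result back to the host.

```python
import os
import time

# Request the CUDA backend before cudarray is imported (assumption: cudarray
# reads CUDARRAY_BACKEND at import time and otherwise falls back to NumPy).
os.environ.setdefault('CUDARRAY_BACKEND', 'cuda')

import numpy as np
import cudarray as ca

n = 2048
a = np.random.randn(n, n).astype(np.float32)

# Large matrix product on the CPU with NumPy ...
t0 = time.time()
np.dot(a, a)
cpu_time = time.time() - t0

# ... and the same product through cudarray.
a_gpu = ca.array(a)
t0 = time.time()
c = ca.dot(a_gpu, a_gpu)
np.array(c)  # assumed to copy the result back, forcing pending GPU work to finish
gpu_time = time.time() - t0

# If the two timings are in the same ballpark, cudarray is most likely running
# on its NumPy (CPU) backend, which would explain the slow epochs.
print('numpy: %.3f s   cudarray: %.3f s' % (cpu_time, gpu_time))
```

The first cudarray call also pays the CUDA context initialization cost, so it is worth running the timing twice and looking at the second run.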
Hey @jrosebr1
@lre Thanks for the comment. Just to clarify: setting CUDNN_ENABLED=1 will compile cudarray with cuDNN support (and in my case, leads to a compilation error). Given this, I removed the CUDNN_ENABLED environment variable and compiled cudarray as is. Was I supposed to set CUDNN_ENABLED=0 to indicate that I still want GPU support?
@jrosebr1: Sorry about the lack of response on my part; I have been unable to work due to illness. From your first message, it sounds like an error is preventing you from using the GPU. When using the GPU, CUDArray/DeepPy is very competitive speed-wise. Regarding CUDNN_ENABLED=0: in this case, CUDArray falls back to convolution by matrix multiplications on the GPU (Caffe style). While this is pretty fast compared to a CPU, I recommend using cuDNN. Feel free to ignore this post as you have probably moved on since then! :)
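For reference, the "convolution by matrix multiplications" fallback mentioned above is the im2col + GEMM trick popularized by Caffe: every input patch is unrolled into a column so the whole convolution becomes one big matrix product. Below is a rough NumPy sketch of the idea (not cudarray's actual kernels), using the cross-correlation convention that conv layers use.

```python
import numpy as np

def conv2d_im2col(x, w):
    """Valid 2D convolution (cross-correlation) via im2col + one big GEMM.

    x: input images, shape (N, C, H, W)
    w: filters,      shape (F, C, KH, KW)
    returns feature maps of shape (N, F, H - KH + 1, W - KW + 1)
    """
    N, C, H, W = x.shape
    F, _, KH, KW = w.shape
    OH, OW = H - KH + 1, W - KW + 1

    # im2col: unroll every KH x KW patch into a column -> (N, C*KH*KW, OH*OW)
    cols = np.empty((N, C * KH * KW, OH * OW), dtype=x.dtype)
    row = 0
    for c in range(C):
        for i in range(KH):
            for j in range(KW):
                cols[:, row, :] = x[:, c, i:i + OH, j:j + OW].reshape(N, -1)
                row += 1

    # One matrix multiply per image: (F, C*KH*KW) x (C*KH*KW, OH*OW)
    w_mat = w.reshape(F, -1)
    out = np.matmul(w_mat, cols)      # -> (N, F, OH*OW)
    return out.reshape(N, F, OH, OW)
```

The trick trades extra memory for a single large GEMM, which on the GPU runs through cuBLAS; that is why the fallback is still fast compared to a CPU even without cuDNN.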
@andersbll Thanks for the reply! I'll be sure to give cuDNN another try. I'm still not exactly sure what the error was in this case: when I set CUDNN_ENABLED=1, errors were thrown, and when CUDNN_ENABLED=0, only the CPU was being utilized.
Ok! Let me know if you run into any error messages.
Is there an expected (ballpark, rough estimate) runtime for convnet_mnist.py? I ran mlp_mnist.py and the script finished extremely quickly. But for convnet_mnist.py, I've been sitting at the same output for over 30 minutes, which seems extremely long given that the Caffe MNIST example finishes in a couple of minutes:
INFO SGD: Model contains 127242 parameters.
INFO SGD: 469 mini-batch gradient updates per epoch.
(no extra output after this)
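For what it's worth, the "469 mini-batch gradient updates per epoch" line is consistent with MNIST's 60,000 training images and a batch size of 128; the batch size is a guess here, not taken from the script.

```python
import math

# Hypothetical numbers: 60,000 MNIST training images, assumed batch size of 128.
n_train, batch_size = 60000, 128
print(int(math.ceil(n_train / float(batch_size))))  # -> 469, matching the SGD log line
```

So the run was stalling before completing even a single epoch, which points at a slow CPU backend rather than a problem with the script itself.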