Define which GPU for inference? #1418
Comments
Which version of DIGITS are you using?
Version 5.1-dev. I should clarify that GPU 0 is busy with other frameworks, such as TensorFlow. Can DIGITS see that? I am not sure.
Ah, thanks for the clarification. No, DIGITS isn't aware of what other processes may be doing on the GPU. You should isolate GPUs for DIGITS with CUDA_VISIBLE_DEVICES.
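For reference, a minimal sketch of launching the server under GPU isolation (the digits-devserver path is an assumption; adjust it for your install). Note that CUDA re-enumerates the visible devices, so the isolated GPU appears as device 0 inside DIGITS:

```python
import os
import subprocess

# Expose only GPU 1 to the DIGITS process; GPU 0 stays invisible.
# Inside this process, GPU 1 is re-enumerated as device 0.
env = os.environ.copy()
env["CUDA_VISIBLE_DEVICES"] = "1"

# The dev server path is an assumption -- adjust for your install.
subprocess.run(["./digits-devserver"], env=env)
```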
Thanks for the quick solution! Still, it would be best to provide this option on the inference page, so the GPU can be changed on the fly.
@lukeyeager I have an implementation of this that I can submit in a PR, but I don't know if the layout is okay for you. Could you please take a look at it? If the user doesn't have more than one GPU, nothing changes; if the user has multiple GPUs, the inference page shows a GPU selector (screenshots were attached to the original comment). Can you think of any tests I should run before submitting it? I did some tests and have been using this for a while in our lab; everything seems to work properly.
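Not from the actual PR, just an illustration of the idea: DIGITS builds its pages with Flask and wtforms, so a GPU selector on the inference form could look roughly like the sketch below, with all names hypothetical:

```python
from wtforms import Form, SelectField

def make_inference_form(gpu_count):
    """Hypothetical sketch: only render the selector when gpu_count > 1."""
    class InferenceForm(Form):
        # Choices are (value, label) pairs; the selected value would be
        # forwarded to inference.py as --gpu=<value>.
        gpu_id = SelectField(
            'GPU',
            choices=[(str(i), 'GPU #%d' % i) for i in range(gpu_count)],
            default='0',
        )
    return InferenceForm
```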
@rodrigoberriel Hello, will this make it possible to select the GPU for inference from the command line (e.g. via curl)? As far as I know, you cannot choose the GPU for command-line inference, correct? Thanks.
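For context, a command-line inference call looks roughly like the sketch below (the endpoint path and field names are assumptions based on my reading of the classification REST API, and the job id is a placeholder). Note that the payload has no GPU field, which is exactly the limitation being asked about:

```python
import requests

# Hedged sketch of command-line inference against a local DIGITS server.
# Endpoint path and field names are assumptions; the job id is a placeholder.
url = 'http://localhost:5000/models/images/classification/classify_one.json'
with open('image.jpg', 'rb') as f:
    r = requests.post(
        url,
        data={'job_id': '20170101-000000-abcd'},
        files={'image_file': f},
    )
print(r.json())
```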
Any news on this PR? If a CPU option is also added in the long run, even better :)
Wondering if there are any updates on this? I still need inference to run on the first available GPU within the DIGITS environment, but I think it locks inference to GPU 0 in multi-GPU environments.
Hi,
How can we define which GPU to use for inference?
(the same way we can for training)
It seems that DIGITS always tries to use GPU 0, even when GPU 0 is busy and GPU 1 is free.
This comes from the fact that "Infer One Image" calls inference.py with the argument --gpu=0, which is a bug, since GPU 0 has no memory left (GPU 1 is free).
How is it decided which GPU is used for inference?
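For illustration, a minimal sketch of what "pick the first GPU with free memory" could look like, instead of hardcoding --gpu=0 (this is not DIGITS code; it shells out to nvidia-smi and assumes it is on PATH):

```python
import subprocess

def first_free_gpu(min_free_mib=1024):
    """Return the index of the first GPU with at least min_free_mib MiB
    of free memory, or None. Illustrative only -- not how DIGITS
    currently assigns inference jobs."""
    out = subprocess.check_output([
        'nvidia-smi',
        '--query-gpu=memory.free',
        '--format=csv,noheader,nounits',
    ]).decode()
    for idx, line in enumerate(out.strip().splitlines()):
        if int(line) >= min_free_mib:
            return idx
    return None

# The result could then be passed to inference.py as --gpu=<idx>.
print(first_free_gpu())
```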