Limit GPU binding with CUDA_VISIBLE_DEVICES or so #12
Comments
Lawd, that's a mess. To triage this properly: are there any consequences other than […]? Also, I'm unlikely to personally upgrade the drivers any time soon, and I don't like to fix bugs blind. I think the fix should be as simple as

```python
p = Popen(xorgargs, env={'CUDA_VISIBLE_DEVICES': display[1:]})
```

Would you be able to make this change yourself and test it out for a few days? If this particular change fails, try adding a […]
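For reference, here is a minimal sketch of what that change might look like in context. The surrounding function name and arguments are assumptions for illustration, not the actual coolgpus code; note too that passing `env=` to `Popen` replaces the whole environment, so merging with `os.environ` is probably the safer variant:

```python
# Hedged sketch: launch_xorg and its arguments are illustrative, not coolgpus's API.
import os
from subprocess import Popen

def launch_xorg(xorgargs, display):
    # display looks like ':1'; stripping the leading ':' gives the GPU index,
    # so this Xorg instance only sees its own GPU.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=display[1:])
    return Popen(xorgargs, env=env)
```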
Thank you! Sure, I'll check it out and report the result here.
Hi, I have tried the modification here, but it didn't work. I have found another workaround: […] with the specific GPU bus_id you would like coolgpus to take effect on, e.g. […]. The bus ID can be seen in the output of […].
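The comment above is cut off in this copy, but the idea it describes, restricting coolgpus to a particular GPU by its PCI bus ID, can be sketched roughly as follows. The `Device` section template, the helper names, and the `nvidia-smi` query are illustrative assumptions, not the commenter's actual snippet:

```python
# Rough sketch of the bus-ID idea, not the commenter's actual workaround.
import subprocess

# Minimal Xorg "Device" section pinned to a single GPU. Note that xorg.conf
# expects BusID in decimal "PCI:bus:device:function" form (e.g. "PCI:1:0:0"),
# which differs from the hexadecimal domain:bus:device.function string that
# nvidia-smi prints.
DEVICE_SECTION = '''
Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    BusID      "{bus_id}"
EndSection
'''

def gpu_bus_ids():
    # List the PCI bus IDs of the installed GPUs as reported by nvidia-smi,
    # e.g. ['00000000:01:00.0', '00000000:02:00.0'].
    out = subprocess.check_output(
        ['nvidia-smi', '--query-gpu=pci.bus_id', '--format=csv,noheader'],
        text=True)
    return [line.strip() for line in out.splitlines() if line.strip()]

def device_section(bus_id):
    # Render a Device section bound to the given GPU.
    return DEVICE_SECTION.format(bus_id=bus_id)
```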
Hello! First of all, I'd like to thank you for this project; it's still the best way we've found to work around NVIDIA cooling issues.

To the point: with the latest NVIDIA driver updates, the nvidia-smi tool now displays all created contexts instead of only the usual primary ones. So where we previously got output like this: […] we now get: […]

Is it possible to limit the Xorg processes with something like the CUDA_VISIBLE_DEVICES environment variable (https://developer.nvidia.com/blog/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/)? I guess some minor changes are needed somewhere around this line so that each Xorg instance runs like `CUDA_VISIBLE_DEVICES=1 Xorg ...`.