Memory used across all GPUs #180
Comments
We only allocate a tiny amount of scratch space; all combined it should be less than 2MB, afaik.
Hmm, but the end effect is the same: there is ~100MB less GPU memory available for each process. It should be possible to start cutorch selectively on just a chosen subset of GPUs.
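For anyone who wants to see this from inside Torch rather than via nvidia-smi, here is a minimal sketch (assuming a working cutorch install) that prints the memory usage of every visible GPU right after the library is loaded. Note that cutorch.getMemoryUsage reports device-wide figures, so the numbers include memory held by other processes as well.

```lua
-- Minimal sketch: print device-wide memory usage of every visible GPU
-- right after loading cutorch. getMemoryUsage wraps cudaMemGetInfo, so the
-- "used" figure also includes memory held by other processes.
require 'cutorch'

for dev = 1, cutorch.getDeviceCount() do
  local freeBytes, totalBytes = cutorch.getMemoryUsage(dev)
  print(string.format('GPU %d: %.1f MB used / %.1f MB total',
                      dev, (totalBytes - freeBytes) / 2^20, totalBytes / 2^20))
end
```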
Use CUDA_VISIBLE_DEVICES=0,2 th [yourscript.lua], where you are telling it to use only device 0 and device 2.
Great, thanks!
So if you use CUDA_VISIBLE_DEVICES=0 th [yourscript.lua], it means you only have 1 GPU available and the others are invisible to you, so there is no point in calling cutorch.setDevice(id), which switches the default GPU, right? If so, can you offer some guidelines on when to use CUDA_VISIBLE_DEVICES and when to use cutorch.setDevice(id)?
@eriche2016 it makes sense to use it in a multi-GPU context. For example, with CUDA_VISIBLE_DEVICES=0,2 you select the GPUs you want to use (here 0 and 2), and with cutorch.setDevice(id) you then switch between those visible devices inside your script (cutorch renumbers them starting from 1).
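To make the division of labour concrete, here is a small illustrative sketch (the script name select_gpus.lua is made up): CUDA_VISIBLE_DEVICES restricts which physical GPUs the process can see at all, and cutorch.setDevice(id) picks the current default among the visible ones.

```lua
-- Illustrative sketch, launched as:
--   CUDA_VISIBLE_DEVICES=0,2 th select_gpus.lua
-- Only physical GPUs 0 and 2 are visible to the process; cutorch sees them
-- as devices 1 and 2 (cutorch device indices are 1-based).
require 'cutorch'

print('visible devices: ' .. cutorch.getDeviceCount())  -- 2 in this setup

cutorch.setDevice(1)                        -- physical GPU 0
local a = torch.CudaTensor(1024, 1024):fill(1)

cutorch.setDevice(2)                        -- physical GPU 2
local b = torch.CudaTensor(1024, 1024):fill(2)

print('current default device: ' .. cutorch.getDevice())  -- 2
```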
The current implementation allocates hundreds of MB of GPU memory on each GPU present in the system (at least 102MB per device as reported by nvidia-smi), just upon a simple require 'cutorch'. This doesn't change with subsequent calls like cutorch.setDevice(). Is there any technical reason for this behavior?