Easy way to pick among multiple GPUs #174

It would be nice to have an easy way to pick one among multiple GPUs.
---

The CUDAnative part of this is implemented:
---

Any news on this? The demo link is broken.
---

Nice, I hadn't seen that, thank you. So, on a GPU box I get:

```julia
julia> ngpus = length(CUDAdrv.devices())
3

julia> CUDAnative.device!(2)

julia> CUDAdrv.device()
CuDevice(2): GeForce RTX 2080 Ti
```

while on a CPU-only one I get:

```julia
julia> CUDAdrv.devices() |> length
ERROR: could not load library "libcuda"
....
```

Is this then a reasonable way to write a Flux script?

```julia
gpu_id = 0  ## set < 0 for no cuda, >= 0 for using a specific device (if available)

if CUDAapi.has_cuda_gpu() && gpu_id >= 0
    CUDAdrv.device!(gpu_id)
    CuArrays.allowscalar(false)
    device = Flux.gpu
    @info "Training on GPU-$(gpu_id)"
else
    device = Flux.cpu
    @info "Training on CPU"
end

model = model |> device

for x in data
    x = x |> device
    ....
end
```

If so, maybe importing three CUDA packages is not super smooth for the user; we could wrap some of that functionality within Flux.
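For illustration, a minimal sketch of what such a wrapper inside Flux could look like; `choose_device` is a hypothetical name (not an actual Flux function), and the body just packages the steps from the script above:

```julia
using Flux, CUDAapi, CUDAdrv, CuArrays

# Hypothetical helper -- not actual Flux API, just a sketch of the idea above.
# Bundles CUDA device selection so user scripts only deal with Flux.
function choose_device(gpu_id::Integer)
    if gpu_id >= 0 && CUDAapi.has_cuda_gpu()
        CUDAdrv.device!(gpu_id)      # bind to the requested GPU
        CuArrays.allowscalar(false)  # fail fast on slow scalar indexing
        @info "Training on GPU-$(gpu_id)"
        return Flux.gpu
    else
        @info "Training on CPU"
        return Flux.cpu
    end
end
```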
---

You can inspect …
---

We could add a `Flux.gpu!` that does the device selection internally:

```julia
gpu_id = 0  ## set < 0 for no cuda, >= 0 for using a specific device (if available)

if CUDAapi.has_cuda_gpu() && gpu_id >= 0
    device = Flux.gpu!(gpu_id, allowscalar=false)
    @info "Training on GPU-$(gpu_id)"
else
    device = Flux.cpu
    @info "Training on CPU"
end
```
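A rough sketch of how the proposed `Flux.gpu!` might be implemented; the function does not exist in Flux, so the body below is an assumption built from the earlier script:

```julia
# Sketch of the proposed (non-existent) Flux.gpu! from this thread.
function gpu!(gpu_id::Integer; allowscalar::Bool=true)
    CUDAdrv.device!(gpu_id)            # select the CUDA device
    CuArrays.allowscalar(allowscalar)  # toggle scalar indexing on the GPU
    return gpu                         # Flux's existing GPU transfer function
end
```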
---

Multi-GPU support has greatly improved since, so I think we can close this.
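For anyone reading this now: with the unified CUDA.jl package, which replaced CUDAdrv/CUDAnative/CuArrays, picking a device looks roughly like this (a sketch, not taken from the Flux docs):

```julia
using CUDA, Flux

gpu_id = 0  # 0-based index into CUDA.devices(); set < 0 to force CPU

if gpu_id >= 0 && CUDA.functional()
    CUDA.device!(gpu_id)     # select the GPU for the current task
    CUDA.allowscalar(false)  # error on slow scalar GPU indexing
    device = Flux.gpu
    @info "Training on GPU-$(gpu_id)"
else
    device = Flux.cpu
    @info "Training on CPU"
end
```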