Easy way to pick among multiple GPUs #174

Closed
ViralBShah opened this issue Feb 5, 2018 · 7 comments
Labels: cuda array (Stuff about CuArray.), enhancement (New feature or request)

Comments

@ViralBShah (Contributor)

It would be nice to have an easy way to pick one among multiple GPUs.

@maleadt (Member) commented May 24, 2018

The CUDAnative part of this is implemented: device!(::CuDevice), with automatic initialization of the CUDA API for device 0 if the user doesn't pick anything. To make this really useful, CuArray would probably have to know which device it is tied to, or alternatively use unified memory (the latter seems like a much easier route, and aligns with recent CUDA toolkit developments).
Demo: https://github.com/JuliaGPU/CUDAnative.jl/blob/master/examples/multigpu.jl
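For reference, a minimal sketch of device selection with that API, assuming the CUDAdrv/CUDAnative names of the time (device! accepting a CuDevice, as described above):

using CUDAdrv, CUDAnative

dev = CuDevice(0)        # first GPU; device indices are zero-based
CUDAnative.device!(dev)  # bind this session to that device
# without an explicit pick, the API initializes device 0 automatically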

@CarloLucibello (Contributor)

Any news on this? The demo link is broken.

@maleadt (Member) commented Mar 6, 2020

https://juliagpu.gitlab.io/CUDA.jl/usage/multigpu/

@CarloLucibello (Contributor)

Nice, I hadn't seen that, thank you. So, on a GPU box I get this:

julia> ngpus = length(CUDAdrv.devices())
3

julia> CUDAnative.device!(2)

julia> CUDAdrv.device()
CuDevice(2): GeForce RTX 2080 Ti

while on a CPU-only one I get:

julia> CUDAdrv.devices() |> length
ERROR: could not load library "libcuda"
....

Is this then a reasonable way to write a Flux script?

using Flux, CUDAapi, CUDAdrv, CuArrays

gpu_id = 0  ## set < 0 for no CUDA, >= 0 to use a specific device (if available)

if CUDAapi.has_cuda_gpu() && gpu_id >= 0
    CUDAdrv.device!(gpu_id)
    CuArrays.allowscalar(false)  # fail fast on slow scalar indexing
    device = Flux.gpu
    @info "Training on GPU-$(gpu_id)"
else
    device = Flux.cpu
    @info "Training on CPU"
end

model = model |> device
for x in data
    x = x |> device
    ....
end

If so, requiring the user to import three CUDA packages is not super smooth; we could wrap some of that functionality within Flux.

@maleadt (Member) commented Mar 6, 2020

You can inspect CuArrays.functional(), but yeah, the CUDAapi functions are supposed to be the user-friendly ones. Flux already has a gpu function that uploads to the GPU if one is available.
https://github.com/FluxML/Flux.jl/blob/069d22869313d2eb7dc04d64dc7c7a819643acf7/src/functor.jl#L108
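A minimal sketch of that pattern, assuming the CuArrays/Flux APIs of the time:

using Flux, CuArrays

# pick the data mover once, based on whether CUDA is actually usable
device = CuArrays.functional() ? Flux.gpu : Flux.cpu
model = Chain(Dense(10, 2)) |> device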

@CarloLucibello (Contributor) commented Mar 6, 2020

We could add a Flux.gpu! that sets the device and takes an allowscalar keyword argument, so that the script simplifies to:

gpu_id = 0  ## set < 0 for no CUDA, >= 0 to use a specific device (if available)

if CUDAapi.has_cuda_gpu() && gpu_id >= 0
    device = Flux.gpu!(gpu_id, allowscalar=false)
    @info "Training on GPU-$(gpu_id)"
else
    device = Flux.cpu
    @info "Training on CPU"
end
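A hypothetical sketch of what such a Flux.gpu! could look like; this is the proposal above, not an existing Flux API:

# hypothetical: set the active device, configure scalar indexing, return the mover
function gpu!(id::Integer; allowscalar::Bool=true)
    CUDAdrv.device!(id)
    CuArrays.allowscalar(allowscalar)
    return gpu  # the existing Flux.gpu, so callers can keep writing `x |> device`
end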

@maleadt maleadt transferred this issue from JuliaGPU/CuArrays.jl May 27, 2020
@maleadt maleadt added cuda array Stuff about CuArray. enhancement New feature or request labels May 27, 2020
@maleadt (Member) commented Oct 29, 2020

Multi-GPU support has greatly improved since, so I think we can close this.
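For context, a minimal sketch with the unified CUDA.jl package (which this issue was transferred to, and which superseded CUDAdrv/CUDAnative/CuArrays), assuming its device!/devices API:

using CUDA

@show collect(CUDA.devices())  # list available GPUs
CUDA.device!(1)                # switch to the second device (zero-based)
x = CUDA.rand(3)               # allocations now land on that device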

@maleadt maleadt closed this as completed Oct 29, 2020