[CUDA] Multi-GPU and distributed training for new CUDA version. #5076
Hi @shiyu1994, I find that the latest 4.5.0 has no support for multi-GPU training. If a user tries it, they get the error: "Currently cuda version only supports training on a single GPU".
When I set num_gpus = 1, it works well. But when I set it to 2, both GPUs allocate memory, yet only one runs at 100% utilization, and training seems to never finish. Any clue about this?
And here are the nccl logs:
And the full stack info like this:
Summary
Add multi-gpu and distributed support for new CUDA version. As mentioned in #4630 (comment)