Modifying data_parallel_tutorial.py to enable multiple GPU support #2652
Conversation
Fixes pytorch#2563. Modifies data_parallel_tutorial.py, removing "cuda:0" to enable the use of multiple GPUs.
🔗 Helpful Links 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/2652
Note: Links to docs will display an error until the docs builds have been completed.
✅ No failures as of commit e786bd3 with merge base 77aec05.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Can you please modify the PR title to clearly describe what you are trying to achieve here?
@malfet I have updated the title to reflect the task; please let me know if that looks good.
@prithviraj-maurya thank you for the update. In that case, do you mind modifying this line as well:
@malfet I have updated the PR description.
When I run the original tutorial code on a multi-GPU system, I see that it already allocates memory on multiple GPUs, so I'm not sure where the notion that replacing "cuda:0" with "cuda" would have any effect is coming from.
Removing cuda:0 from the comments
@malfet Ah, I see that now. Do you think this change might not be needed then? The original issue talked about changes required in that code.
We need to close this, as we found out that the changes are not needed, per @malfet. Half credit will be granted for the issue.
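To make the point above concrete, here is a minimal sketch of the tutorial-style DataParallel pattern (an illustrative toy model, not the tutorial's exact code): nn.DataParallel replicates the module across all visible GPUs during the forward pass even when the primary copy is placed on "cuda:0".

```python
import torch
import torch.nn as nn

# Toy model for illustration only; the tutorial defines its own Model class.
model = nn.Linear(10, 5)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

if torch.cuda.device_count() > 1:
    # DataParallel scatters each input batch and replicates the module onto
    # every visible GPU during forward(), so memory is allocated on all of
    # them even though the primary copy lives on device index 0.
    model = nn.DataParallel(model)

model.to(device)

# On a multi-GPU machine this forward pass runs across all visible GPUs.
inputs = torch.randn(8, 10).to(device)
outputs = model(inputs)
```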
Fixes #2563
Description
Modifying data_parallel_tutorial.py: by removing the device specification "cuda:0", you enable the use of multiple GPUs. Specifying the index "0" restricts the computation to the GPU at index 0; with multiple GPUs available, PyTorch can instead distribute the computation across all of them, resulting in faster training for deep learning tasks. This flexibility allows you to take full advantage of your GPU resources and potentially reduce training times when working with multiple GPUs.
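For reference, a minimal sketch of the device-selection change described above (illustrative only; as noted in the conversation, "cuda" resolves to the current default CUDA device, so under nn.DataParallel both spellings end up using all visible GPUs):

```python
import torch

# Tutorial as written: pin the primary device to GPU index 0.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Proposed change: let "cuda" resolve to the current default CUDA device.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```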
Checklist
cc @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen