Re-enable cuda graphs in training modes. #9343
Merged: galv merged 2 commits into main from cherry-pick-main-4cefd5d3636d6702a94b2c1d6a6c3a6edf123814 on Jun 5, 2024
Conversation
* Re-enable cuda graphs in training modes. "global" capture mode was sporadically crashing because of pinning host memory in other threads spawned by the data loader when num_workers > 0. Add relevant changes to TDT cuda graphs decoding as well. I didn't test the TDT change because I'm not sure how, but it seems low risk. Signed-off-by: Daniel Galvez <dgalvez@nvidia.com>
* Apply isort and black reformatting. Signed-off-by: galv <galv@users.noreply.github.com>
---------
Signed-off-by: Daniel Galvez <dgalvez@nvidia.com>
Signed-off-by: galv <galv@users.noreply.github.com>
galv approved these changes on May 29, 2024
galv deleted the cherry-pick-main-4cefd5d3636d6702a94b2c1d6a6c3a6edf123814 branch on June 5, 2024 at 01:02
BoxiangW pushed a commit to BoxiangW/NeMo that referenced this pull request on Jun 5, 2024
* Re-enable cuda graphs in training modes. "global" capture mode was sporadically crashing because of pinning host memory in other threads spawned by the data loader when num_workers > 0. Add relevant changes to TDT cuda graphs decoding as well. I didn't test the TDT change because I'm not sure how, but it seems low risk.
* Apply isort and black reformatting
---------
Signed-off-by: Daniel Galvez <dgalvez@nvidia.com>
Signed-off-by: galv <galv@users.noreply.github.com>
Co-authored-by: Daniel Galvez <galv@users.noreply.github.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
Signed-off-by: Boxiang Wang <boxiangw@nvidia.com>
janekl pushed a commit that referenced this pull request on Jun 12, 2024
* Re-enable cuda graphs in training modes. "global" capture mode was sporadically crashing because of pinning host memory in other threads spawned by the data loader when num_workers > 0. Add relevant changes to TDT cuda graphs decoding as well. I didn't test the TDT change because I'm not sure how, but it seems low risk.
* Apply isort and black reformatting
---------
Signed-off-by: Daniel Galvez <dgalvez@nvidia.com>
Signed-off-by: galv <galv@users.noreply.github.com>
Co-authored-by: Daniel Galvez <galv@users.noreply.github.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
Signed-off-by: Jan Lasek <janek.lasek@gmail.com>
rohitrango pushed a commit to rohitrango/NeMo that referenced this pull request on Jun 25, 2024
* Re-enable cuda graphs in training modes. "global" capture mode was sporadically crashing because of pinning host memory in other threads spawned by the data loader when num_workers > 0. Add relevant changes to TDT cuda graphs decoding as well. I didn't test the TDT change because I'm not sure how, but it seems low risk.
* Apply isort and black reformatting
---------
Signed-off-by: Daniel Galvez <dgalvez@nvidia.com>
Signed-off-by: galv <galv@users.noreply.github.com>
Co-authored-by: Daniel Galvez <galv@users.noreply.github.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
"global" capture mode was sporadically crashing because of pinning host memory in other threads spawned by the data loader when num_workers > 0.
This would cause the ASR_dev_run_Speech_To_Text_HF_Finetuning CI/CD test to fail sporadically (maybe 1 out of 5 times).
What does this PR do?
Fixes the crash by using "thread_local" stream capture instead of "global" stream capture.
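For context, here is a minimal sketch (not the NeMo diff itself) of what switching the capture mode looks like in PyTorch. It assumes a PyTorch version whose `torch.cuda.graph` exposes `capture_error_mode`; the toy shapes are made up for illustration.

```python
# Minimal sketch, assuming torch.cuda.graph supports capture_error_mode.
# "global" capture treats unsafe CUDA calls from *any* host thread (e.g.
# page-locking a batch in a DataLoader worker when num_workers > 0) as a
# capture error; "thread_local" only watches the capturing thread, which
# avoids the sporadic crash described above.
import torch

graph = torch.cuda.CUDAGraph()
static_input = torch.zeros(16, 80, device="cuda")

# Warm up on a side stream before capture (standard CUDA graphs practice).
side_stream = torch.cuda.Stream()
with torch.cuda.stream(side_stream):
    _ = static_input * 2.0
torch.cuda.current_stream().wait_stream(side_stream)

with torch.cuda.graph(graph, capture_error_mode="thread_local"):
    static_output = static_input * 2.0

# Replay later with fresh data written into the static input buffer.
static_input.copy_(torch.randn(16, 80, device="cuda"))
graph.replay()
```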
Collection: ASR
Usage
CUDA graphs will again be used by default for inference in both training and inference scripts, restoring the previous behavior.
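If you want to toggle it explicitly, something along these lines should work; note that the flag name `use_cuda_graph_decoder` and the checkpoint used here are assumptions for the sketch, not verified against this exact NeMo revision.

```python
# Hedged sketch: explicitly enabling the CUDA graph greedy decoder on an RNNT
# model. The flag name `use_cuda_graph_decoder` and the checkpoint name are
# assumptions; with this PR the decoder should already default to on.
from omegaconf import open_dict
import nemo.collections.asr as nemo_asr

model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained(
    "stt_en_fastconformer_transducer_large"  # assumed checkpoint name
)

decoding_cfg = model.cfg.decoding
with open_dict(decoding_cfg):
    decoding_cfg.greedy.use_cuda_graph_decoder = True  # assumed flag name

model.change_decoding_strategy(decoding_cfg)
```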
GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and re-add the label.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
PR Type:
Note that I tested this by applying the following diff:
Then I ran this script:
Basically, I needed to add a way for speech_to_text_finetune.py to modify the decoding algorithm, so I could test both the loop frames and loop labels code paths. I do not include this code in the PR, since it is not robust to all model types (e.g., AED). Since I ran the script 100 times for each algorithm, we can be pretty confident that this fixes the problem.
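The diff and script themselves are not included on this page; as a rough, hypothetical reconstruction of the loop described, something like the following (the override key and strategy names are assumptions, standing in for the author's unpublished change):

```python
# Hypothetical test harness mirroring the description above: run the finetuning
# script 100 times per decoding algorithm and let any sporadic capture crash
# surface as a non-zero exit code. The override key stands in for the addition
# the author mentions making to speech_to_text_finetune.py (not included here).
import subprocess

STRATEGIES = ["loop_frames", "loop_labels"]  # assumed names for the two code paths
N_RUNS = 100

for strategy in STRATEGIES:
    for _ in range(N_RUNS):
        subprocess.run(
            [
                "python",
                "examples/asr/speech_to_text_finetune.py",
                f"+decoding_algorithm={strategy}",  # hypothetical override
                # ...plus the usual model/dataset arguments (omitted here)
            ],
            check=True,
        )
```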