Optimizer state scaling #44
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master      #44      +/-   ##
==========================================
+ Coverage   94.18%   94.28%   +0.09%
==========================================
  Files          35       35
  Lines        2065     2100      +35
==========================================
+ Hits         1945     1980      +35
  Misses        120      120
Force-pushed from 744713f to 0eb5ad3.
Force-pushed from 8295557 to fce8747.
Thanks for the PR. I have a few inline comments/questions.
This reverts commit 7f66480.
Thanks for the changes. Looks good to me!
LGTM overall!
@@ -236,10 +236,10 @@ def benchmark_language_model(train_data, val_data, test_data, model, criterion,
     # Assert that memory usage on each GPU is within 10% of golden run
     # Right-hand-side is golden run bytes * 110%
-    assert torch.cuda.memory_stats(0)["allocated_bytes.all.peak"] < 210479616 * 1.1
+    assert torch.cuda.memory_stats(0)["allocated_bytes.all.peak"] < 193206272 * 1.1
How did you come up with these numbers of bytes?
I had the same question in one of the previous PRs :D
I guess Jun-Ru printed out the value of torch.cuda.memory_stats(0)["allocated_bytes.all.peak"] and put that number in the check!
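For reference, here is a minimal sketch of how such a golden-run baseline could be measured; the toy model below is hypothetical and only the torch.cuda.memory_stats call mirrors the benchmark (requires a CUDA-capable GPU):

```python
import torch

# Hypothetical stand-in for the golden run: any representative workload on GPU 0.
# The benchmark's real model and training loop are not reproduced here.
device = torch.device("cuda:0")
model = torch.nn.Linear(1024, 1024).to(device)
loss = model(torch.randn(64, 1024, device=device)).sum()
loss.backward()

# Peak bytes held by the caching allocator on GPU 0 since process start.
peak = torch.cuda.memory_stats(0)["allocated_bytes.all.peak"]
print(peak)  # this value would then be pasted into the assert as the baseline
```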
Before submitting
What does this PR do?
Allows scaling of optimizer state.
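For illustration only, the following is a rough sketch of the general idea of scaling optimizer state, not the code from this PR; the helper names, the FP16 storage format, and the fixed scale factor are all assumptions made for the example:

```python
import torch

# Hypothetical illustration: store an optimizer state tensor in FP16,
# scaled up so that small values survive the reduced precision.
SCALE = 2.0 ** 10  # assumed fixed scale factor, chosen only for illustration

def compress_state(t: torch.Tensor) -> torch.Tensor:
    # Scale, then cast down to half precision for storage.
    return (t * SCALE).half()

def decompress_state(t: torch.Tensor) -> torch.Tensor:
    # Cast back to full precision and undo the scaling before use.
    return t.float() / SCALE

exp_avg = torch.randn(1024) * 1e-4       # e.g. an Adam first-moment buffer
stored = compress_state(exp_avg)         # what would be kept between steps
restored = decompress_state(stored)      # what the update step would consume
print((exp_avg - restored).abs().max())  # round-off error stays small
```

A real implementation would also need to guard against overflow when scaling up and pick appropriate per-tensor scale factors, which this toy example ignores.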
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃