Update _text_completion.py to support packed mode #1061
Conversation
🔗 Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1061.
Note: links to docs will display an error until the docs builds have been completed.
✅ No failures as of commit 6631b07 with merge base f9cb9e6. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @andyl98! Thank you for your pull request and welcome to our community.

Action required: in order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: in order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA Signed.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
Thanks for this update. Indeed, having packing for text completion datasets is almost required for continued pre-training. I purposely left this out because I was concerned by the memory cost of packing, which would be more problematic for text completion datasets, which tend to be much larger. But for local text corpora this should be okay. Are you trying to train on local data? The code change looks good to me; do you mind just sharing a distributed run on text_completion_dataset both with and without packing? Curious to see if the performance gains are similar to instruct and chat, and it's also good to ensure that you don't run into any OOMs :)
Sounds good, will update the results once a training run is done :) And yes, I'm training on local data with a small set (<1B tokens in total). |
Hi @andyl98, were you able to try launching a run with this change? If you're having trouble let me know, and I could also launch a run on my end and get this merged in. |
Hi @RdoubleA, sorry, I wasn't able to work on the project for a while. I tested with a small dataset and the results look as expected. However, I do recognize the memory consumption issue with a larger dataset. Hopefully the mmap dataset feature will come soon!
Appreciate you launching a test run! Looks good to me, and it's great to see a similar bump in QPS for text completion. Yes, I am working on a more memory-efficient dataset implementation; hopefully that will unlock using packing with any size dataset 😎 I'll run the CI and, if there are no other issues, get this merged. Thanks again!
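To make the memory trade-off discussed above concrete, here is a hypothetical, minimal sketch of greedy sequence packing (a simplified stand-in, not torchtune's actual packing implementation): tokenized samples are concatenated into fixed-length blocks up front, which is why the entire packed dataset must fit in memory before training starts.

```python
# Hypothetical illustration only: a greedy packer that concatenates tokenized
# samples into fixed-length blocks. Real packing also tracks per-sample
# boundaries for attention masking, which this sketch omits.
from typing import List


def pack_sequences(
    samples: List[List[int]], max_seq_len: int, pad_id: int = 0
) -> List[List[int]]:
    packs: List[List[int]] = []
    current: List[int] = []
    for tokens in samples:
        # Flush the current block if the next sample would overflow it.
        if current and len(current) + len(tokens) > max_seq_len:
            packs.append(current + [pad_id] * (max_seq_len - len(current)))
            current = []
        # Truncate any single sample that is longer than max_seq_len.
        current.extend(tokens[:max_seq_len])
    if current:
        packs.append(current + [pad_id] * (max_seq_len - len(current)))
    return packs


# Three short "documents" packed into blocks of length 8:
# [[1, 2, 3, 4, 5, 0, 0, 0], [6, 7, 8, 9, 0, 0, 0, 0]]
print(pack_sequences([[1, 2, 3], [4, 5], [6, 7, 8, 9]], max_seq_len=8))
```

The sketch shows why packing trades preprocessing memory for training throughput: blocks are densely filled with tokens, so far fewer padded positions are wasted per training step.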
Co-authored-by: RdoubleA <rafiayub@fb.com>
Context
What is the purpose of this PR? It adds a new feature: packed mode support for text completion datasets.
Please link to any issues this PR addresses.
GitHub issue: #1058
Changelog
What are the changes made in this PR?
Add support for `packed` mode inside `_text_completion.py` (see the usage sketch below).
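A minimal usage sketch of the new flag, assuming the builder keeps its existing signature and simply gains a `packed` argument as this PR proposes; the tokenizer path, dataset source, and `column`/`max_seq_len` values below are placeholders:

```python
# Sketch only: enabling packed mode on a text completion dataset.
# The paths and dataset source are placeholders, and the exact parameter
# names should be checked against the builder's signature in your version.
from torchtune.datasets import text_completion_dataset
from torchtune.models.llama3 import llama3_tokenizer

tokenizer = llama3_tokenizer("/path/to/tokenizer.model")  # placeholder path

ds = text_completion_dataset(
    tokenizer=tokenizer,
    source="json",                 # placeholder local/HF dataset source
    data_files="my_corpus.json",   # placeholder, forwarded to load_dataset
    column="text",
    max_seq_len=4096,
    packed=True,                   # the flag added in this PR
)
```

In a recipe YAML config, the same effect would presumably come from adding `packed: True` under the dataset component entry.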
Test plan
Please make sure to do each of the following if applicable to your PR. (If you're not sure about any one of these, just ask and we will happily help.)

- Run pre-commit hooks and linters (make sure you've first installed them via `pre-commit install`)
- Run unit tests via `pytest tests`
- Run recipe/integration tests via `pytest tests -m integration_test`