Add testing with PyTorch 1.11 on GPUs in CI #12955
Comments
@Borda Would it be an option to have PyTorch 1.12 (nightly) testing, too? For example, #12985 needs 1.12 for adapting native FSDP.
@akihironitta I think starting with 1.11 and seeing how the CI time works out is a good idea. I'm hesitant to use the 1.12 nightly in CI as it changes frequently (though I haven't run into compatibility issues myself).
1.11 is fine (already released). We removed nightly testing because it was too flaky, making everybody ignore the job. We only enable it when there's a release candidate upstream.
Will be addressed in #12984.
🚀 Feature
We've decided to run CI tests against both the PyTorch LTS and stable releases (1.8 and 1.11 as of now), and we've already seen some issues arise while trying to enable it in #12373.
TODO
Known issues with PL on PyTorch 1.11:

- `register_ddp_comm_hook` #12846 (see the sketch after this list)
- `deepspeed` and `fairscale` versions #12860
- `materialize_module` recursively setting its child module #12870
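For context on the first item, here is a minimal sketch of the upstream DDP comm-hook API that the `register_ddp_comm_hook` utility builds on; this is not PL's implementation, the helper function is hypothetical, and it assumes a process group has already been initialized:

```python
# Minimal sketch (not PL's implementation) of the public PyTorch DDP
# comm-hook API that the utility above builds on. Assumes
# torch.distributed.init_process_group() has already been called.
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks


def attach_fp16_compress_hook(module: torch.nn.Module) -> DDP:
    # Hypothetical helper: wrap the module in DDP and register a
    # gradient-compression hook through DDP's public register_comm_hook.
    ddp_module = DDP(module)
    ddp_module.register_comm_hook(state=None, hook=default_hooks.fp16_compress_hook)
    return ddp_module
```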
Motivation

To be able to test, in CI, new features that are only available in newer PyTorch versions, e.g. meta-device initialization and native FSDP.
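Version-gating is what lets such features be tested only where they exist; here is a minimal sketch of a version-gated test, assuming pytest and the `packaging` library are available. The guard name and the test body are illustrative, not PL's own code:

```python
# Minimal sketch of gating a test on the installed PyTorch version.
import pytest
import torch
from packaging.version import Version

_TORCH_GE_1_11 = Version(torch.__version__.split("+")[0]) >= Version("1.11.0")


@pytest.mark.skipif(not _TORCH_GE_1_11, reason="native FSDP requires PyTorch >= 1.11")
def test_native_fsdp_import():
    # torch.distributed.fsdp ships with PyTorch 1.11; the import would
    # fail on older releases, hence the skip above.
    from torch.distributed.fsdp import FullyShardedDataParallel
    assert FullyShardedDataParallel is not None
```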
Pitch
Use the following image:
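Whichever image tag ends up being used, a GPU job could start with a small sanity check that the container ships the intended PyTorch build. A minimal sketch, assuming the expected version is 1.11:

```python
# Hypothetical sanity check for the CI container; the expected version
# string is an assumption, not taken from the issue above.
import torch


def check_ci_image(expected: str = "1.11") -> None:
    major_minor = ".".join(torch.__version__.split(".")[:2])
    assert major_minor == expected, f"expected PyTorch {expected}, got {torch.__version__}"
    assert torch.cuda.is_available(), "GPU job requires a visible CUDA device"


if __name__ == "__main__":
    check_ci_image()
```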
Alternatives
n/a
Additional context
n/a
If you enjoy Lightning, check out our other projects! ⚡
Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Lite: Enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
cc @carmocca @akihironitta @Borda