make extract_batch_size a LightningModule hook #10635
Can you comment on the differences/advantages this would bring vs.
I am fine with the current implementation: #10408 (comment)
This was exactly my thought!
Pros:
Cons:
Leaning towards not doing it.
Hello, I'm facing a similar issue while I'm using
it calls
Thanks for replying and for the detailed explanation, @rohitgr7! One last question: in the case of distributed training, does the
Per GPU.
I am getting a
What's the structure specifically? Could you make changes to it so that it's inferable? The relevant code is in: https://github.com/Lightning-AI/lightning/blob/38acba08fc009caac32b6b0ab8984d8935569db9/src/pytorch_lightning/utilities/data.py#L50-L96
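For context, the inference in the linked utility works roughly like this: it walks the batch recursively and returns the first tensor dimension it can find. The sketch below is a simplified illustration of that idea, not a copy of Lightning's actual code; the function names are my own.

```python
import torch

def _candidate_batch_sizes(batch):
    """Recursively yield candidate batch sizes from a (possibly nested) batch.

    A simplified sketch of the kind of inference done in
    pytorch_lightning/utilities/data.py; names here are illustrative.
    """
    if isinstance(batch, torch.Tensor):
        # Use the first dimension of any tensor; 0-dim tensors count as size 1.
        yield batch.size(0) if batch.ndim > 0 else 1
    elif isinstance(batch, (list, tuple)):
        for item in batch:
            yield from _candidate_batch_sizes(item)
    elif isinstance(batch, dict):
        for item in batch.values():
            yield from _candidate_batch_sizes(item)
    # Any other type (custom objects, strings, ...) yields nothing,
    # which is exactly the case where inference fails for custom batches.

def extract_batch_size(batch):
    """Return the first inferable batch size, or raise if none is found."""
    for size in _candidate_batch_sizes(batch):
        return size
    raise RuntimeError("Could not infer batch size from the given batch structure")
```

A batch like `{"x": torch.zeros(8, 3), "y": torch.zeros(8)}` is inferable (size 8), while a custom object that holds tensors in private attributes would fall through to the error, which is why making the extraction overridable by the user is being discussed here.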
🚀 Feature
Motivation
Copy pasting context from here: #10408 (review)
A general idea from @ninginthecloud's point of view: extract_batch_size() is the kind of function we could convert into a hook and delegate to the user to implement. Since users know their own batch structure, it would be easy for them to define the batch_size or how to compute it. The current implementation makes a lot of assumptions about what the batch will look like, yet it can only extract a valid batch_size in very limited situations.
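A minimal sketch of what such a hook could look like from the user's side. Note this is a hypothetical illustration of the proposal: `extract_batch_size` as an overridable method is not an existing Lightning API, and the plain class below stands in for a `LightningModule` subclass.

```python
import torch

class MyLitModule:
    # Hypothetical hook per this proposal; in practice this would live on
    # pl.LightningModule with a default falling back to the current inference.
    def extract_batch_size(self, batch) -> int:
        # The user knows their batch layout, e.g. a dict with an "inputs" tensor,
        # so no generic structure-walking heuristics are needed.
        return batch["inputs"].size(0)

batch = {"inputs": torch.randn(16, 3), "labels": torch.randint(0, 10, (16,))}
size = MyLitModule().extract_batch_size(batch)
```

The trainer would then call this hook wherever it currently calls the internal extraction utility, so arbitrary custom batch objects would work without Lightning having to guess.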
Some comments:
#10408 (comment)
#10408 (comment)
Pitch
Alternatives
Additional context
If you enjoy Lightning, check out our other projects! ⚡
Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Lite: enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers, leveraging PyTorch Lightning, Transformers, and Hydra.
cc @Borda @justusschock @awaelchli @ninginthecloud @tchaton @ananthsub @carmocca @SeanNaren Any thoughts?