Bug description
Stateful dataloaders do not load their state_dict and restore their state if trainer.estimated_stepping_batches is called.
The situation arises when one uses lr_scheduler.OneCycleLR, which requires total_steps.
What version are you seeing the problem on?
v2.5
How to reproduce the bug
This code is adapted from the PyTorch Lightning test test_resume_mid_epoch_warning; a reproduction sketch follows below.
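A minimal sketch of a reproduction, patterned on that test. The StatefulLoader wrapper, the printed marker, and the exact Trainer arguments are assumptions for illustration, not the original script:

```python
import torch
from torch.utils.data import DataLoader
from lightning.pytorch import Trainer
from lightning.pytorch.demos.boring_classes import BoringModel, RandomDataset


class StatefulLoader(DataLoader):
    # Hypothetical stateful loader: it exposes state_dict/load_state_dict so
    # its progress can be checkpointed. The print lets us observe whether the
    # restore ever happens on resume.
    def state_dict(self):
        return {"counter": 1}

    def load_state_dict(self, state_dict):
        print("load_state_dict called with:", state_dict)


class Model(BoringModel):
    def train_dataloader(self):
        return StatefulLoader(RandomDataset(32, 64), batch_size=2)

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
        # Touching estimated_stepping_batches here (as OneCycleLR forces us
        # to) triggers fit_loop.setup_data() early, during strategy setup.
        scheduler = torch.optim.lr_scheduler.OneCycleLR(
            optimizer, max_lr=0.1, total_steps=self.trainer.estimated_stepping_batches
        )
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "interval": "step"},
        }


if __name__ == "__main__":
    # Stop mid-epoch (64 samples / batch size 2 = 32 batches, stop at step 3).
    trainer = Trainer(max_steps=3, enable_progress_bar=False)
    trainer.fit(Model())
    trainer.save_checkpoint("mid_epoch.ckpt")

    # Resume: with the bug, "load_state_dict called" is never printed.
    Trainer(max_steps=6, enable_progress_bar=False).fit(
        Model(), ckpt_path="mid_epoch.ckpt"
    )
```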
Error messages and logs
Environment
Current environment

#- PyTorch Lightning Version (e.g., 2.5.0):
#- PyTorch Version (e.g., 2.5):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning (`conda`, `pip`, source):
More info
It has to do with trainer.estimated_stepping_batches, which invokes self.fit_loop.setup_data() during strategy.setup; then, when self.fit_loop.setup_data() is invoked again in self._run_stage(), it skips the state_dict loading.
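A toy illustration of the ordering problem (all names here are hypothetical stand-ins, not Lightning internals): a guarded setup_data() is a no-op on the second call, so any state restoration tied to it only runs if the first call happens after the checkpoint state is loaded:

```python
class ToyFitLoop:
    def __init__(self):
        self.loader_state = None   # filled in from the checkpoint later
        self._setup_done = False   # hypothetical re-entrancy guard

    def setup_data(self):
        if self._setup_done:
            return                 # second call exits before the restore below
        self._setup_done = True
        if self.loader_state is not None:
            print("restoring dataloader state:", self.loader_state)
        else:
            print("nothing to restore yet")


loop = ToyFitLoop()
loop.setup_data()                  # early call via estimated_stepping_batches
loop.loader_state = {"batch": 7}   # checkpoint state arrives afterwards
loop.setup_data()                  # guarded: the restore is silently skipped
```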