🐛 Bug
0.7.3 breaks reusable dataloaders in DDP
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 345, in ddp_train
    self.run_pretrain_routine(model)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 864, in run_pretrain_routine
    self.train()
  File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 296, in train
    self.reset_train_dataloader(model)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/data_loading.py", line 128, in reset_train_dataloader
    self.train_dataloader = self.auto_add_sampler(self.train_dataloader, train=True)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/data_loading.py", line 112, in auto_add_sampler
    dataloader = type(dataloader)(**dl_args)
  File "../main/dataset.py", line 15, in __init__
    super().__init__(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'iterator'
Code sample
import torch


class _RepeatSampler(object):
    """Sampler that repeats forever, so the dataloader's workers can be reused across epochs."""

    def __init__(self, sampler):
        self.sampler = sampler

    def __iter__(self):
        while True:
            yield from iter(self.sampler)


class FastDataLoader(torch.utils.data.dataloader.DataLoader):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Wrap the batch sampler so the single iterator below never runs out.
        object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler))
        self.iterator = super().__iter__()

    def __len__(self):
        return len(self.batch_sampler.sampler)

    def __iter__(self):
        for i in range(len(self)):
            yield next(self.iterator)
Replace DataLoader with FastDataLoader in Lightning.
(This snippet is from pytorch/pytorch#15849.)
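For context, here is a minimal sketch of how the loader could be wired into a LightningModule's train_dataloader hook. LitModel, the linear layer, and the TensorDataset are made up for illustration and are not part of the original report:

import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(3, 1)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        (x,) = batch
        loss = self(x).mean()
        return {'loss': loss}

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

    def train_dataloader(self):
        dataset = torch.utils.data.TensorDataset(torch.randn(64, 3))
        # Returning the custom loader here is what sends it through
        # Lightning's sampler-injection / re-initialization path under DDP.
        return FastDataLoader(dataset, batch_size=8, shuffle=True)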
Expected behavior
DataLoaders initialize correctly and are reused across train/val and across epochs (this works as expected in 0.7.1).
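To illustrate why this fails in 0.7.3, here is a simplified sketch of the re-initialization step seen in the traceback above. It reuses FastDataLoader and the import from the code sample, and it is only an approximation of what the trainer does, not the actual pytorch_lightning source:

dataset = torch.utils.data.TensorDataset(torch.randn(64, 3))
loader = FastDataLoader(dataset, batch_size=8)

# Roughly what auto_add_sampler does before calling the constructor again
# (simplified; the real code also swaps in a DistributedSampler):
dl_args = {k: v for k, v in loader.__dict__.items() if not k.startswith('_')}

# `iterator` is a public attribute of FastDataLoader, so it ends up in
# dl_args and is forwarded to DataLoader.__init__, which rejects it:
#   TypeError: __init__() got an unexpected keyword argument 'iterator'
new_loader = type(loader)(**dl_args)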
Ummm, yeah. We should either change the dataloader swap so that we re-initialize the dataloader from its class, or not swap the dataloader at all and instead just set the correct sampler.
And I don't think we can prevent this if we also want to set the correct samplers in subclasses of DataLoader; we use all public attributes for reinitialization.
The easiest fix for you would probably be to rename self.iterator to self._iterator, so that this attribute is not passed as an argument during reinitialization.
If we just change the sampler, this might yield unexpected behaviour.
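A minimal sketch of that suggested workaround, reusing _RepeatSampler and the import from the code sample above. Only the attribute name changes; underscore-prefixed attributes are not treated as public, so they are not collected during re-initialization. (On newer PyTorch versions the name _iterator may collide with an internal attribute of DataLoader itself, so a different private name could be safer.)

class FastDataLoader(torch.utils.data.dataloader.DataLoader):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler))
        # Private name: no longer picked up as a public attribute, so it is
        # not forwarded to __init__ when Lightning rebuilds the loader.
        self._iterator = super().__iter__()

    def __len__(self):
        return len(self.batch_sampler.sampler)

    def __iter__(self):
        for i in range(len(self)):
            yield next(self._iterator)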
Probable Cause
#1425