In the latest version of Lightning, it no longer seems possible to have multiple callbacks that can stop training.

Please reproduce using the BoringModel

If you have multiple callbacks which can do early stopping, only the last one is effective.

Create a callback with early stopping, MyStoppingCallback(). Add it, then EarlyStopping(), to the callbacks argument of the Trainer, e.g. callbacks = [MyStoppingCallback(), EarlyStopping('val_loss')].

The callback is triggered and determines that it needs to stop, but training continues.

On the other hand, if you change the order (e.g. callbacks = [EarlyStopping('val_loss'), MyStoppingCallback()]), training does stop via MyStoppingCallback, but the EarlyStopping callback is probably never triggered.
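The clobbering can be illustrated without Lightning at all. Below is a minimal sketch (all class names are hypothetical stand-ins, not Lightning APIs) of two callbacks that each *assign* their own decision to a shared stop flag, so whichever callback runs last wins:

```python
class FakeTrainer:
    """Minimal stand-in for the real Trainer: just holds the stop flag."""
    def __init__(self):
        self.should_stop = False

class WantsToStop:
    def on_validation_end(self, trainer):
        trainer.should_stop = True  # this callback decides to stop

class DoesNotWantToStop:
    def on_validation_end(self, trainer):
        should_stop = False  # its own criterion is not met ...
        trainer.should_stop = should_stop  # ... and it overwrites the flag

trainer = FakeTrainer()
for cb in [WantsToStop(), DoesNotWantToStop()]:
    cb.on_validation_end(trainer)

print(trainer.should_stop)  # False: the second callback clobbered the first
```

Reversing the list order makes the run stop, which matches the order-dependent behavior described above.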
# Copyright The PyTorch Lightning team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------
# --------------------------------------------
# --------------------------------------------
# USE THIS MODEL TO REPRODUCE A BUG YOU REPORT
# --------------------------------------------
# --------------------------------------------
# --------------------------------------------
import os

import torch
from torch.utils.data import Dataset

from pl_examples import cli_lightning_logo
from pytorch_lightning import LightningModule, Trainer
from pytorch_lightning.callbacks import Callback, EarlyStopping


class RandomDataset(Dataset):
    """
    >>> RandomDataset(size=10, length=20)  # doctest: +ELLIPSIS
    <...bug_report_model.RandomDataset object at ...>
    """

    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


class BoringModel(LightningModule):
    """
    >>> BoringModel()  # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
    BoringModel(
      (layer): Linear(...)
    )
    """

    def __init__(self):
        """
        Testing PL Module

        Use as follows:
        - subclass
        - modify the behavior for what you want

        class TestModel(BaseTestModel):
            def training_step(...):
                # do your own thing

        or:

        model = BaseTestModel()
        model.training_epoch_end = None
        """
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def loss(self, batch, prediction):
        # An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls
        return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction))

    def step(self, x):
        x = self.layer(x)
        out = torch.nn.functional.mse_loss(x, torch.ones_like(x))
        return out

    def training_step(self, batch, batch_idx):
        output = self.layer(batch)
        loss = self.loss(batch, output)
        return {"loss": loss}

    def training_step_end(self, training_step_outputs):
        return training_step_outputs

    def training_epoch_end(self, outputs) -> None:
        torch.stack([x["loss"] for x in outputs]).mean()

    def validation_step(self, batch, batch_idx):
        output = self.layer(batch)
        loss = self.loss(batch, output)
        self.log('val_loss', loss)
        return {"x": loss}

    def validation_epoch_end(self, outputs) -> None:
        torch.stack([x['x'] for x in outputs]).mean()

    def test_step(self, batch, batch_idx):
        output = self.layer(batch)
        loss = self.loss(batch, output)
        return {"y": loss}

    def test_epoch_end(self, outputs) -> None:
        torch.stack([x["y"] for x in outputs]).mean()

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1)
        lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
        return [optimizer], [lr_scheduler]


# NOTE: If you are using a cmd line to run your script,
# provide the cmd line as below.
# opt = "--max_epochs 1 --limit_train_batches 1".split(" ")
# parser = ArgumentParser()
# args = parser.parse_args(opt)


class EarlyStoppingExample(Callback):

    def on_validation_end(self, trainer, pl_module):
        if trainer.current_epoch > 5:
            should_stop = True
        else:
            should_stop = False
        if bool(should_stop):
            print("\nSTOPPING!!!!!!!!!!!!!!!!!!!!\n")
            self.stopped_epoch = trainer.current_epoch
            trainer.should_stop = True
        # stop every ddp process if any world process decides to stop
        should_stop = trainer.training_type_plugin.reduce_early_stopping_decision(should_stop)
        trainer.should_stop = should_stop


def test_run():

    class TestModel(BoringModel):

        def on_train_epoch_start(self) -> None:
            pass

    # fake data
    train_data = torch.utils.data.DataLoader(RandomDataset(32, 64))
    val_data = torch.utils.data.DataLoader(RandomDataset(32, 64))
    test_data = torch.utils.data.DataLoader(RandomDataset(32, 64))

    # model
    early_stopping = EarlyStopping('val_loss', patience=50)
    model = TestModel()
    trainer = Trainer(
        default_root_dir=os.getcwd(),
        limit_train_batches=1,
        limit_val_batches=1,
        max_epochs=100,
        weights_summary=None,
        callbacks=[
            EarlyStoppingExample(),
            early_stopping,
        ],
    )
    trainer.fit(model, train_data, val_data)
    trainer.test(test_dataloaders=test_data)


if __name__ == '__main__':
    # cli_lightning_logo()
    test_run()
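One plausible workaround (an assumption on my part, not an official Lightning fix) is for each stopping callback to OR its decision into the flag rather than assigning it, so a stop request set by an earlier callback is never un-set. A minimal sketch, with the hypothetical CooperativeStoppingCallback standing in for any custom callback:

```python
class FakeTrainer:
    """Minimal stand-in for the real Trainer: just holds the stop flag."""
    def __init__(self):
        self.should_stop = False

class CooperativeStoppingCallback:
    """Hypothetical callback that never un-sets another callback's stop request."""
    def __init__(self, wants_to_stop):
        self.wants_to_stop = wants_to_stop

    def on_validation_end(self, trainer):
        # OR instead of plain assignment: a True set earlier survives.
        trainer.should_stop = trainer.should_stop or self.wants_to_stop

trainer = FakeTrainer()
for cb in [CooperativeStoppingCallback(True), CooperativeStoppingCallback(False)]:
    cb.on_validation_end(trainer)

print(trainer.should_stop)  # True: the earlier stop request is preserved
```

With this pattern the callback order in the callbacks list no longer matters for whether training stops.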
Expected behavior

Training should stop when any callback in the list requests it, regardless of the order of the callbacks.
How you installed PyTorch Lightning (conda, pip, source): pip