EarlyStopping callback metrics on different devices with a single GPU #8267
Answered by awaelchli

Les1ie asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
The device of the metric returned by validation_step is the GPU. The related code is:

```python
def validation_step(self, batch, batch_idx):
    x, y = batch
    if y.device != self.device:
        y = y.to(self.device)
    y_hat = self(x)
    loss = self.loss(y_hat, y)  # loss.device is cuda
    self.log('valid loss', loss.item())
    return loss
```

After an epoch of validation completed while using EarlyStopping, the following error occurred:
This error didn't appear until I updated the version of pytorch_lightning.
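For context, here is a minimal sketch of how an EarlyStopping callback is typically wired to the logged key above. The original post does not show the callback configuration, so the monitor key, patience, mode, and Trainer arguments below are assumptions for illustration only:

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping

# Assumed setup: 'valid loss' must match the key passed to self.log()
# in validation_step; patience and mode are placeholder values.
early_stop = EarlyStopping(monitor='valid loss', mode='min', patience=3)

trainer = pl.Trainer(
    gpus=1,                  # single-GPU run, as in the discussion title
    callbacks=[early_stop],
)
# trainer.fit(model, datamodule=dm)  # model and dm are assumed to exist
```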
Answered by awaelchli on Jul 5, 2021
Looking into it in #8295
Answer selected by Les1ie