🐛 Bug

To Reproduce

See #8887 and #8930.

The problem is logging precision. With float32, the logging process changes the value I log. For example, in `validation_epoch_end(self, metrics)` I log

'val_clk_sum': tensor(3211., device='cuda:0')

but `trainer.logged_metrics` contains

'val_clk_sum': tensor(3210.9998, device='cuda:0')

I know this does not happen with float64, but I don't want to switch the trainer to 64-bit precision just for logging.
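For context, this kind of drift is consistent with ordinary float32 accumulation error. The sketch below is a minimal standalone illustration (plain Python, simulating float32 rounding with the stdlib `struct` module rather than using the issue's actual PyTorch code): repeatedly accumulating a value that is not exactly representable drifts much further in 32-bit than in 64-bit.

```python
import struct

def to_f32(x: float) -> float:
    # Round a Python float (which is float64) to the nearest float32.
    return struct.unpack("f", struct.pack("f", x))[0]

# Accumulate 0.01 one hundred times; the exact answer is 1.0.
total32 = 0.0  # simulated float32 accumulator
total64 = 0.0  # native float64 accumulator
for _ in range(100):
    total32 = to_f32(total32 + to_f32(0.01))
    total64 += 0.01

print(total32)  # drifts visibly away from 1.0
print(total64)  # off only in the last few bits
```

The error in the 32-bit accumulator is orders of magnitude larger than in the 64-bit one, which matches the observed 3211. → 3210.9998 behavior when metrics are aggregated in float32.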
Expected behavior

trainer.logged_metrics should contain the exact value that was logged in validation_epoch_end (3211.0, not 3210.9998).
Environment
PyTorch Lightning Version (e.g., 1.3.0):
PyTorch Version (e.g., 1.8):
Python version:
OS (e.g., Linux):
CUDA/cuDNN version:
GPU models and configuration:
How you installed PyTorch (conda, pip, source):
If compiling from source, the output of torch.__config__.show():
Any other relevant information:
Additional context
See my previous response: #8887 (comment). A way for us to reproduce the issue is necessary to fix it.
Until then, I'll have to close this issue again. If you can provide the code and your exact environment so that it's reproducible, then we will re-open and look into this.