Strange visualization of the log during the test #8930

Closed · utrobinmv opened this issue Aug 16, 2021 · 4 comments · Fixed by #10076
Labels: bug (Something isn't working) · help wanted (Open to be worked on) · logging (Related to the `LoggerConnector` and `log()`)

Comments


utrobinmv commented Aug 16, 2021

🐛 Bug

Hello,

In `test_epoch_end` I compute the final metric:

```python
score = self.metric(valid_preds, valid_labels)
print(score)
self.log("Final_metric", score)
logging_info('Test end Ok!')
```

but in the output I found that the value written to the log differs from the value I printed.

Result

Console output:

```
0.8403041825095058
Test end Ok!

DATALOADER:0 TEST RESULTS
{'Final_metric': 0.8403041958808899}
```

score 0.8403041825095058 != 0.8403041958808899

Couldn't such an error in the results lead me to a wrong conclusion?

Environment:
- pytorch-lightning==1.4.1
- torch==1.9.0

utrobinmv added the `bug` and `help wanted` labels on Aug 16, 2021
tchaton (Contributor) commented Aug 16, 2021

Dear @utrobinmv,

Would you mind providing a reproducible script using the BoringModel: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/bug_report_model.py?

It is unlikely we would be able to help otherwise.

Best,
T.C

qqueing (Contributor) commented Aug 17, 2021

This is similar: #8887

carmocca added the `logging` label on Aug 18, 2021
SkafteNicki (Member) commented Aug 20, 2021

This is due to the difference in precision between float32 and float64:

```python
import torch

print(torch.tensor([0.8403041825095058], dtype=torch.float32).item())
# 0.8403041958808899

print(torch.tensor([0.8403041825095058], dtype=torch.float64).item())
# 0.8403041825095058
```
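
The gap (~1.3e-8) is below float32 machine epsilon, so this is ordinary rounding rather than a logging bug. A quick check (a minimal sketch; the variable names `a` and `b` are mine, not from the report):

```python
import torch

a = 0.8403041825095058                           # value the metric printed (Python float, i.e. float64)
b = torch.tensor(a, dtype=torch.float32).item()  # same value after a float32 round-trip
print(abs(b - a))                                # ~1.34e-08, the observed discrepancy
print(torch.finfo(torch.float32).eps)            # ~1.19e-07, float32 machine epsilon
```

Since the value is less than 1, a discrepancy of ~1.3e-8 is well within float32 rounding error.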

tchaton (Contributor) commented Aug 22, 2021

I believe @SkafteNicki provided the answer, and unfortunately there is nothing we can do other than recommend you use `Trainer(precision=64)`.
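
For illustration, a minimal sketch of the workaround (`MyModel` is a placeholder for your own LightningModule, not a name from this thread):

```python
import pytorch_lightning as pl

model = MyModel()  # placeholder for your LightningModule

# precision=64 runs the model, and hence the logged metric, in float64,
# so the value in the test results matches the printed one
trainer = pl.Trainer(precision=64)
trainer.test(model)
```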

Closing this issue.
