
Fix RMSLE metric #3188

Merged · 6 commits into master · Aug 26, 2020
Conversation

Sordie
Contributor

@Sordie Sordie commented Aug 26, 2020

What does this PR do?

Fixes #3162
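
For context, the linked issue (#3162, "RMSLE metric appears to be incorrect") and the sqrt-based test suggested later in this thread indicate the metric was missing its final square root, returning mean squared log error instead. A minimal reference sketch of what RMSLE should compute (illustrative only, not the exact code changed in this PR; rmsle_reference is a hypothetical name):

import torch

def rmsle_reference(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # RMSLE = sqrt(mean((log(1 + pred) - log(1 + target)) ** 2))
    # assumes non-negative inputs, as the log transform requires
    return torch.sqrt(torch.mean((torch.log1p(pred) - torch.log1p(target)) ** 2))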

Before submitting

  • Was this discussed/approved via a GitHub issue? (not required for typo fixes and documentation improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing instead of bundling different changes together? If not, we ask you to open a separate PR for each change.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?
  • Did you verify new and existing tests pass locally with your changes?
  • If you made a notable change (that affects users), did you update the CHANGELOG?

PR review

Anyone in the community is free to review the PR once the tests have passed.
If your PR wasn't discussed in GitHub issues beforehand, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@mergify mergify bot requested a review from a team August 26, 2020 07:35
@Sordie Sordie changed the title Fix RMSLE metric [WIP] Fix RMSLE metric Aug 26, 2020
@Sordie Sordie marked this pull request as draft August 26, 2020 07:59
@Sordie Sordie changed the title [WIP] Fix RMSLE metric Fix RMSLE metric Aug 26, 2020
@Sordie Sordie marked this pull request as ready for review August 26, 2020 08:29
@codecov

codecov bot commented Aug 26, 2020

Codecov Report

Merging #3188 into master will leave rounded coverage unchanged at 90% (one additional line hit).
The diff coverage is 100%.

@@          Coverage Diff           @@
##           master   #3188   +/-   ##
======================================
  Coverage      90%     90%           
======================================
  Files          84      84           
  Lines        8008    8008           
======================================
+ Hits         7186    7187    +1     
+ Misses        822     821    -1     

@Borda Borda added the bug and Metrics labels Aug 26, 2020
@Borda Borda added the ready label Aug 26, 2020
@mergify mergify bot requested a review from a team August 26, 2020 09:03
@SkafteNicki
Member

We need some randomized tests against sklearn for the regression metrics, similar to:
https://github.com/PyTorchLightning/pytorch-lightning/blob/011235505544f95dacf42cffd545bb53dd4b6b15/tests/metrics/functional/test_classification.py#L38-L59
That would help prevent bugs like this one.

@Borda Borda removed the ready label Aug 26, 2020
@SkafteNicki
Member

This should test the remaining 4 metrics that are not already being tested against another framework:

from functools import partial
from math import sqrt

import pytest
import torch
from sklearn.metrics import (
    mean_absolute_error as mae_sk,
    mean_squared_error as mse_sk,
    mean_squared_log_error as msle_sk,
)

# import path assumed from the 0.9-era layout of pytorch_lightning.metrics
from pytorch_lightning.metrics.functional import mae, mse, rmse, rmsle


@pytest.mark.parametrize(['sklearn_metric', 'torch_metric'], [
    pytest.param(mae_sk, mae, id='mean_absolute_error'),
    pytest.param(mse_sk, mse, id='mean_squared_error'),
    # sklearn has no dedicated rmse/rmsle, so build them from mse/msle
    pytest.param(partial(mse_sk, squared=False), rmse, id='root_mean_squared_error'),
    pytest.param(lambda x, y: sqrt(msle_sk(x, y)), rmsle, id='root_mean_squared_log_error'),
])
def test_against_sklearn(sklearn_metric, torch_metric):
    """Compare PL metrics to the sklearn reference implementation."""
    device = 'cuda' if torch.cuda.is_available() else 'cpu'

    # random predictions and targets in [0, 1); non-negative values are
    # required by the log-based metrics
    pred = torch.rand(300, device=device)
    target = torch.rand(300, device=device)

    sk_score = sklearn_metric(target.cpu().detach().numpy(),
                              pred.cpu().detach().numpy())
    sk_score = torch.tensor(sk_score, dtype=torch.float, device=device)
    pl_score = torch_metric(pred, target)
    assert torch.allclose(sk_score, pl_score)
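
Assuming the test lands next to the existing functional metric tests (a path like tests/metrics/functional/test_regression.py is assumed here), it can be run locally with pytest -k against_sklearn. Since sklearn exposes no dedicated RMSE/RMSLE functions, the test derives them from mean_squared_error(..., squared=False) and sqrt(mean_squared_log_error(...)); a mismatch on the rmsle case would have caught the bug this PR fixes.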

@Borda
Member

Borda commented Aug 26, 2020

This should test the remaining 4 metrics that are not already being tested against another framework:

Cool, mind adding it to this PR directly? 👍

@pep8speaks

pep8speaks commented Aug 26, 2020

Hello @Sordie! Thanks for updating this PR.

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2020-08-26 09:35:07 UTC

@mergify mergify bot requested a review from a team August 26, 2020 10:11
@Borda Borda added the ready label Aug 26, 2020
@williamFalcon williamFalcon merged commit 888340d into Lightning-AI:master Aug 26, 2020
@Borda Borda added this to the 0.9.x milestone Aug 27, 2020
Linked issue: RMSLE metric appears to be incorrect (#3162)