
[Metrics] AUROC error on multilabel + improved testing #3350

Merged

Conversation

@SkafteNicki (Member)

What does this PR do?

Fixes #3303.
Also improves metric testing by adding more tests against sklearn (related to #3230).
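
A minimal sketch of the kind of guard this fix implies (the function name and error message below are illustrative, not copied from the diff): the functional AUROC should raise instead of silently returning a score when given a multilabel target.

```python
import torch

def _check_binary_target(target: torch.Tensor) -> None:
    # Illustrative guard: this AUROC is only defined for binary targets,
    # so reject multilabel/multiclass input up front instead of silently
    # returning a misleading score.
    if target.ndim > 1 or torch.unique(target).numel() > 2:
        raise ValueError(
            "AUROC metric is meant for binary classification; "
            "multilabel/multiclass targets are not supported"
        )

# A multilabel target (one row per sample, one column per label) now fails loudly:
target = torch.tensor([[1, 0, 1], [0, 1, 0]])
_check_binary_target(target)  # raises ValueError
```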

Before submitting

  • Was this discussed/approved via a GitHub issue? (not required for typos or docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together? Otherwise, we ask you to create a separate PR for every change.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?
  • Did you verify new and existing tests pass locally with your changes?
  • If you made a notable change (that affects users), did you update the CHANGELOG?

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@pep8speaks commented Sep 4, 2020

Hello @SkafteNicki! Thanks for updating this PR.

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2020-09-17 14:56:12 UTC

@mergify mergify bot requested a review from a team September 4, 2020 13:34
@codecov bot commented Sep 7, 2020

Codecov Report

Merging #3350 into master will increase coverage by 3%.
The diff coverage is 100%.

@@           Coverage Diff           @@
##           master   #3350    +/-   ##
=======================================
+ Coverage      88%     91%    +3%     
=======================================
  Files         109     109            
  Lines        8436    8313   -123     
=======================================
+ Hits         7409    7562   +153     
+ Misses       1027     751   -276     

@mergify mergify bot requested a review from a team September 7, 2020 17:32
@Borda Borda added bug Something isn't working Metrics labels Sep 7, 2020
@mergify mergify bot requested a review from a team September 7, 2020 22:20
@mergify bot (Contributor) commented Sep 11, 2020

This pull request is now in conflict... :(

@awaelchli (Contributor) left a comment

LGTM, but be careful not to make tests too complicated.

tests/metrics/functional/test_classification.py: two review threads (outdated, resolved)
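
The resolved threads above concern the new sklearn-comparison tests mentioned in the PR description. A minimal sketch of that testing pattern (the import path and exact call signature are assumptions and may differ between versions, and the test name is invented):

```python
import numpy as np
import torch
from sklearn.metrics import roc_auc_score

# `auroc` stands in for the Lightning functional metric under test; the
# import path below is an assumption, not taken from the actual diff.
from pytorch_lightning.metrics.functional import auroc

def test_auroc_against_sklearn():
    # A fixed binary example so both libraries see identical inputs.
    pred = torch.tensor([0.1, 0.4, 0.35, 0.8])
    target = torch.tensor([0, 0, 1, 1])
    pl_score = auroc(pred, target)
    sk_score = roc_auc_score(target.numpy(), pred.numpy())
    assert np.allclose(pl_score.item(), sk_score)
```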
@mergify mergify bot requested a review from a team September 11, 2020 12:36
SkafteNicki and others added 2 commits September 14, 2020 16:48
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
@mergify bot (Contributor) commented Sep 15, 2020

This pull request is now in conflict... :(

@edenlightning (Contributor)

Mind looking at the failing tests?

@mergify bot (Contributor) commented Sep 17, 2020

This pull request is now in conflict... :(

@Borda Borda added the ready PRs ready to be merged label Sep 20, 2020
@SkafteNicki SkafteNicki merged commit b1347c9 into Lightning-AI:master Sep 21, 2020
@SkafteNicki SkafteNicki deleted the metrics/error_on_multilabel branch October 8, 2020 14:20
Successfully merging this pull request may close these issues:

AUROC metric should throw an error when used for multi-class problems (#3303)