Adding support for scikit-learn > 0.22 #936
Conversation
Codecov Report
```
@@             Coverage Diff             @@
##           develop     #936      +/-   ##
===========================================
+ Coverage    87.76%   87.79%    +0.03%
===========================================
  Files           37       37
  Lines         4429     4433        +4
===========================================
+ Hits          3887     3892        +5
+ Misses         542      541        -1
===========================================
```
Continue to review full report at Codecov.
openml/testing.py (outdated diff)
```diff
@@ -223,6 +225,10 @@ def _check_fold_timing_evaluations(
             "wall_clock_time_millis_testing": (0, max_time_allowed),
             "wall_clock_time_millis": (0, max_time_allowed),
         }
+        if LooseVersion(sklearn.__version__) >= "0.23":
```
Why is it that these can't, or don't need to, be tested?
This was a placeholder check, since there was only one unit test which had a slightly different failure point. I looked into it and had to make this change to account for a change in KMeans in scikit-learn 0.23.1. I'm not sure this is the most scalable, long-term solution.
For this unit test, I have currently edited the check so that it passes with scikit-learn 0.23.1. However, this is of course not a neat solution. I think the problem stems from how the following two changes interact: ... In summary, for sklearn 0.23.1, ... Thoughts, please?
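For illustration, a minimal sketch of the version-gating pattern discussed here; the gated expectations are hypothetical placeholders, not the actual values from the openml-python test suite:

```python
import sklearn
from distutils.version import LooseVersion

# Adjust a test expectation only for scikit-learn >= 0.23, where the
# KMeans behaviour changed. Both values below are hypothetical
# placeholders used purely to illustrate the gating pattern.
if LooseVersion(sklearn.__version__) >= "0.23":
    expected_n_evaluations = 11  # hypothetical post-0.23 expectation
else:
    expected_n_evaluations = 10  # hypothetical pre-0.23 expectation
```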
Given the linked changes, I propose we update our code to use `inspect.signature`.
I agree with Pieter that we should use `signature` here and make sure that we detect the number of hyperparameters with a default correctly. Maybe we could actually check for both (the number of hyperparameters with a default and the number without) to make that test a bit more strict.
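For concreteness, a minimal sketch of such a check using `inspect.signature`; the helper name is hypothetical, not the actual openml-python implementation:

```python
import inspect

from sklearn.cluster import KMeans


def count_hyperparameters(estimator_class):
    """Count ``__init__`` parameters with and without a default value.

    Hypothetical helper for illustration only.
    """
    params = inspect.signature(estimator_class.__init__).parameters
    with_default = sum(
        1
        for name, p in params.items()
        if name != "self" and p.default is not inspect.Parameter.empty
    )
    without_default = sum(
        1
        for name, p in params.items()
        if name != "self" and p.default is inspect.Parameter.empty
    )
    return with_default, without_default


# All KMeans hyperparameters have defaults, so the second count is 0.
print(count_hyperparameters(KMeans))
```

Asserting on both counts would also catch a hyperparameter silently gaining or losing its default between scikit-learn releases.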
tests/test_extensions/test_sklearn_extension/test_sklearn_extension.py (outdated review thread, resolved)
I'm fine with this PR. @PGijsbers what about you?
Looks good to me! Thanks! 🎉
Squashed commit messages:

* Create first section: Creating Custom Flow
* Add Section: Using the Flow (incomplete: while trying to explain how to format the predictions, I realized a utility function is required)
* Allow run description text to be custom. Previously, the description text that accompanies the prediction file was auto-generated under the assumption that the corresponding flow had an extension. To support custom flows (with no extension), this behavior had to be changed; the description can now be passed on initialization. The description that was auto-generated from run_task is now correctly added only if the run was generated through run_flow_on_task.
* Draft for Custom Flow tutorial
* Add minimal docstring to OpenMLRun (I am not sure, for each field, what the specifications are)
* Process code review feedback; in particular: text changes, and fetch the true labels from the dataset instead
* Use the format utility function in automatic runs, to format the predictions
* Process @mfeurer feedback
* Rename arguments of list_evaluations (#933)
* list evals name change
* list evals - update
* adding config file to user guide (#931)
* adding config file to user guide
* finished requested changes
* Edit api (#935)
* version1
* minor fixes
* tests
* reformat code
* check new version
* remove get data
* code format
* review comments
* fix duplicate
* type annotate
* example
* tests for exceptions
* fix pep8
* black format
* Adding support for scikit-learn > 0.22 (#936)
* Preliminary changes
* Updating unit tests for sklearn 0.22 and above
* Triggering sklearn tests + fixes
* Refactoring to inspect.signature in extensions
* Add flake8-print in pre-commit (#939)
* Add flake8-print in pre-commit config
* Replace print statements with logging
* Fix edit api (#940)
* fix edit api
* Update subflow paragraph
* Check the ClassificationTask has class label set
* Test task is of supported type
* Add tests for format_prediction
* Adding Python 3.8 support (#916)
* Adding Python 3.8 support
* Fixing indentation
* Execute test cases for 3.8
* Testing
* Making install script fail
* Process feedback Neeratyoy
* Test Exception with Regex; also throw NotImplementedError instead of TypeError for unsupported task types, and added links in the example
* change edit_api to reflect server (#941)
* change edit_api to reflect server
* change test and example to reflect rest API changes
* tutorial comments
* Update datasets_tutorial.py

Co-authored-by: Bilgecelik <38037323+Bilgecelik@users.noreply.github.com>
Co-authored-by: marcoslbueno <38478211+marcoslbueno@users.noreply.github.com>
Co-authored-by: Sahithya Ravi <44670788+sahithyaravi1493@users.noreply.github.com>
Co-authored-by: Neeratyoy Mallik <neeratyoy@gmail.com>
Co-authored-by: zikun <33176974+zikun@users.noreply.github.com>
What does this PR implement/fix? Explain your changes.
Adds support for scikit-learn 0.22.2 and above by ensuring all unit tests pass.
How should this PR be tested?
By checking the Travis and AppVeyor unit tests.
Any other comments?
These changes add support for scikit-learn versions above 0.22 on Python 3.6 and 3.7.