Feature #753 #932
Conversation
It is incomplete: while trying to explain how to format the predictions, I realized a utility function is required.
Previously, the description text that accompanies the prediction file was auto-generated under the assumption that the corresponding flow had an extension. To support custom flows (which have no extension), this behavior had to be changed: the description can now be passed on initialization. The description stating that it was auto-generated from run_task is now correctly added only if the run was generated through run_flow_on_task.
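For illustration, here is a minimal sketch of what passing the description at initialization could look like. The keyword name (description_text) and the placeholder IDs are assumptions based on this discussion, not a verified signature.

```python
# Sketch only, assuming the run description can be passed at initialization
# via a ``description_text`` keyword; the IDs below are placeholders.
from openml.runs import OpenMLRun

run = OpenMLRun(
    task_id=31,      # placeholder task id
    flow_id=12345,   # placeholder id of a previously published custom flow
    dataset_id=31,   # placeholder dataset id
    description_text="Predictions produced by a custom (extension-less) flow.",
)
# Because this run was not created through run_flow_on_task, no
# "auto-generated by run_task" description is attached; the text above is used.
```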
Thanks for the feedback :) will process after the meeting.
I am not sure what the specifications are for each field. In particular:
- text changes
- fetch the true labels from the dataset instead, to format the predictions (see the sketch below)
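To make the formatting question concrete, here is a rough, hypothetical sketch of how one row of a classification prediction file could be assembled. The helper name, field order, and per-class confidence layout are assumptions for illustration only, not the utility function discussed in this PR.

```python
# Hypothetical helper, for illustration only: build one prediction row as
# repeat/fold/sample/index, followed by per-class confidences, the predicted
# label, and the true label fetched from the dataset.
from typing import Dict, List, Union

def format_prediction_row(
    repeat: int,
    fold: int,
    sample: int,
    index: int,
    prediction: str,
    truth: str,
    proba: Dict[str, float],
    class_labels: List[str],
) -> List[Union[int, float, str]]:
    confidences = [proba.get(label, 0.0) for label in class_labels]
    return [repeat, fold, sample, index, *confidences, prediction, truth]

# Example: one row for a binary task with labels "good"/"bad".
row = format_prediction_row(
    repeat=0, fold=0, sample=0, index=42,
    prediction="good", truth="bad",
    proba={"good": 0.7, "bad": 0.3},
    class_labels=["good", "bad"],
)
print(row)  # [0, 0, 0, 42, 0.7, 0.3, 'good', 'bad']
```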
* list evals name change
* list evals - update
* adding config file to user guide
* finished requested changes
* version1
* minor fixes
* tests
* reformat code
* check new version
* remove get data
* code format
* review comments
* fix duplicate
* type annotate
* example
* tests for exceptions
* fix pep8
* black format
* Preliminary changes
* Updating unit tests for sklearn 0.22 and above
* Triggering sklearn tests + fixes
* Refactoring to inspect.signature in extensions
* Add flake8-print in pre-commit config
* Replace print statements with logging
* fix edit api
Codecov Report
@@            Coverage Diff             @@
##           develop      #932      +/-  ##
===========================================
+ Coverage    87.65%    88.64%    +0.98%
===========================================
  Files           37        37
  Lines         4383      4941      +558
===========================================
+ Hits          3842      4380      +538
- Misses         541       561       +20
Continue to review full report at Codecov.
@mfeurer I don't see any reduced coverage. Am I reading the codecov report wrong, and if so, where?
I think it complains that not enough of the diff is tested: 76% of your diff is tested, while Codecov expects 87.65% of the diff to be tested. I guess the 87.65% is our current test coverage.
Ah, I was looking at the wrong tab 😓 Looks like I'll have to add some tests that cover the error cases and the learning curve tasks. Thanks!
Hey Pieter, I'm afraid I have some more questions on this example (and a few minor change requests).
Also, a NotImplementedError is now raised instead of a TypeError for unsupported task types. Links were added in the example.
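As a rough illustration of that behavior change (the check below is only a sketch, assuming classification and regression tasks are the supported ones; it is not the library's actual dispatch code):

```python
# Sketch only: reject task types that prediction formatting does not yet
# support with NotImplementedError instead of TypeError.
from openml.tasks import OpenMLClassificationTask, OpenMLRegressionTask

def _check_task_supported(task) -> None:
    if not isinstance(task, (OpenMLClassificationTask, OpenMLRegressionTask)):
        raise NotImplementedError(
            f"Formatting predictions for task type {type(task).__name__} "
            "is not supported (yet)."
        )
```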
* change edit_api to reflect server
* change test and example to reflect rest API changes
* tutorial comments
* Update datasets_tutorial.py
…into feature_#753
I did a rebase on develop (because the edit api tests failed); locally it looked like it worked fine. I thought the old commits should simply have been replaced by the ones patched onto the develop head (i.e. same code diff, different commit id)?
Yeah, I don't know how to do this properly either; I'm usually rebasing to avoid such hassle.
No, shouldn't matter. Shall I do a final review and then merge?
I normally merge because it's less of a hassle. After the failed rebase I actually found the general advice not to rebase if the work already lives on the remote.
That would be greatly appreciated! The Travis failures seem to be on the OpenML server side.
Looking for feedback. For the example I still need to finalize how the predictions are generated and formatted, to keep that part clear without distracting from the overall example.