Make predictions from a run easily accessible #1103
In the meantime, you can access predictions with:

```python
import openml
import arff  # liac-arff

r = openml.runs.get_run(RUN_ID)
response = openml._api_calls._download_text_file(r.predictions_url)
predictions_arff = arff.loads(response)
```
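For reference, liac-arff's `arff.loads` returns a dictionary with `attributes` (a list of `(name, type)` pairs) and `data` (a list of rows). A minimal sketch of turning that into a dataframe, assuming that structure (the helper name `arff_to_dataframe` is illustrative, not part of openml-python):

```python
import pandas as pd


def arff_to_dataframe(parsed):
    """Convert a parsed ARFF dict (as returned by liac-arff's arff.loads)
    into a pandas DataFrame, using the ARFF attribute names as columns."""
    columns = [name for name, _ in parsed["attributes"]]
    return pd.DataFrame(parsed["data"], columns=columns)
```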
- PGijsbers added a commit that referenced this issue on Mar 11, 2022
- PGijsbers added a commit that referenced this issue on Mar 11, 2022 (merged)
- PGijsbers added a commit that referenced this issue on Apr 19, 2022: "* Add easy way to retrieve run predictions * Log addition of ``predictions`` (#1103)"
- PGijsbers added a commit to Mirkazemi/openml-python that referenced this issue on Feb 23, 2023: "* Add easy way to retrieve run predictions * Log addition of ``predictions`` (openml#1103)"
If I'm not overlooking anything, there is no convenient way to access the predictions stored in a run. The `run` object only exposes functions which use the data internally (e.g. `get_metric_fn`); internally, the predictions are downloaded and processed. I propose we add a `predictions` property which can serve the predictions in dataframe format through lazy loading. The remainder of the code should then also be refactored to use this property.
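The proposed property could look roughly like the sketch below. This is only an illustration of the lazy-loading pattern, not the actual openml-python implementation; the names `_predictions_cache` and `_download_predictions` are hypothetical:

```python
import pandas as pd


class Run:
    """Toy stand-in for an OpenML run object, showing a lazily loaded
    ``predictions`` property backed by a private cache."""

    def __init__(self, predictions_url):
        self.predictions_url = predictions_url
        self._predictions_cache = None  # filled on first access

    def _download_predictions(self):
        # In openml-python this would fetch and parse the ARFF file at
        # ``self.predictions_url``; stubbed out here for illustration.
        raise NotImplementedError

    @property
    def predictions(self):
        # Lazy loading: download and parse only on first access,
        # then serve the cached dataframe on every later access.
        if self._predictions_cache is None:
            self._predictions_cache = self._download_predictions()
        return self._predictions_cache
```

Existing helpers such as `get_metric_fn` could then read from `run.predictions` instead of re-downloading the file themselves.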