This feature is not yet fully supported. In principle, you can use the upload_results script. However, it requires all experiments to run successfully, as OpenML does not (yet) have a way to define partial results (for example, when a framework crashes on one of the folds). Even if all experiments run successfully, if you ran different folds with separate commands you will probably need to reorganize the directory structure a bit so that the upload script understands which results together form a completed run. PRs to improve the integration are welcome. If you are interested, or want to re-use some existing code to write a script yourself instead, have a look here:
Hopefully this is enough to get you started. I will be unavailable/very busy over the next few weeks due to the holidays and a packed January calendar, so my responses may be slow. Thanks for your understanding.
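For reference, the basic publishing flow in openml-python looks roughly like the sketch below. This is not the upload_results script itself; the task ID and model are placeholders, and a configured OpenML API key is assumed.

```python
# Minimal sketch of publishing a run with openml-python.
# NOT the upload_results script: the task ID and model are placeholders.
import openml
from sklearn.ensemble import RandomForestClassifier

openml.config.apikey = "YOUR_API_KEY"  # assumes an existing OpenML account

task = openml.tasks.get_task(31)  # placeholder task ID
model = RandomForestClassifier()

# Run the model on the task's folds and collect predictions locally.
run = openml.runs.run_model_on_task(model, task)

# Upload the predictions (and the flow, if it is new) as a run on OpenML.
run.publish()
print(f"Published run: {run.run_id}")
```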
I have created two datasets, with one task for each dataset, and a benchmark suite containing the two tasks. I am now running AutoML frameworks on the tasks via runbenchmark.py. I assume this step will create runs, and that these runs can then be uploaded to an OpenML "run collection". Where can I find documentation on how to upload the results (runs?) of each framework to OpenML?
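A "run collection" on OpenML corresponds to a study. Grouping already-uploaded runs into one with openml-python might look like the sketch below; the run IDs are placeholders for the IDs returned when the runs were published.

```python
# Minimal sketch of grouping published runs into an OpenML study
# (what the OpenML website presents as a run collection).
import openml

run_ids = [1, 2]  # placeholders: IDs of runs already uploaded to OpenML

study = openml.study.create_study(
    name="AutoML frameworks on my benchmark suite",
    description="Runs of several AutoML frameworks on my two tasks",
    run_ids=run_ids,
)
study.publish()
print(f"Published study: {study.study_id}")
```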
Thank you in advance for your help.