adam2392 changed the title from "Implement asv benchmarks similar to scikit-learn" to "[BENCHMARKING] Implement asv benchmarks similar to scikit-learn" on Oct 31, 2023.
Setup and Work Needed
Currently on main, we have a few files that should serve as boilerplate for setting up the ASV benchmarking suite: https://github.com/neurodata/scikit-tree/blob/main/asv.conf.json and https://github.com/neurodata/scikit-tree/tree/main/benchmarks.

What is really needed is:
i) an end-to-end demonstration of asv benchmarking on a PR branch vs., say, main, showing that our setup is correct
ii) documentation for how to run the ASV benchmarks
iii) [nice-to-have] a simple Docker container that runs the ASV benchmarks and can be launched with a very simple command on any Linux computer. The container should probably have a fixed amount of RAM and CPU resources assigned to it as well.
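For items (i) and (iii), a minimal workflow sketch (this assumes asv is installed and the asv.conf.json at the repo root is valid; the branch names and the image name are illustrative, not part of the current setup):

```shell
# Install airspeed velocity (asv).
pip install asv

# From the repository root, benchmark the current (PR) branch against main
# and flag results that changed by more than 10%:
asv continuous --factor 1.1 main HEAD

# To run under fixed resources, the same command can be run inside a
# container capped at, e.g., 8 GB of RAM and 2 CPUs (image name hypothetical):
docker run --memory=8g --cpus=2 scikit-tree-asv asv continuous main HEAD
```

`asv continuous` builds and benchmarks both revisions and prints a comparison table; `asv compare main HEAD` can re-render the comparison later from saved results.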
Experiments to include
We should probably use a simulation setup similar to scikit-learn's, increase both feature dimensionality and sample dimensionality, and run a separate benchmark for each tree/forest method.
We can leverage these results to eventually compare against what is in sklearn.
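As a sketch of what one such benchmark could look like, here is a hypothetical ASV benchmark class that sweeps sample and feature dimensionality. The estimator used (sklearn's RandomForestClassifier) is a stand-in for the scikit-tree tree/forest methods; in practice there would be one such class per method, and the file name (e.g. benchmarks/bench_forest.py) is an assumption:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


class ForestBenchmark:
    # ASV runs one timing per combination of these parameter values,
    # sweeping both sample and feature dimensionality.
    param_names = ["n_samples", "n_features"]
    params = ([100, 1000], [10, 100])

    def setup(self, n_samples, n_features):
        # Simple simulated data; setup time is excluded from the timing.
        rng = np.random.default_rng(0)
        self.X = rng.standard_normal((n_samples, n_features))
        self.y = rng.integers(0, 2, size=n_samples)
        self.est = RandomForestClassifier(n_estimators=10, random_state=0)

    def time_fit(self, n_samples, n_features):
        # ASV times every method whose name starts with "time_".
        self.est.fit(self.X, self.y)
```

Swapping in each scikit-tree estimator here would give the per-method benchmarks, and running the same suite against sklearn's estimators gives the eventual comparison.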