
Making your ML and optimization benchmarks simple and open



Benchopt is a benchmarking suite tailored for machine learning workflows. It is built for simplicity, transparency, and reproducibility. It is implemented in Python but can run algorithms written in many programming languages.

So far, benchopt has been tested with Python, R, Julia, and C/C++ (compiled binaries with a command line interface). Programs available via conda should be compatible as well. See, for instance, the example of usage with R.

Install

It is recommended to use benchopt within a conda environment to fully benefit from the benchopt command line interface (CLI).

To install benchopt, start by creating a new conda environment and then activate it:

conda create -n benchopt python
conda activate benchopt

Then run the following command to install the latest release of benchopt:

pip install -U benchopt

It is also possible to use the latest development version. To do so, run instead:

pip install git+https://github.com/benchopt/benchopt.git

Getting started

After installing benchopt, you can:

  • replicate/modify an existing benchmark
  • create your own benchmark

Using an existing benchmark

Replicating an existing benchmark is simple. Here is how to do it for the L2-regularized logistic regression benchmark.

  1. Clone the benchmark repository and cd into it:

     git clone https://github.com/benchopt/benchmark_logreg_l2
     cd benchmark_logreg_l2

  2. Install the desired solvers automatically with benchopt:

     benchopt install . -s lightning -s sklearn

  3. Run the benchmark to get the figure below:

     benchopt run . --config ./example_config.yml
(Figure: resulting benchmark plot, https://benchopt.github.io/_images/sphx_glr_plot_run_benchmark_001.png)

These steps reproduce the L2-regularized logistic regression benchmark. The complete list of benchmarks is given in the Available benchmarks section below. Refer to the documentation to learn more about the benchopt CLI and its features. You can also easily extend this benchmark by adding a dataset, solver, or metric; learn that and more in the Benchmark workflow section of the documentation.
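To give an idea of what a solver contributes, benchopt's documentation describes three hooks a solver implements: set_objective (receive the problem data), run (iterate for a given budget), and get_result (return the final iterate). The sketch below mirrors that contract in plain, dependency-free Python; the class name, hook bodies, and toy data are illustrative stand-ins rather than benchopt's actual code, so it runs without benchopt installed.

```python
import math

# Hypothetical solver following the hook structure benchopt documents
# (set_objective / run / get_result); plain Python, no benchopt needed.
class GradientDescentSolver:
    name = "Python-GD"  # illustrative solver name

    def set_objective(self, X, y, lmbd):
        # X: list of feature rows, y: labels in {-1, +1}, lmbd: L2 strength
        self.X, self.y, self.lmbd = X, y, lmbd
        self.beta = [0.0] * len(X[0])

    def run(self, n_iter):
        # Full-batch gradient descent on the L2-regularized logistic loss.
        step = 0.1
        for _ in range(n_iter):
            grad = [self.lmbd * b for b in self.beta]
            for xi, yi in zip(self.X, self.y):
                margin = yi * sum(b * x for b, x in zip(self.beta, xi))
                # derivative of log(1 + exp(-margin)) w.r.t. the margin
                coef = -yi / (1.0 + math.exp(margin))
                for j, xij in enumerate(xi):
                    grad[j] += coef * xij
            self.beta = [b - step * g for b, g in zip(self.beta, grad)]

    def get_result(self):
        return dict(beta=self.beta)

# Tiny linearly separable toy problem.
X = [[1.0, 0.2], [0.9, -0.1], [-1.0, 0.3], [-0.8, -0.2]]
y = [1, 1, -1, -1]

solver = GradientDescentSolver()
solver.set_objective(X, y, lmbd=0.1)
solver.run(n_iter=200)
beta = solver.get_result()["beta"]
```

In a real benchmark, such a class would subclass benchopt's BaseSolver, and set_objective would receive the dictionary produced by the benchmark's Objective, so that every solver is evaluated on exactly the same problem instance.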

Creating a benchmark

The section Write a benchmark of the documentation provides a tutorial for creating a benchmark. The benchopt community also maintains a template benchmark to quickly and easily start a new benchmark.
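For orientation, the tutorial describes a benchmark as a small directory with one objective and any number of datasets and solvers. The layout below is a hypothetical example (names are illustrative; the Write a benchmark tutorial is authoritative):

```
my_benchmark/           # hypothetical benchmark name
├── objective.py        # defines the metric(s) all solvers are evaluated on
├── datasets/
│   └── simulated.py    # each file contributes one dataset
└── solvers/
    └── gd.py           # each file contributes one solver
```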

Finding help

Join the benchopt Discord server and get in touch with the community! Feel free to drop us a message to get help with running or constructing benchmarks, or to discuss new features and future development directions for benchopt.

Citing Benchopt

Benchopt is a continuous effort to make ML and optimization benchmarks reproducible and transparent. Join us in this endeavor! If you use benchopt in a scientific publication, please cite:

@inproceedings{benchopt,
   author    = {Moreau, Thomas and Massias, Mathurin and Gramfort, Alexandre
                and Ablin, Pierre and Bannier, Pierre-Antoine
                and Charlier, Benjamin and Dagréou, Mathieu and Dupré la Tour, Tom
                and Durif, Ghislain and F. Dantas, Cassio and Klopfenstein, Quentin
                and Larsson, Johan and Lai, En and Lefort, Tanguy
                and Malézieux, Benoit and Moufad, Badr and T. Nguyen, Binh and Rakotomamonjy,
                Alain and Ramzi, Zaccharie and Salmon, Joseph and Vaiter, Samuel},
   title     = {Benchopt: Reproducible, efficient and collaborative optimization benchmarks},
   year      = {2022},
   booktitle = {NeurIPS},
   url       = {https://arxiv.org/abs/2206.13424}
}

Available benchmarks

| Problem                                  | Results | Build Status |
|------------------------------------------|---------|--------------|
| Ordinary Least Squares (OLS)             | Results | Build Status |
| Non-Negative Least Squares (NNLS)        | Results | Build Status |
| LASSO: L1-Regularized Least Squares      | Results | Build Status |
| LASSO Path                               | Results | Build Status |
| Elastic Net                              |         | Build Status |
| MCP                                      | Results | Build Status |
| L2-Regularized Logistic Regression       | Results | Build Status |
| L1-Regularized Logistic Regression       | Results | Build Status |
| L2-Regularized Huber Regression          |         | Build Status |
| L1-Regularized Quantile Regression       | Results | Build Status |
| Linear SVM for Binary Classification     |         | Build Status |
| Linear ICA                               |         | Build Status |
| Approximate Joint Diagonalization (AJD)  |         | Build Status |
| 1D Total Variation Denoising             |         | Build Status |
| 2D Total Variation Denoising             |         | Build Status |
| ResNet Classification                    | Results | Build Status |
| Bilevel Optimization                     | Results | Build Status |
