Add benchmarks #321

Merged (5 commits) on Aug 31, 2019
Conversation

@ericphanson (Collaborator) commented Aug 31, 2019

Uses PkgBenchmark to run benchmarks on each PR, using some nice code from @tkf's Transducers.jl.

The benchmarks themselves are written with an @add_benchmark macro taken from MOI. This macro allows defining benchmarks that take a callback, which can then be used to easily generate different benchmark suites:

using Convex, SCS, ECOS
using BenchmarkTools: BenchmarkGroup

include("benchmarks/benchmarks.jl") # defines module Benchmarks, with the actual benchmarks
const SUITE = BenchmarkGroup()
SUITE["SCS"] = Benchmarks.suite(p -> Convex.solve!(p, SCSSolver(verbose=0)))
SUITE["ECOS"] = Benchmarks.suite(p -> Convex.solve!(p, ECOSSolver(verbose=0)); exclude = [r"sdp", r"SDP"])
SUITE["formulation"] = Benchmarks.suite(Convex.conic_problem)

Here, you pass a callback specifying what to do with each problem: solve it with ECOS, solve it with SCS, or just generate the MPB (MathProgBase) standard form with Convex.conic_problem. This runs all the benchmarks except those excluded by a regex search. The same mechanism could also be used for comparing the speed of solvers on Convex.jl problems; maybe at some point we could even incorporate benchmark data into the docs, all via Travis.
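For concreteness, the callback pattern described above can be sketched as follows. This is a hypothetical reconstruction, not the actual Benchmarks module from this PR: the registry name `BENCHMARKS`, the benchmark body, and the macro internals are all illustrative assumptions about what an MOI-style @add_benchmark might look like.

```julia
using Convex
using BenchmarkTools: BenchmarkGroup, @benchmarkable

# Hypothetical registry of benchmark functions, keyed by name.
const BENCHMARKS = Dict{String,Function}()

# Sketch of an MOI-style @add_benchmark: it evaluates the function
# definition and registers it under its own name.
macro add_benchmark(f)
    name = f.args[1].args[1]  # function name from `function name(cb) ... end`
    return quote
        $(esc(f))
        BENCHMARKS[$(String(name))] = $(esc(name))
    end
end

# An illustrative benchmark: build a problem, then hand it to the callback
# (which might solve it, or just formulate it).
@add_benchmark function affine_sum(handle_problem!)
    x = Variable(10)
    p = minimize(sum(x), x >= 1)
    handle_problem!(p)
end

# Build a suite by applying one callback to every registered benchmark,
# skipping any whose name matches an exclusion regex.
function suite(handle_problem!; exclude::Vector{Regex} = Regex[])
    group = BenchmarkGroup()
    for (name, bench) in BENCHMARKS
        any(r -> occursin(r, name), exclude) && continue
        group[name] = @benchmarkable $bench($handle_problem!)
    end
    return group
end
```

Under this sketch, `suite(p -> Convex.solve!(p, SCSSolver(verbose=0)))` and `suite(Convex.conic_problem)` would produce two suites over the same problems, differing only in what is done with each problem.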

The benchmark problems themselves are mostly taken from Convex.jl's tests and are certainly incomplete: they cover only some of the affine and SDP tests. However, I think they may be enough to see the effect of large changes that affect all atoms more-or-less equally. Ideally, at some point we could have a benchmark for every atom, just as we have tests for each atom.

Hopefully I've got the CI bits right. I suspect the tests will fail on this PR, because it will try to compare benchmarks against master, which doesn't have any yet; that wouldn't mean something is actually wrong. The real test will be the first PR after this one is merged.
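For reference, the comparison-against-master step that CI performs can be sketched locally with PkgBenchmark's `judge`. This assumes the PkgBenchmark API of that era and that the suite lives where PkgBenchmark expects it (by default, `benchmark/benchmarks.jl`); treat the exact calls as a sketch, not the PR's actual CI script.

```julia
using PkgBenchmark

# Run the package's benchmark suite on the current checkout and on the
# `master` branch, then compare the two runs for regressions/improvements.
results = judge("Convex", "master")

# Write a human-readable summary of the comparison.
export_markdown("judgement.md", results)
```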

Edit: no, this PR passed because Travis doesn't use the PR's .travis.yml. That makes sense :).

This is priority 1 on #320.

@ericphanson (Author)

I'll go ahead and merge this! The only thing that can go wrong is CI, and I'll fix that if so.

@ericphanson ericphanson merged commit ee2dcd8 into jump-dev:master Aug 31, 2019
This was referenced Sep 9, 2019