Benchmarks #43

Open

adamczykm opened this issue Aug 21, 2015 · 2 comments

Comments

@adamczykm

Have you considered adding some simple time & space benchmarking of the libraries?

@gelisam
Owner

gelisam commented Aug 21, 2015

Maybe. Is this a dimension in which libraries differ significantly enough to make you pick one over another? The cost would be high, so we should only do it if the payoff is high enough.

The costs include:

  • The zoo would feel more competitive. The current list doesn't try to rank libraries from best to worst, and instead tries to explain the different choices made by each library. The assumption is that some choices are better in some circumstances than in others, or at the very least that some users will prefer one API over another. A benchmark necessarily orders libraries from fastest to slowest, and that's not the kind of zoo I had in mind.
  • Implementing the benchmark in all libraries. There's already a lot of work remaining just to cover all the libraries, and this would add even more work per library. We can't simply measure the performance of the existing task, because gloss and the user clicking the buttons with the mouse would skew the results; a benchmark would need a headless driver instead (see the sketch after this list).
  • Being fair would be even more work. Microbenchmarking is hard, and it's probably the case that some libraries are faster under some circumstances (say, when the event network is static) while others are faster under different ones. So in order to accurately demonstrate these nuances, we'd have to implement several benchmarks per library.
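
For concreteness, here is a minimal sketch of what such a headless benchmark could look like, assuming reactive-banana (1.x API) as the library under test and criterion as the harness. The `countClicks` helper and the 100 000-click workload are made up for illustration; they merely stand in for the zoo's button-clicking task, driven programmatically instead of through gloss and the mouse.

```haskell
-- A sketch only: a headless click-counting benchmark, assuming
-- reactive-banana >= 1.0 and criterion. Every other library in the zoo
-- would need an analogous driver of its own.
import Control.Monad (replicateM_)
import Criterion.Main (bench, defaultMain, nfIO)
import Data.IORef (newIORef, readIORef, writeIORef)
import Reactive.Banana
import Reactive.Banana.Frameworks

-- Build an event network that counts clicks, fire n clicks
-- programmatically, and return the final count.
countClicks :: Int -> IO Int
countClicks n = do
  (addClick, fireClick) <- newAddHandler
  result <- newIORef (0 :: Int)
  network <- compile $ do
    eClick <- fromAddHandler addClick
    eCount <- accumE 0 ((+ 1) <$ eClick)
    reactimate (writeIORef result <$> eCount)
  actuate network
  replicateM_ n (fireClick ())
  readIORef result

main :: IO ()
main = defaultMain
  [ bench "reactive-banana/100k clicks" (nfIO (countClicks 100000)) ]
```

Even this toy network is static, so its numbers say nothing about dynamic switching; that is exactly the fairness concern above, and a serious comparison would need several such drivers per library.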

The benefits include:

  • It would allow users who care about performance to pick the library that best fits their needs.
  • It would give another dimension along which to compare libraries. Right now there are many libraries tagged with the exact same set of keywords, so I don't feel like the zoo does a very good job of summarizing the differences between them.
  • If, as I suspect, well-maintained libraries like reactive-banana and sodium are more performant than relatively unknown libraries like DysFRP, the benchmark would give us an objective reason to promote the well-maintained ones.

Anyway. Is there a particular benchmark you had in mind?

@adamczykm
Author

Nice review!
"Is this a dimension in which libraries differ significantly enough to make you pick one over another?"
Yes, that's likely.
It's mainly because of the benefits you listed that I asked about benchmarking. Besides, there might be libraries that focus specifically on helping you write faster code, and that effort is completely ignored in the current zoo. But I can see that implementing benchmarks would require a lot of work, including rewriting each example for a completely new task.
In the end, in my opinion, it is the library developers' duty to provide information about performance instead of forcing users to waste hours profiling their work.
