This repository has been archived by the owner on Nov 23, 2019. It is now read-only.

Guidelines on how to test analyzer performance #15

Open
giggio opened this issue Nov 26, 2015 · 3 comments

Comments

@giggio

giggio commented Nov 26, 2015

Some ideas:

  • How do we measure an analyzer's performance? (A rough sketch follows this list.)
  • Create a sample test project or tool
  • What are acceptable thresholds? Could we define thresholds that classify analyzers as slow, regular, or fast?
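
One possible starting point for the first question is timing a single analysis pass with Roslyn's `CompilationWithAnalyzers` API. This is only a sketch: the analyzer type (`MyAnalyzer`) and the inline source text are placeholders, and a real benchmark would load an existing project instead.

```csharp
using System;
using System.Collections.Immutable;
using System.Diagnostics;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.Diagnostics;

// Build a small compilation to analyze. "MyAnalyzer" and the source text are
// placeholders; substitute the analyzer under test and a realistic code base.
var tree = CSharpSyntaxTree.ParseText("class C { void M() { } }");
var compilation = CSharpCompilation.Create("Sample")
    .AddSyntaxTrees(tree)
    .AddReferences(MetadataReference.CreateFromFile(typeof(object).Assembly.Location));

var analyzers = ImmutableArray.Create<DiagnosticAnalyzer>(new MyAnalyzer());
var withAnalyzers = compilation.WithAnalyzers(analyzers);

// Time one full pass of the analyzer over the compilation.
var stopwatch = Stopwatch.StartNew();
var diagnostics = await withAnalyzers.GetAnalyzerDiagnosticsAsync();
stopwatch.Stop();
Console.WriteLine($"{diagnostics.Length} diagnostics in {stopwatch.ElapsedMilliseconds} ms");
```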
@sharwell
Member

I've been attempting to get this working, but all of my standard approaches to controlled benchmarking are producing unreliable results. I'm leaning towards running a large number of passes using the same analyzer, throwing out a fixed number of outliers (high and low), and averaging the results. However, the number of passes required to get the confidence interval small enough for meaningful results is large, so it takes nearly 2 hours to calculate the values for just our own relatively small solution. I'm concerned that we would additionally need to test other projects (#16) before we're sure that the numbers accurately reflect the expected real-world performance.
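
A rough sketch of that measurement loop follows; the pass count, the number of discarded outliers, and the `runSinglePassAsync` delegate are placeholders rather than settled values.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

static class AnalyzerBenchmark
{
    // Runs the same analysis pass many times, drops a fixed number of the
    // fastest and slowest runs, and averages the remainder (a trimmed mean).
    public static async Task<double> MeasureAverageMillisecondsAsync(
        Func<Task> runSinglePassAsync, int passes = 100, int outliersPerSide = 5)
    {
        var samples = new List<double>(passes);
        for (var i = 0; i < passes; i++)
        {
            var stopwatch = Stopwatch.StartNew();
            await runSinglePassAsync();
            stopwatch.Stop();
            samples.Add(stopwatch.Elapsed.TotalMilliseconds);
        }

        // Discard the extremes on both ends, then average the middle samples.
        return samples
            .OrderBy(ms => ms)
            .Skip(outliersPerSide)
            .Take(passes - 2 * outliersPerSide)
            .Average();
    }
}
```

Combined with the single-pass timing sketch earlier in the thread, `runSinglePassAsync` could simply be `() => withAnalyzers.GetAnalyzerDiagnosticsAsync()`.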

@giggio
Author

giggio commented Apr 13, 2016

This continues to be a problem. We just received an issue asking us to look into it, and we don't even know where to start: code-cracker/code-cracker#766

@Meir017

Meir017 commented Jun 21, 2018

Any update on this?
