
Performance/stress tests #808

Closed · dkrupp opened this issue Aug 15, 2017 · 4 comments
dkrupp (Member) commented Aug 15, 2017

A performance tester should simulate the load of multiple CI loop jobs.

Store job type:
- Store a run (say 1000 reports).

Update job type:
- Perform a local compare for a run.
- Update a run (say ~50 reports fixed, ~50 new reports).
- List the bugs in a run without a filter.
- List the bugs in a run with a filter.

Delete job type:
- Delete the run.

The following should be configurable:

- The number of threads.
- The frequency of execution of the job types on a single thread.

The threads should work on different runs, but on the same product.

The script should:
- Measure the time needed to execute one job.
- Watch for timeouts and connection resets.
- Take the CodeChecker server URL as an option.

Use a run with 1000 reports for testing.
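
A minimal sketch of how such a harness could look, assuming Python. The job bodies (`run_store_job` and friends) are hypothetical placeholders; a real script would invoke the CodeChecker client against the given server URL. The threading, frequency, and timing skeleton is the point here:

```python
import argparse
import threading
import time


def run_store_job(url, run_name):
    # Placeholder: store ~1000 reports into `run_name` on the server at `url`.
    time.sleep(0.1)


def run_update_job(url, run_name):
    # Placeholder: local compare, update the run, list bugs with/without filters.
    time.sleep(0.1)


def run_delete_job(url, run_name):
    # Placeholder: delete the run.
    time.sleep(0.1)


JOBS = [run_store_job, run_update_job, run_delete_job]


def worker(thread_id, url, frequency, iterations):
    # Each thread works on its own run, so threads never collide on run names
    # but still hit the same product.
    run_name = "stress_run_%d" % thread_id
    for _ in range(iterations):
        for job in JOBS:
            start = time.time()
            try:
                job(url, run_name)
            except (IOError, OSError) as ex:
                # Timeouts and connection resets surface here.
                print("[thread %d] %s FAILED: %s"
                      % (thread_id, job.__name__, ex))
            else:
                print("[thread %d] %s took %.3fs"
                      % (thread_id, job.__name__, time.time() - start))
            time.sleep(1.0 / frequency)


def main():
    parser = argparse.ArgumentParser(description="CodeChecker stress tester")
    parser.add_argument("--url", required=True,
                        help="CodeChecker server URL (mandatory)")
    parser.add_argument("--threads", type=int, default=4)
    parser.add_argument("--frequency", type=float, default=1.0,
                        help="Jobs per second on a single thread")
    parser.add_argument("--iterations", type=int, default=10)
    args = parser.parse_args()

    threads = [threading.Thread(target=worker,
                                args=(i, args.url, args.frequency,
                                      args.iterations))
               for i in range(args.threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()


if __name__ == "__main__":
    main()
```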

@dkrupp dkrupp added this to the 6.0 pre3 milestone Aug 15, 2017
dkrupp (Member, Author) commented Aug 15, 2017

Hi, I thought of a performance tester script with the following characteristics and configuration. What do you think? Would that load represent a typical CI user?

The purpose of this test case is to let us dimension our server and discover any possible concurrency, timing, or performance issues.

@dkrupp dkrupp changed the title Implement performance tests Performance/stress tests Aug 15, 2017
whisperity (Contributor) commented:
There are two important cases that this script apparently won't cover. The first is networking overhead: I think the script should be written so that it connects to some remote server, not specifically to localhost.

The other is caching and block optimisation. Perhaps it would be nice to have an (optional?) varying number of changed reports with varying bug path lengths. I fear that after a few iterations the database server or the operating system will cache certain pages more than others because of the equal size of the data being written, which would ultimately give us better results than real operation would.


What would be the report generation principle? Will we use precompiled results or generate some valid plists at runtime?
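
If generated at runtime, a sketch of synthetic plist generation with varying bug path lengths could look like the following. The field set is a minimal approximation of the clang analyzer plist schema that CodeChecker parses, not a verified one; a real generator should be checked against files produced by an actual analysis:

```python
import plistlib
import random


def make_diagnostic(file_index, line):
    # Give each bug a path of random length so report sizes are not uniform
    # (addressing the caching concern above).
    path_len = random.randint(1, 20)
    path = [{"kind": "event",
             "depth": 0,
             "location": {"line": line + i, "col": 1, "file": file_index},
             "message": "synthetic step %d" % i}
            for i in range(path_len)]
    return {"description": "synthetic bug at line %d" % line,
            "category": "synthetic",
            "type": "Synthetic error",
            "check_name": "synthetic.Checker",
            "issue_hash_content_of_line_in_context":
                str(random.getrandbits(64)),
            "location": {"line": line + path_len - 1, "col": 1,
                         "file": file_index},
            "path": path}


def write_plist(out_path, source_file, report_count):
    data = {"files": [source_file],
            "diagnostics": [make_diagnostic(0, 10 * i)
                            for i in range(report_count)]}
    with open(out_path, "wb") as f:
        plistlib.dump(data, f)


write_plist("synthetic_reports.plist", "/tmp/synthetic.c", 1000)
```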

dkrupp (Member, Author) commented Aug 16, 2017

Of course, the test should be performed over a "remote" network connection if possible. I added the server connection parameters as a mandatory option to the script for clarity.

Caching optimisation:
The client should not do a "re-analysis" between storage updates, because then we could not generate high enough traffic. But of course this is also something we must consider.
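
One way to avoid re-analysis between updates, sketched below under the assumption that the update job mutates a previously generated plist in place: dropping some diagnostics simulates fixed reports, and cloning others with fresh hashes simulates new ones. The hash key name matches the one used in the generation sketch above:

```python
import plistlib
import random


def mutate_plist(path, fixed=50, new=50):
    # Simulate one CI iteration without re-analysis: drop `fixed` diagnostics
    # and append `new` ones derived from existing entries with fresh hashes.
    with open(path, "rb") as f:
        data = plistlib.load(f)
    diags = data["diagnostics"]
    random.shuffle(diags)
    del diags[:fixed]
    for template in list(diags[:new]):
        clone = dict(template)
        clone["issue_hash_content_of_line_in_context"] = \
            str(random.getrandbits(64))
        diags.append(clone)
    with open(path, "wb") as f:
        plistlib.dump(data, f)
```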

gyorb (Contributor) commented Aug 16, 2017

I would increase the report count to at least 5000 for each run, and test multiple types of filters (severity, filename, ...) or some combinations of them.
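
For instance, the update job could cycle through a fixed set of filter combinations when listing bugs. The keys and values below are illustrative only; they would need to map onto whatever filter options the CodeChecker client actually exposes:

```python
# Illustrative filter combinations for the "list bugs with filter" step.
FILTER_COMBOS = [
    {},                                          # no filter (baseline)
    {"severity": "HIGH"},
    {"filepath": "*.cpp"},
    {"severity": "LOW", "filepath": "*.h"},      # combined filters
    {"checker_name": "core.NullDereference", "severity": "HIGH"},
]
```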

@gyorb gyorb assigned bruntib and unassigned whisperity, csordasmarton, dkrupp and gyorb Oct 2, 2017
@gyorb gyorb modified the milestones: 6.0 pre3, release 6.0.2 Oct 2, 2017
@gyorb gyorb modified the milestones: release 6.1, release 6.2 Oct 12, 2017
@gyorb gyorb modified the milestones: release 6.2, release 6.3 Nov 10, 2017