Performance/stress tests #808
Hi, I thought of a performance tester script with the following characteristics and configuration. What do you think? Would that load represent a typical CI user? The purpose of this test case is to be able to dimension our server and discover any possible concurrency, timing, or performance issues.
There are two important cases that this script apparently won't touch. The first is networking overhead: I think the script should be written so that it connects to some server, not specifically to localhost. The other is caching and block optimisation. Perhaps it would be nice to have an (optional?) varying number of changed reports with varying bug path lengths. I fear that after a few iterations the database server or the operating system will cache certain pages more than others, due to the equal size of the data being written, which would ultimately give us better results than in real operation. What would be the report generation principle? Will we use precompiled results, or generate valid plists at runtime?
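Generating valid plists at runtime would also let the bug path length vary per report, which addresses the caching concern. A minimal sketch, assuming a hypothetical stripped-down Clang-analyzer-style plist layout (real CodeChecker input plists carry more keys):

```python
import io
import plistlib
import random

def make_report(path_len):
    """Build one hypothetical analyzer report with a bug path of path_len steps."""
    loc = {"line": random.randint(1, 500), "col": 1, "file": 0}
    path = [{"kind": "event", "location": loc,
             "message": "step %d" % i} for i in range(path_len)]
    return {
        "files": ["src/test_%d.cpp" % random.randint(0, 9)],
        "diagnostics": [{
            "description": "Dereference of null pointer",
            "category": "Logic error",
            "type": "Null dereference",
            "location": loc,
            "path": path,
        }],
    }

def dump_plist(report):
    """Serialise the report as an XML plist (bytes)."""
    buf = io.BytesIO()
    plistlib.dump(report, buf)
    return buf.getvalue()

# Varying path lengths per report should defeat page-level caching effects.
payloads = [dump_plist(make_report(random.randint(1, 30))) for _ in range(5)]
```

Runs generated this way would differ in both row count and row size, so the database cannot settle into one favourable page layout.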
Of course, the test should be performed over a "remote" network connection if possible. I added the server connection parameters as a mandatory option to the script for clarity. Regarding caching optimisation:
I would increase the report count to at least 5000 for each run and test multiple types of filters (severity, filename, ...) or some combinations of them.
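Enumerating the filter variants to test could be done by taking the cross product of the filter dimensions; a sketch with hypothetical severity and filename values (the real server supports more filter fields):

```python
import itertools

# Hypothetical filter values; the actual set would come from configuration.
SEVERITIES = ["HIGH", "MEDIUM", "LOW"]
FILENAME_PATTERNS = ["*.cpp", "lib/*.c"]

def filter_combinations():
    """Yield the unfiltered query, each single filter, and each pair."""
    yield {}  # no filter at all
    for sev in SEVERITIES:
        yield {"severity": sev}
    for pat in FILENAME_PATTERNS:
        yield {"filename": pat}
    for sev, pat in itertools.product(SEVERITIES, FILENAME_PATTERNS):
        yield {"severity": sev, "filename": pat}

combos = list(filter_combinations())  # 1 + 3 + 2 + 6 = 12 query variants
```

Each query variant would then be issued against the same run, so the relative cost of filtered versus unfiltered listing can be compared.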
A performance tester should simulate the load of multiple CI loop jobs.

Store job type:
- Store a run (say 1000 reports)

Update job type:
- Perform a local compare for a run
- Update a run (say ~50 reports fixed, ~50 new reports)
- List the bugs in a run without a filter
- List the bugs in a run with a filter

Delete job type:
- Delete the run
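The job mix above could be driven by worker threads, each bound to its own run. A skeleton under the assumption that the actual store/update/delete calls are filled in later (the placeholder job bodies here only record what they would do):

```python
import random
import threading

RESULTS = []
RESULTS_LOCK = threading.Lock()

def record(run_name, job):
    with RESULTS_LOCK:
        RESULTS.append((run_name, job))

def store_job(run_name):
    # Placeholder: would store a run of ~1000 reports on the server.
    record(run_name, "store")

def update_job(run_name):
    # Placeholder: local compare, partial re-store, filtered/unfiltered listing.
    record(run_name, "update")

def delete_job(run_name):
    # Placeholder: would delete the run from the server.
    record(run_name, "delete")

JOBS = [store_job, update_job, delete_job]

def worker(worker_id, iterations):
    # Each thread uses its own run name, so threads never collide on a run,
    # while all runs belong to the same product.
    run_name = "perf_run_%d" % worker_id
    for _ in range(iterations):
        random.choice(JOBS)(run_name)

threads = [threading.Thread(target=worker, args=(i, 10)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Giving every worker a private run keeps the threads from serialising on run-level locks, while still stressing shared product-level state on the server.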
The following should be configurable:
- CodeChecker server URL

The threads should work on different runs, but on the same product.

- Measure the time needed for the execution of one job
- Look for timeouts and connection resets

Use a run with 1000 reports for testing.
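Per-job timing and error capture can be wrapped around any job function; a sketch where timeouts and connection resets are assumed to surface as `OSError` subclasses (`socket.timeout`, `ConnectionResetError`):

```python
import time

def timed(job_fn, *args):
    """Execute one job; return (elapsed_seconds, exception_or_None)."""
    start = time.monotonic()
    try:
        job_fn(*args)
        return time.monotonic() - start, None
    except OSError as exc:  # covers TimeoutError and ConnectionResetError
        return time.monotonic() - start, exc

elapsed, err = timed(time.sleep, 0.01)
```

Collecting these tuples per job type would give both the latency distribution and the failure rate for each phase of the CI loop.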