benchmark: use apache bench instead of wrk #6949
Conversation
Switches all HTTP benchmarks to ApacheBench (`ab`) instead of `wrk`. This allows the benchmarks to run on Windows.
/cc @nodejs/benchmarking
Just to add a quick note: I wrote https://github.com/mcollina/autocannon, which can generate the same load as wrk (on my box), but it is written in node (and C++) and runs on windows. Please have a look; it can be adapted and improved if needed. I'm 👎 on using ab, because I got more consistent results using wrk.
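For context, a minimal sketch of driving load with autocannon programmatically; the target URL, connection count, and duration are illustrative, not values from this thread:

```js
'use strict';
// Illustrative autocannon run: keeps a fixed number of connections busy,
// like wrk, and reports aggregate throughput and latency.
const autocannon = require('autocannon');

autocannon({
  url: 'http://localhost:8000', // placeholder target
  connections: 100,             // concurrent keep-alive connections
  duration: 10                  // seconds
}, (err, result) => {
  if (err) throw err;
  console.log(`requests/sec (avg): ${result.requests.average}`);
  console.log(`latency ms (avg):   ${result.latency.average}`);
});
```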
IIRC
@Fishrock123 the difference is that ab reasons in requests, and it fires N requests each second. wrk uses N connections, and it fires a new request on a connection once the previous one has finished.
autocannon looks like a very good option. It would differ slightly from the current approach in that we would be benchmarking both the Node http server and the client layer. If that's a concern, perhaps we can use a fixed version of Node for the client, and even script its download.
@orangemocha autocannon does not use the node HTTP stack (it was too slow); it just uses the JS HTTP parser. It can probably be improved.
@mcollina, understood. Regardless of which features of Node autocannon uses, if we want to rely on these benchmarks to compare http server performance in Node between version X and Y, we should avoid using versions X and Y as the traffic generation tool in each test, and rather stick to a fixed version Z. Otherwise, other performance differences in the traffic generation tool could skew the results. As long as we use a fixed version of Node for the client, I think it won't matter that it's written in Node. Does that sound like a good approach to you?
@orangemocha I think you are being overly cautious, but yes, I agree. We can probably download a node binary and use that. Do you know of a way to bundle up a node script with the binary? I do not think you will get different results from one version or another. Currently autocannon "saturates" a node server (100% CPU). As long as this happens, you will see identical results, independent of the node version. Try autocannon against a "hello world" server running on node v4, and run autocannon from both v4 and v6; you should see almost identical results (if there is nothing else running on your box).
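A minimal sketch of the kind of "hello world" target server described above (the port is arbitrary):

```js
'use strict';
// Minimal "hello world" target: run it under the Node version being measured,
// then drive it from a separate process so the server can hit 100% CPU.
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello world\n');
}).listen(8000);
```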
The script can live in the source tree, as part of the benchmark. I think all we need to download is the node executable. Autocannon could be either committed in the source tree or npm-installed before the benchmark is run.
👍 let me know if you need anything from autocannon, or any help here in implementing this.
BTW, autocannon can also test HTTP pipelining; I think @nodejs/http will be happy. We can probably update autocannon to use node's internal HTTP parser, which I presume should be faster. I used the JS version to ease compatibility, and because I couldn't find an API doc for the core one.
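A sketch of how pipelining could be exercised, assuming autocannon's `pipelining` option; the numbers are illustrative:

```js
'use strict';
// Illustrative only: keep several requests in flight per connection.
const autocannon = require('autocannon');

autocannon({
  url: 'http://localhost:8000',
  connections: 10,
  pipelining: 10, // pipelined requests per connection
  duration: 10
}, (err, result) => {
  if (err) throw err;
  console.log(`requests/sec (avg): ${result.requests.average}`);
});
```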
I'm fine with using autocannon.
Continued in #7180 with autocannon instead of apache bench. |
Checklist
Affected core subsystem(s)
benchmarks
Description of change
Switches all HTTP benchmarks to ApacheBench (`ab`) instead of `wrk`. This allows the benchmarks to run on Windows.

ApacheBench gives different results than `wrk`, but they are still consistent: for different parameters of the same benchmark, both tools show similar relative performance.
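As a rough illustration (not the actual code in this change), a benchmark runner could spawn `ab` and parse its throughput line along these lines; the URL, request count, and concurrency are placeholders:

```js
'use strict';
// Sketch only: spawn ApacheBench and extract requests/sec from its output.
const { execFile } = require('child_process');

function runAb(url, cb) {
  execFile('ab', ['-k', '-c', '100', '-n', '10000', url], (err, stdout) => {
    if (err) return cb(err);
    // ab prints a line like: "Requests per second:    1234.56 [#/sec] (mean)"
    const match = /Requests per second:\s+([0-9.]+)/.exec(stdout);
    cb(null, match ? parseFloat(match[1]) : NaN);
  });
}

runAb('http://127.0.0.1:8000/', (err, rps) => {
  if (err) throw err;
  console.log(`requests/sec: ${rps}`);
});
```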