
benchmark: use apache bench instead of wrk #6949

Closed

Conversation

bzoz
Contributor

@bzoz bzoz commented May 24, 2016

Checklist
  • documentation is changed or added
  • the commit message follows commit guidelines
Affected core subsystem(s)

benchmarks

Description of change

Switches all HTTP benchmarks to Apache Benchmark instead of wrk. This allows for running benchmarks under Windows.

Apache Bench gives different absolute numbers than wrk, but the results are still consistent: across different parameters of the same benchmark, both tools show similar relative performance.

@nodejs-github-bot nodejs-github-bot added the benchmark Issues and PRs related to the benchmark subsystem. label May 24, 2016
@evanlucas
Contributor

/cc @nodejs/benchmarking

@mcollina
Member

Just to add a quick note: I wrote https://github.com/mcollina/autocannon, which can generate the same load as wrk (on my box), but it is written in Node (and C++) and runs on Windows. Please have a look; it can be adapted and improved if needed.

I'm 👎 on using ab, because I got more consistent results using wrk.

@Fishrock123
Contributor

IIRC wrk is better at doing heavy load?

@mcollina
Member

mcollina commented May 24, 2016

@Fishrock123 the difference is that ab reasons in requests: it fires N requests each second. wrk uses N connections, and fires a new request on a connection once the previous one has finished.
Basically, you can tune wrk to generate enough load without making your process explode, and measure latency even on slow processes.

@orangemocha
Contributor

autocannon looks like a very good option. It would slightly differ from the current approach in that we would be benchmarking both the Node http server and the client layer. If that's a concern perhaps we can use a fixed version of Node for the client, and even script its download.

@mcollina
Member

@orangemocha autocannon does not use the node HTTP stack (it was too slow); it just uses require('net') and http-parser-js. See https://github.com/mcollina/autocannon/blob/master/lib/myhttp.js.

It can probably be improved.

@orangemocha
Contributor

@mcollina , understood. Regardless of which features of Node autocannon uses, if we want to rely on these benchmarks to compare http server performance in Node between version X and Y, we should avoid using versions X and Y as the traffic generation tool in each test, and rather stick to a fixed version Z. Otherwise, other performance differences in the traffic generation tool could skew results. As long as we use a fixed version of Node for the client, I think it won't matter that it's written in Node. Does that sound like a good approach to you?

@mcollina
Member

@orangemocha I think you are being overly cautious, but yes, I agree. We can probably download a node binary and use that. Do you know of a way to bundle up a node script with the binary?

I do not think you will get different results from one version to another. Currently autocannon "saturates" a node server (100% CPU). As long as this happens, you will see identical results independently of the node version. Try autocannon against a "hello world" server running on node v4, then run autocannon itself on v4 and on v6: you should see almost identical results (if there is nothing else running on your box).

@orangemocha
Contributor

Do you know of a way to bundle up a node script with the binary?

The script can be part of the source tree, as part of the benchmark. I think all we need to download is the node executable. Autocannon could be either committed to the source tree or npm-installed before the benchmark is run.

@mcollina
Member

👍 let me know if you need anything from autocannon, or any help here in implementing this.

@mcollina
Member

BTW, autocannon can also test HTTP pipelining, I think @nodejs/http will be happy.

We can probably update autocannon to use node's internal HTTP parser, which I presume would be faster. I used the JS version to ease compatibility, and because I couldn't find API docs for the core one.

@indutny
Member

indutny commented May 26, 2016

I'm fine with using ab on windows, but using it everywhere - 👎

@bzoz
Contributor Author

bzoz commented Jun 6, 2016

Continued in #7180 with autocannon instead of apache bench.

@bzoz bzoz closed this Jun 6, 2016