
benchmark: allow benchmarks to specify flags #10448

Closed
wants to merge 2 commits

Conversation

joyeecheung
Member

@joyeecheung joyeecheung commented Dec 25, 2016

Checklist
  • make -j4 test (UNIX), or vcbuild test (Windows) passes
  • tests and/or benchmarks are included
  • commit message follows commit guidelines
Affected core subsystem(s)

benchmark

Description of change
This PR makes it possible for the benchmarks to indicate what execution flags should be passed in when they are run, similar to the way tests do this.

EDIT: the intent of this PR has changed (#10448 (comment))

  • Give createBenchmark and the Benchmark constructor a third argument for specifying the command line flags that this benchmark should be run with.
  • Improve the documentation regarding how the benchmarks are run and what kind of code should go where.
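
A minimal sketch of what this could look like (hypothetical benchmark file; the exact shape of the flags argument is discussed in the review below):

```js
'use strict';
const common = require('../common.js');

// Hypothetical example: the extra argument lists the command line flags that
// this benchmark should be run with (here a V8 heap-tuning flag, one of the
// kinds of flags discussed in this thread).
const bench = common.createBenchmark(main, {
  n: [1e6]
}, ['--max_old_space_size=1024']);

function main(conf) {
  bench.start();
  for (let i = 0; i < conf.n; i++)
    Buffer.alloc(64);
  bench.end(conf.n);
}
```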

Background: #10439

@nodejs-github-bot added the benchmark (Issues and PRs related to the benchmark subsystem) label Dec 25, 2016
@mscdex
Contributor

mscdex commented Dec 25, 2016

IMHO just explicitly adding --expose-internals like common.js does is enough.

@joyeecheung
Member Author

joyeecheung commented Dec 25, 2016

@mscdex There might be other flags needed in the future (e.g., from what I've grepped from the tests: GC flags, harmony flags, warning flags, --zero-fill-buffers, --max_old_space_size). They are not necessarily needed to make the code work, but could be used to prevent deoptimization or to tune the benchmarks.

@Trott
Member

Trott commented Dec 25, 2016

/cc @nodejs/benchmarking @AndreasMadsen

(Aside: Do we need to update @nodejs/benchmarking membership? Kind of surprised @AndreasMadsen isn't in there, for starters.)

@addaleax
Member

> (Aside: Do we need to update @nodejs/benchmarking membership? Kind of surprised @AndreasMadsen isn't in there, for starters.)

Pinging @nodejs/benchmarking is pretty weird anyway, even @mscdex isn’t on it.

@AndreasMadsen
Member

AndreasMadsen commented Dec 25, 2016

It is much better to implement these things in common.js (see: benchmark/common.js#L141). The reason is that we allow many different entry points to the benchmarks:

node benchmark/run.js
node benchmark/scatter.js
node benchmark/compare.js
node benchmark/util/normalize-encoding.js

so just adding this to compare.js doesn't fully solve the problem, but common.js is used in all cases.

If I understand correctly, the problem is the require('internal/') call; the solution is to just put the require inside the main function.
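
For illustration, that workaround looks roughly like this (hypothetical benchmark; it relies on common.js already forking the child with --expose_internals, so nothing else needs to change):

```js
'use strict';
const common = require('../common.js');

const bench = common.createBenchmark(main, {
  n: [1e5]
});

function main(conf) {
  // The require lives inside main: main only runs in the forked child,
  // which already has --expose_internals on its execArgv.
  const URL = require('internal/url').URL;

  bench.start();
  for (let i = 0; i < conf.n; i++)
    new URL('http://example.org/');
  bench.end(conf.n);
}
```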

@AndreasMadsen
Member

AndreasMadsen commented Dec 25, 2016

@Trott the benchmarking working group has very little to do with benchmark/; they run their own benchmark suite, which focuses on monitoring performance between node releases. That is why I'm not in the WG. The separation is a little inconvenient because it makes it very difficult to get momentum behind benchmark/-related things, such as nodejs/benchmarking#58.

@joyeecheung
Member Author

joyeecheung commented Dec 26, 2016

@AndreasMadsen I looked into the code a bit further, and if I understand correctly, compare.js/scatter.js/run.js pass in the flags when the benchmarks are compiled, while common.js passes them in when the benchmarks are run. That means if we want the code compiled under a certain flag, we have to either put the code inside main, or pass the flags to compare.js/scatter.js/run.js. Simply moving the logic to common.js still doesn't make it possible to control the flags at compile time (hence require('internal/*') would still throw outside main).

Maybe the hints can be provided as:

```js
// This gets passed in both phases
// Flags: --expose-internals

// This gets passed when main is run, implemented in common.js
// RuntimeFlags: --max_old_space_size=1024

// This gets passed when main is compiled, implemented in compare.js and friends
// CompileTimeFlags: --harmony
```

This also makes it possible to pass flags other than the hard-coded --expose-internals at runtime, should we want to use them in the future (e.g. --allow_natives_syntax, which can also be passed via v8.setFlagsFromString, but not all flags are V8 flags or can be suddenly turned on without a major impact on the program).

If you think this is unnecessary, please let me know. But even if we don't implement this feature, I think the benchmark doc should at least mention this caveat (put require('internal/*') inside main).
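
Purely as an illustration of how such comment hints could be collected (a hypothetical helper; nothing like this exists in common.js, and the idea is ultimately dropped later in this thread):

```js
'use strict';
const fs = require('fs');

// Hypothetical: scan a benchmark file for "// Flags:", "// RuntimeFlags:" and
// "// CompileTimeFlags:" comments, similar in spirit to how the test runner
// reads "// Flags:" from test files.
function parseFlagHints(filename) {
  const hints = { flags: [], runtimeFlags: [], compileTimeFlags: [] };
  const lines = fs.readFileSync(filename, 'utf8').split('\n');
  for (const line of lines) {
    const match =
      /^\/\/\s*(Flags|RuntimeFlags|CompileTimeFlags):\s*(.+)$/.exec(line.trim());
    if (!match)
      continue;
    const list = match[2].trim().split(/\s+/);
    if (match[1] === 'Flags')
      hints.flags = hints.flags.concat(list);
    else if (match[1] === 'RuntimeFlags')
      hints.runtimeFlags = hints.runtimeFlags.concat(list);
    else
      hints.compileTimeFlags = hints.compileTimeFlags.concat(list);
  }
  return hints;
}
```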

@AndreasMadsen
Member

@joyeecheung I'm not sure what the difference between compiling and running is in a JIT context.

My guess is that you think --expose_internals is somehow added at runtime; this is not the case. A benchmark script actually forks itself, which happens in benchmark/common.js#L141. This ensures that each benchmark configuration gets a fresh V8 instance.

Whatever solution we come up with, it should be possible to run benchmarks using just node benchmark/util/normalize-encoding.js. Otherwise it becomes complicated to profile them.
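
The self-fork described above boils down to something like this (a simplified sketch of the mechanism in benchmark/common.js, not the verbatim source; the helper name is made up):

```js
'use strict';
const child_process = require('child_process');

// Sketch: the benchmark re-forks its own entry file once per configuration,
// so every run gets a fresh V8 instance, and the flags are placed on the
// forked child's execArgv rather than being "added at runtime".
function forkBenchmarkChild(childArgs) {
  const child = child_process.fork(require.main.filename, childArgs, {
    execArgv: ['--expose_internals'].concat(process.execArgv)
  });
  // The parent collects the timing results the child reports over IPC.
  child.on('message', (msg) => console.log(msg));
  return child;
}
```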

@joyeecheung
Member Author

joyeecheung commented Dec 27, 2016

@AndreasMadsen By running and compiling I was referring to the running/compiling of the benchmark script. I looked into the code, and it turns out I somehow had the mistaken impression that the script would get compiled once in https://github.com/nodejs/node/blob/master/benchmark/common.js#L26 and run once in https://github.com/nodejs/node/blob/master/benchmark/common.js#L24, but in fact it's compiled/run twice. So the code compiled by V8 in the first pass has nothing to do with the second pass, and in the first pass main is just dead code. My bad :P. Putting something like this on top of a benchmark

```js
try {
  console.log(process.execArgv, process.argv.slice(2));
  const URL = require('internal/url').URL;
} catch (e) {
  console.log(e);
}
```

would print:

```
// first pass, not configured
[] []
{ Error: Cannot find module 'internal/url'
    at Function.Module._resolveFilename (module.js:469:15)
    at Function.Module._load (module.js:417:25)
    at Module.require (module.js:497:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/Users/joyee/projects/node/benchmark/url/url-format.js:7:38)
    at Module._compile (module.js:570:32)
    at Object.Module._extensions..js (module.js:579:10)
    at Module.load (module.js:487:32)
    at tryModuleLoad (module.js:446:12)
    at Function.Module._load (module.js:438:3) code: 'MODULE_NOT_FOUND' }
// main is dead code in this pass, quickly exits

// second pass, got configured in common.js
[ '--expose_internals' ] [ 'n=25000000', 'type=one' ]
// main is run here
```

This is not a problem if the code outside main doesn't have side effects or require special flags to run (like requiring an internal module), but IMO the doc should mention that any code beyond declarations should be put inside main.

> Whatever solution we come up with, it should be possible to run benchmarks using just node benchmark/util/normalize-encoding.js. Otherwise it becomes complicated to profile them.

I agree. So if we still keep the // Flags: stuff, it should be implemented in common.js, as you said earlier. And the docs can be updated to demonstrate what kind of code goes where.

@AndreasMadsen
Member

> This is not a problem if the code outside main doesn't have side effects or require special flags to run (like requiring an internal module), but IMO the doc should mention that any code beyond declarations should be put inside main.

Sounds like a good idea; I think benchmark/README.md#creating-a-benchmark would be the ideal place.

@joyeecheung
Member Author

@AndreasMadsen So I revisited this PR today and realized that if we want to make sure:

> Whatever solution we come up with, it should be possible to run benchmarks using just node benchmark/util/normalize-encoding.js. Otherwise it becomes complicated to profile them.

Then the best way for benchmarks to specify the flags they wish to be run with is to pass them into createBenchmark. I've updated this PR in that direction. Let me know what you think :)

@joyeecheung changed the title from "benchmark: imply flags with comments in benchmarks" to "benchmark: allow benchmarks to specify flags" Dec 29, 2016
Member

@AndreasMadsen AndreasMadsen left a comment


Yes, adding it as an option to createBenchmark makes a lot more sense. I've suggested a few changes:

```diff
@@ -138,7 +139,7 @@ Benchmark.prototype._run = function() {
   const child = child_process.fork(require.main.filename, childArgs, {
     env: childEnv,
-    execArgv: ['--expose_internals'].concat(process.execArgv)
+    execArgv: ['--expose_internals', ...self.flags, ...process.execArgv]
```
Member

@AndreasMadsen AndreasMadsen Dec 29, 2016


Unfortunately common.js needs to run on old node versions.

Member Author


Thanks for the heads up :)

```diff
@@ -3,17 +3,18 @@
 const child_process = require('child_process');
 const http_benchmarkers = require('./_http-benchmarkers.js');

-exports.createBenchmark = function(fn, options) {
-  return new Benchmark(fn, options);
+exports.createBenchmark = function(fn, options, flags) {
```
Member


I think it would be better to make it exports.createBenchmark = function(fn, config, options), where options is an object e.g. { flags: [] }. I don't know why config is called options here, when it is not really optional.

Member Author


I agree. I also noticed that the docs name the second argument configs, so I will use configs as the local variable name here.
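
For example, the call shape being suggested here would look roughly like this (a sketch of the proposed createBenchmark(fn, configs, options) signature with a hypothetical benchmark body):

```js
'use strict';
const common = require('../common.js');

// Second argument: the benchmark configurations (not optional, hence the
// rename to configs); third argument: an options object carrying the flags.
const bench = common.createBenchmark(main, {
  n: [1e5],
  type: ['one', 'two']
}, {
  flags: ['--expose-internals']
});

function main(conf) {
  // Internal modules are only required inside main, in the flagged child.
  const URL = require('internal/url').URL;
  bench.start();
  for (let i = 0; i < conf.n; i++)
    new URL('http://example.org/');
  bench.end(conf.n);
}
```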

> …run the benchmarks with will be used. In the second pass,
> the `main` function will be run, and the process will be run with:
>
> * `--expose_internals`
Member


I think we should simplify it, such that --expose_internals is added explicitly in whatever tests require it. You can just search for require('internal/.

Member Author


Yeah, I think it's simpler. (And from what I've grepped, there is only one benchmark that uses this :P)

* Give createBenchmark and the Benchmark constructor
  a third argument for specifying the command line flags
  that this benchmark should be run with.
  The benchmarks are no longer run with --expose-internals
  by default, they will need to explicitly pass the flags.
* Rename options to configs in createBenchmark and the Benchmark
  constructor to match the documentation since they are not optional.
* Comment the properties of a Benchmark object
@joyeecheung
Member Author

@AndreasMadsen I've updated the commits, with some more comments added to the properties of Benchmark objects. PTAL and see if there is any misunderstanding about these properties.

```js
'use strict';
const common = require('../common.js');
const SlowBuffer = require('buffer').SlowBuffer;

const bench = common.createBenchmark(main, {
  // If you want to benchmark the internal modules, add …
```
Member


nit: you already wrote this in the options object and in the main function. I think saying it once is fine.

Member Author


Addressed :)

Member

@AndreasMadsen AndreasMadsen left a comment


LGTM, only a minor nit.

@AndreasMadsen AndreasMadsen requested a review from mscdex December 30, 2016 09:30
* Add detailed description of the arguments of `createBenchmark`
* Describe the two passes of running the benchmarks
* Suggest what kind of code should go where in the benchmark example
@joyeecheung
Member Author

Ping @mscdex, I see @AndreasMadsen requested a review.

@mscdex
Contributor

mscdex commented Jan 2, 2017

LGTM

@joyeecheung
Member Author

joyeecheung commented Jan 4, 2017

If there are no more concerns about this I am going to land it. Inclined to land as two separate commits because the commit message could be a tad long if concatenated.

@gibfahn
Member

gibfahn commented Jan 4, 2017

Two approvals and no issues, landing sounds good to me.

I'd just go for one commit; I don't think a long commit log is an issue (if you feel it's going into too much detail you can always remove some of it).

joyeecheung added a commit that referenced this pull request Jan 4, 2017
* Give createBenchmark and the Benchmark constructor
  a third argument for specifying the command line flags
  that this benchmark should be run with.
  The benchmarks are no longer run with --expose-internals
  by default, they will need to explicitly pass the flags.
* Rename options to configs in createBenchmark and the Benchmark
  constructor to match the documentation since they are not optional.
* Comment the properties of a Benchmark object

Also improve the documentation about creating benchmarks

* Add detailed description of the arguments of `createBenchmark`
* Describe the two passes of running the benchmarks
* Suggest what kind of code should go where in the benchmark example

PR-URL: #10448
Reviewed-By: Andreas Madsen <amwebdk@gmail.com>
Reviewed-By: Brian White <mscdex@mscdex.net>
@joyeecheung
Member Author

Landed in 2826e63. Thanks!

@joyeecheung joyeecheung closed this Jan 4, 2017
evanlucas pushed a commit that referenced this pull request Jan 4, 2017
@MylesBorins
Contributor

This does not land cleanly in LTS. Added the dont-land label. Please feel free to manually backport.

@AndreasMadsen
Member

It should not be backported; the entire benchmark suite was rewritten in v7 and declared a major change.

@joyeecheung joyeecheung deleted the benchmark-flags branch February 19, 2017 17:43