
benchmark: remove %OptimizeFunctionOnNextCall #9615

Closed
@bzoz wants to merge 11 commits from bartek-remove-v8-optimize into master

Conversation

@bzoz (Contributor) commented Nov 15, 2016

Checklist
  • make -j8 test (UNIX), or vcbuild test nosign (Windows) passes
  • commit message follows commit guidelines
Affected core subsystem(s)

benchmark

Description of change

This removes all instances of %OptimizeFunctionOnNextCall from the Node.js benchmarks. As it turns out, most benchmarks benefit from its removal and perform better. See the table in the gist.

Some of the benchmarks (buffers/buffer-swap.js, crypto/get-ciphers.js, net/punycode.js, path/parse-*.js, path/relative-*.js, and tls/convertprotocols.js) benefit from a warmup phase. Those are executed twice, with only the second run being measured. For the other benchmarks, warmup does not provide any advantage.

One benchmark that is now slower is crypto/get-ciphers.js when calling getCiphers once. The previous version called this function once to trigger optimization and then again to benchmark its performance. The result of that function is cached, so the benchmark did not provide valid data. This is fixed now.
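For reference, here is a minimal sketch (not the PR's actual diff) of the run-twice warmup pattern described above, written against the benchmark/common.js harness; runBenchmarkBody is a hypothetical placeholder for the code under test:

```js
'use strict';
const common = require('../common.js');

const bench = common.createBenchmark(main, { n: [1e5] });

// Hypothetical placeholder for the code being measured.
function runBenchmarkBody(n) {
  let result = 0;
  for (let i = 0; i < n; i++)
    result += i;
  return result;
}

function main(conf) {
  const n = +conf.n;
  runBenchmarkBody(n);  // warmup run: executed but not timed
  bench.start();
  runBenchmarkBody(n);  // only this second run is measured
  bench.end(n);
}
```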

Fixes: nodejs/node-chakracore#134 ("SyntaxError occurs during benchmark run")
cc @nodejs/benchmarking

@bzoz added the benchmark label (Issues and PRs related to the benchmark subsystem) on Nov 15, 2016
@evanlucas (Contributor) commented

/cc @mscdex since he's done a lot of performance work too

@mscdex (Contributor) commented Nov 15, 2016

I think this should only be removed for functions that are potentially inlineable due to their function size.

@bzoz (Contributor, Author) commented Nov 24, 2016

@mscdex For most of the benchmarks, except the ones mentioned in the PR, removing %OptimizeFunctionOnNextCall improves performance. Shouldn't we at least remove it for those?

Could you also elaborate on those inlineable functions?

@bzoz force-pushed the bartek-remove-v8-optimize branch from bd2fd0b to 5e1d6ba on November 29, 2016 12:51
@bzoz (Contributor, Author) commented Nov 29, 2016

Rebased on master, PTAL

@jasnell (Member) commented Dec 5, 2016

@mscdex ... any further thoughts on this?

@mscdex (Contributor) commented Dec 5, 2016

As far as inlineability goes, I would just check the current function lengths (there are other factors that determine inlineability, of course). If the source of the function being tested is less than 600 characters, remove the forced optimization; otherwise leave it there.
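As an illustration (my sketch, not code from this thread), such a length check could use Function.prototype.toString, on the assumption that the 600 cutoff mirrors V8's max_inlined_source_size default, which is measured against the function's source size:

```js
// Illustrative heuristic only: keep the forced optimization for functions
// whose source is too large for V8 to consider inlining them.
const MAX_INLINED_SOURCE_SIZE = 600; // assumed to mirror V8's default

function keepForcedOptimization(fn) {
  // fn.toString() returns the function's full source text, which is
  // roughly what V8's source-size inlining heuristic looks at.
  return fn.toString().length >= MAX_INLINED_SOURCE_SIZE;
}

function tinyBenchmarkBody() { return 1 + 1; }
console.log(keepForcedOptimization(tinyBenchmarkBody)); // false: small enough to inline
```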

@bzoz (Contributor, Author) commented Dec 9, 2016

Most of the time %OptimizeFunctionOnNextCall is used either for small functions or for functions from the Node.js lib. Is there any benefit to keeping this %Optimize... call?

If we remove it, some of the benchmarks will perform better. Furthermore, this will make the benchmarks engine-agnostic, so they can be used with ChakraCore (see nodejs/node-chakracore#134).
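For context, here is a minimal sketch (illustrative names, not a file from this PR) of the V8-specific pattern being removed. %OptimizeFunctionOnNextCall is V8 natives syntax: it only parses when node is started with --allow-natives-syntax, and engines such as ChakraCore reject the % call with a SyntaxError, which is what broke the node-chakracore benchmark runs:

```js
// Run with: node --allow-natives-syntax bench.js (works on V8 only)
function doWork() {
  /* code under test */
}

doWork();                             // first call: let V8 collect type feedback
%OptimizeFunctionOnNextCall(doWork);  // V8 intrinsic; SyntaxError on other engines
doWork();                             // this call runs the optimized code
```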

@fhinkel (Member) commented Dec 9, 2016

I'm pretty sure that inlineability no longer depends on function length but on bytecode size (at least in the near future).

@fhinkel (Member) commented Dec 9, 2016

I'm in favor of making the benchmarks VM-agnostic. I don't see the point in measuring only the optimized runs instead of all the runs.

@trevnorris (Contributor) commented

@fhinkel wrote: "I'm pretty sure that inlineability no longer depends on function length but on bytecode size"

As of 5.7.201 (and it is the same on a6976211d1a4d12df815fa9ff5341dcbcf1e8036, the latest master): https://github.com/v8/v8/blob/5.7.201/src/flag-definitions.h#L329-L330

@trevnorris (Contributor) commented

@fhinkel wrote: "I don't see the point in measuring only the optimized runs instead of all the runs."

The idea is to remove variability in the benchmarks to better compare between versions. Ideally we could run the benchmarks fully optimized, and also run them while not allowing the code to be optimized at all.

@bzoz (Contributor, Author) commented Dec 27, 2016

@trevnorris So, are you in favor of this change? Or maybe you have other suggestions?

@bzoz (Contributor, Author) commented Jan 5, 2017

Rebased, PTAL

@bzoz (Contributor, Author) commented Jan 18, 2017

Rebased again, PTAL

@bzoz force-pushed the bartek-remove-v8-optimize branch from c1a6689 to d9f44c7 on February 24, 2017 14:20
@bzoz (Contributor, Author) commented Feb 24, 2017

/cc @nodejs/collaborators

@jasnell (Member) commented Feb 24, 2017

@mscdex ... Side note: I've been wondering if it would be worthwhile to introduce linting for functions that are over the inlineable size. Sure, there are quite a few instances of functions in core over that size, and I'm not suggesting that we should refactor those, just that they should require linting exceptions.

@jasnell (Member) commented Feb 24, 2017

@trevnorris ... I've had a todo on my list for a while to see if there is some way we can build in checking both optimized and non-optimized runs for all benchmarks. I definitely think it would be worthwhile.

Likewise, we should likely make it easier to run the benchmarks in a way that compares use of the Ignition toolchain vs. Crankshaft.

@addaleax (Member) commented

@jasnell Won’t that go away with TurboFan anyway (v8/v8@0702ea3)?

@jasnell (Member) commented Feb 24, 2017

Yes, sorry, I wasn't clear... it would be useful to be able to more easily run compare.js with various combinations of command-line args... e.g. something like ./node benchmark/compare.js --old "node" --new "node --ignition" stream ...

(if this already exists and I just missed it, then just ignore me ;-) ...)

@Trott (Member) commented Feb 25, 2017

Refocusing the conversation: I imagine @bzoz wants to know whether anyone feels good enough about this to give it an approval, or else can describe clearly what changes would need to happen to get your approval. Anyone?

@jasnell (Member) left a review comment

While I'm generally ok with the idea of removing the forced optimization, I'd rather it be done more incrementally than this... with either one benchmark or one related group of benchmarks edited per commit.

@evanlucas (Contributor) commented

This isn't landing cleanly on v7.x-staging. Mind submitting a backport PR?

bzoz added a commit to JaneaSystems/node that referenced this pull request Mar 8, 2017
Removes all instances of %OptimizeFunctionOnNextCall from benchmarks

Refs: nodejs#9615
Refs: nodejs#11720
@bzoz
Copy link
Contributor Author

bzoz commented Mar 8, 2017

@evanlucas backport here: #11744

italoacasas pushed a commit to italoacasas/node that referenced this pull request Mar 21, 2017
Removes all instances of %OptimizeFunctionOnNextCall from benchmarks

Refs: nodejs#9615
Refs: nodejs#11720
@jasnell mentioned this pull request on Apr 4, 2017