
benchmark: pre-optimize url.parse() before start #132

Merged: 1 commit into nodejs:v0.12 from bnoordhuis:optimize-on-next-function-call on Dec 19, 2014

Conversation

bnoordhuis (Member)

Force V8 to optimize url.parse() before starting the actual benchmark.
Tries to minimize variance between successive runs caused by the
optimizer kicking in at different points.

It does not seem to have much impact, CPU times are roughly the same
before and afterwards; url.parse() quickly plateaus at a local optimum
where most time is spent in V8 builtins, notably Runtime_StringSplit()
and Object::GetElementWithReceiver() calls originating from
deps/v8/src/uri.js, with no recurring optimize/deoptimize cycles that
I could spot.

Still, I don't see any downsides to pre-optimizing the function being
benchmarked so in it goes.

R=@chrisdickinson. Please ignore the first commit; that's #131 (which is, however, a prerequisite of this PR).
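For readers unfamiliar with the technique: the pre-optimization pattern described here looks roughly like the sketch below. This is illustrative only, not the actual benchmark diff; it assumes node is started with `--allow-natives-syntax` so the `%`-prefixed V8 intrinsics parse, as the branch name `optimize-on-next-function-call` hints.

```js
'use strict';
// Run with: node --allow-natives-syntax bench-url-parse.js
// Illustrative sketch of pre-optimizing the benchmarked function; the real
// change lives in node's benchmark suite and may differ in detail. The
// %OptimizeFunctionOnNextCall pattern is the classic one from the v0.12 era;
// newer V8 versions may also expect %PrepareFunctionForOptimization first.
const url = require('url');

const input = 'http://example.org:8000/foo/bar?baz=quux#frag';

// Warm-up calls feed V8 type feedback so the optimized code is not
// compiled blind (the "type info" concern raised in the comments below).
url.parse(input);
url.parse(input);

// Ask V8 to optimize url.parse() on its next invocation...
%OptimizeFunctionOnNextCall(url.parse);
// ...and trigger the optimizing compile before timing starts.
url.parse(input);

// The actual timed loop then runs against the already-optimized function,
// so the optimizer kicking in mid-run no longer adds variance.
const start = process.hrtime();
for (let i = 0; i < 1e5; i++) url.parse(input);
const [sec, nsec] = process.hrtime(start);
console.log('elapsed: %d ms', sec * 1e3 + nsec / 1e6);
```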

indutny (Member) commented on Dec 10, 2014:

Huh, sorry, I hit cmd+enter earlier than I thought. I thought there was a downside in not feeding enough type info, but I see now that you already incorporated that into this PR.

indutny (Member) commented on Dec 10, 2014:

LGTM


PR-URL: nodejs#132
Reviewed-By: Chris Dickinson <christopher.s.dickinson@gmail.com>
Reviewed-By: Fedor Indutny <fedor@indutny.com>
@bnoordhuis bnoordhuis merged commit 1a63b45 into nodejs:v0.12 Dec 19, 2014
@bnoordhuis bnoordhuis deleted the optimize-on-next-function-call branch December 19, 2014 22:16
kunalspathak referenced this pull request in nodejs/node-chakracore Dec 7, 2015
Issue #132 is fixed in chakracore so adding back the jslint rule of prefer-const.