
Performance Check vs. Node Streams #304
Open · bmeck opened this issue Mar 20, 2015 · 4 comments

bmeck commented Mar 20, 2015

Just want to start an issue to track performance concerns versus the Node/io.js streams implementations. I know that streams are currently a major bottleneck when performing I/O, and I would be curious to see if we can put together a benchmark and at least ensure we can reach performance parity. Having a nice API is one thing, but if we lose performance doing it we will be in trouble.
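As a concrete starting point, here is a rough throughput sketch along the lines of what is being asked for. It is only an illustration: it assumes the reference implementation (or another implementation) supplies a `ReadableStream` constructor, and it uses the controller-style underlying source API, which is an assumption rather than necessarily the API the reference exposed at the time. It pushes the same chunks through a plain Node `Readable` for comparison.

```js
// Rough throughput sketch: push identical chunks through a Node Readable
// and through a caller-supplied ReadableStream, and time the consumption.
const { Readable } = require('stream');

const CHUNK = Buffer.alloc(16 * 1024);
const TOTAL_CHUNKS = 10000;

function benchNode() {
  return new Promise((resolve) => {
    let pushed = 0;
    const start = process.hrtime();
    const r = new Readable({
      read() {
        this.push(pushed++ < TOTAL_CHUNKS ? CHUNK : null);
      }
    });
    r.on('data', () => {});
    r.on('end', () => {
      const [s, ns] = process.hrtime(start);
      resolve(s + ns / 1e9);
    });
  });
}

async function benchWhatwg(ReadableStream) {
  let pushed = 0;
  const start = process.hrtime();
  const rs = new ReadableStream({
    pull(controller) {
      if (pushed++ < TOTAL_CHUNKS) controller.enqueue(CHUNK);
      else controller.close();
    }
  });
  const reader = rs.getReader();
  while (!(await reader.read()).done) { /* consume and discard */ }
  const [s, ns] = process.hrtime(start);
  return s + ns / 1e9;
}

// benchWhatwg(ReadableStream) is left to the caller to wire up against
// whichever implementation is being measured.
benchNode().then((t) => console.log(`node streams: ${t.toFixed(3)}s`));
```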

@trevnorris

Here's a very basic benchmark that I've attempted to make as fair as possible (feedback on how to do better is welcome): https://gist.github.com/trevnorris/4a8b6dd856cf3e1b4268

Part of what you'll see is that V8's native Promises are a significant hit, but so is the implementation itself (which has, of course, gone through Babel).
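As a rough illustration of the promise cost being pointed at here (this is not the gist above, just a sketch): every `reader.read()` allocates and resolves a promise, whereas Node's `'data'` events invoke a plain callback per chunk.

```js
// Illustrative micro-benchmark of per-chunk promise overhead vs. a plain
// callback. Numbers are only indicative; this isolates the allocation and
// microtask cost rather than modeling real stream behavior.
const N = 1e6;

function callbackLoop() {
  let sum = 0;
  const onChunk = (c) => { sum += c; };
  const start = process.hrtime();
  for (let i = 0; i < N; i++) onChunk(1);
  const [s, ns] = process.hrtime(start);
  console.log(`callbacks: ${(s * 1e3 + ns / 1e6).toFixed(1)} ms (sum=${sum})`);
}

async function promiseLoop() {
  let sum = 0;
  const readChunk = () => Promise.resolve(1); // one promise per "chunk", like reader.read()
  const start = process.hrtime();
  for (let i = 0; i < N; i++) sum += await readChunk();
  const [s, ns] = process.hrtime(start);
  console.log(`promises:  ${(s * 1e3 + ns / 1e6).toFixed(1)} ms (sum=${sum})`);
}

callbackLoop();
promiseLoop();
```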

domenic commented Mar 21, 2015 (Member)

Yeah, the purpose of the reference implementation is to transliterate the spec directly into JS, and not to be a usable polyfill. In particular a lot of the stuff we're doing around the queue-with-sizes is probably ridiculously unoptimized, not just at a VM level but also on an algorithmic level.
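For what it's worth, here is a sketch of the kind of algorithmic fix that comment suggests is available for the queue-with-sizes: keep a running total of chunk sizes instead of re-summing the queue, and avoid `Array#shift`'s O(n) cost on dequeue. The class and method names are illustrative, not the spec's abstract operations.

```js
// Sketch of a queue-with-sizes with O(1) total-size queries and amortized
// O(1) dequeue, instead of re-summing and shifting an array each time.
class QueueWithSizes {
  constructor() {
    this._chunks = [];
    this._head = 0;       // index of the next chunk to dequeue
    this._totalSize = 0;  // running total, updated incrementally
  }

  enqueue(value, size) {
    this._chunks.push({ value, size });
    this._totalSize += size;
  }

  dequeue() {
    if (this._head === this._chunks.length) throw new Error('queue is empty');
    const entry = this._chunks[this._head++];
    this._totalSize -= entry.size;
    // Compact occasionally so the backing array doesn't grow without bound.
    if (this._head > 1024 && this._head * 2 >= this._chunks.length) {
      this._chunks = this._chunks.slice(this._head);
      this._head = 0;
    }
    return entry.value;
  }

  get totalSize() {
    return this._totalSize; // O(1) instead of summing on every query
  }
}
```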

Next month I'll likely be working on a V8 extension version of the streams, with a target of maybe having it ship. That one will actually be optimized.

And yeah, the V8 promises being slow thing is going to really suck :-/.

domenic commented Mar 21, 2015 (Member)

One big performance win will be #97, which the API is explicitly being designed to accommodate.

bmeck commented Mar 21, 2015 (Author)

Before shipping an implementation and forcing API stagnation/stabilization, I truly believe we should profile this reference implementation and be absolutely sure the VM implementors have a chance to check the profile of this API, rather than end up in a strange situation where some minute detail prevents good performance.

Also, I truly hope we come up with resource finalization before that ships... I am still reeling from not being able to have a cancellation route for promises.

It just seems premature to talk about performance being optimizable when no benchmarking or profiling has been investigated.

