Do not allow abort to happen when a close is about to happen #634
Conversation
As of #619, writer.abort() does not disturb the stream while an underlying-sink write or close operation is ongoing. However, it was still possible to cause both the underlying sink abort and close to happen, with code like `writer.close(); writer.abort()`. This was because of the asynchronous way in which the stream's state transitions from "closing" to actually having [[inClose]] set to true.

This change addresses that problem by peeking at the queue when abort() is called. If the state is "closing" and the queue contains no writes, we do not perform the underlying abort and instead just let the underlying close happen. This counts as a success, so we immediately return a promise fulfilled with undefined.

Fixes #632.
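A hypothetical illustration of the scenario described above; the sink, the events array, and the variable names are invented for this sketch and are not taken from the spec or its tests:

```js
// Sketch only: shows the intended observable behaviour when abort()
// is called immediately after close() on the same writer.
const events = [];
const ws = new WritableStream({
  close() {
    events.push('close');          // the underlying close still runs
  },
  abort() {
    events.push('abort');          // should never run in this scenario
  }
});

const writer = ws.getWriter();
const closed = writer.close();     // stream enters its closing state
writer.abort().then(() => {
  // With this change, abort() bails out early and fulfills with
  // undefined without ever invoking the sink's abort().
});
closed.then(() => {
  console.log(events);             // expected: ['close'], with no 'abort' entry
});
```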
I think there might be an alternate way of implementing this, and of implementing the non-interruptibility of close/write, which is to leverage the [[queue]] more. In that formulation, abort() would clear the queue and then put an "abort" item in the queue, instead of its current behavior of immediately transitioning to the error state, which causes a number of strange things to happen.
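A very rough sketch of that queue-based formulation; the internal names (`_queue`, the record shape) are invented for illustration and are not the reference implementation:

```js
// Illustrative only: abort() discards the queued writes and enqueues a
// single "abort" record; the normal queue-advance loop (not shown) picks
// it up after the in-flight write or close operation finishes, calls
// sink.abort(reason), and then settles the returned promise.
function abortViaQueue(stream, reason) {
  stream._queue = [];                                    // drop pending writes
  return new Promise((resolve, reject) => {
    stream._queue.push({ type: 'abort', reason, resolve, reject });
  });
}
```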
This looks okay from my brief review, but I'd like to think a bit more about whether the behavioural changes make sense.
I am optimistic about the potential of leveraging the [[queue]] to get the right behaviour. That might be the simpler implementation I was grasping for but couldn't quite reach.
I've thought about the behaviour changes now and I like them, except for the last one I noted.
Sorry for the delay.
  resolveClose();
- return abortPromise.then(() => assert_false(abort_called, 'abort should not be called after close completes'));
+ return abortPromise.then(() => {
+   assert_array_equals(ws.events, ['close'], 'abort should not be called while close is pending');
close is no longer pending here. Perhaps go back to 'after close completes' wording?
+ });
- }, 'writer close() promise should resolve before abort() promise');
+ }, 'writer abort() promise should fulfill before close() promise (since it bails early)');
I'm not keen on this behaviour. Usually we can rely on abort() to tell us whether the stream was cleaned up or not, but in this case we can't.
I don't care much about the ordering, so happy to change that. I'm still pretty unhappy with how this changes the spec though and would prefer to mess with [[queue]]. Maybe it is better to just get the behavior/test changes landed though and fix the internals to be more elegant later. I'll see what I can do.
Yeah, doing the behaviour changes here and then messing with [[queue]] in a separate CL seems like a good way to unblock this.
Sorry for the belated reply. #655 has removed
So, all of the items on ricea's list in #634 (comment) are satisfied.
I think it should be delayed if we want to keep the behavior that Or, we could change
My preference is for a fulfilled abort() to mean "the stream has been cleaned up and no more data will be written", which implies that abort() has to wait for sink.close() to complete. If instead a fulfilled abort() means "sink.abort() was called and didn't reject", then this has to apply in all cases, i.e.
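A small self-contained illustration of that preferred meaning (hypothetical code; the artificially slow close() exists only to make the ordering observable):

```js
const ws = new WritableStream({
  close() {
    // pretend the sink takes a while to flush and close
    return new Promise(resolve => setTimeout(resolve, 100));
  }
});

const writer = ws.getWriter();
writer.close();              // starts the slow sink.close()
writer.abort().then(() => {
  // Under the "cleaned up" reading, this callback must not run until
  // sink.close() has finished, so no more data can reach the sink.
});
```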
Implement the pipeTo() and pipeThrough() methods on the ReadableStream class. They are only enabled when V8 experimental extras are enabled, e.g. when the --enable-experimental-web-platform-features flag is used. The methods are not visible when running without flags. To achieve this, the methods are defined in ReadableStream.js but not added to the global object. ReadableStreamExperimentalPipeTo.js is executed at renderer startup time when the experimental flag is set and simply adds the two methods to the global ReadableStream object.

This initial implementation is intentionally unoptimised. While it does use internal APIs, it won't perform any better than a polyfill using the public APIs would. Additional optimisation work is expected after shipping.

Also remove the failing expectations for many external/wpt/streams tests that use piping. The serviceworker tests still have failing expectations because they use HTTP. When web-platform-tests/wpt@f36afea is rolled from upstream, the failures will go away.

The upstream version of the streams/piping/multiple-propagation.js test 'Piping from an errored readable stream to a closed writable stream' doesn't pass in Chromium. This is a known incompatibility with the reference implementation which will be resolved by whatwg/streams#634. This CL adds a modified version of the test in http/tests/streams/piping which verifies the behaviour of this implementation. I have filed http://crbug.com/684543 to track it.

BUG=668951
Review-Url: https://codereview.chromium.org/2561443004
Cr-Commit-Position: refs/heads/master@{#446358}
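Since the commit notes that the initial implementation performs no better than a polyfill built on the public APIs, a minimal polyfill-style sketch of the core pipe loop may help illustrate what is being wired up. This is not the Blink code; it ignores error propagation and the preventClose/preventAbort/preventCancel options.

```js
// Simplified pipe loop using only public Streams APIs; the real pipeTo()
// also handles errors, cancellation, and the prevent* options.
async function pipeToSketch(readable, writable) {
  const reader = readable.getReader();
  const writer = writable.getWriter();
  try {
    while (true) {
      const { value, done } = await reader.read();
      if (done) break;               // source finished: close the sink below
      await writer.write(value);     // awaiting the write applies backpressure
    }
    await writer.close();
  } finally {
    reader.releaseLock();
    writer.releaseLock();
  }
}
```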
In whatwg#634 (comment), @domenic proposed an alternative way of postponing abort() until after pending sink writes or closes are complete. The abort signal would be added to the queue in place of its current contents, and naturally processed after the current operation completed. This change attempts to implement that. Unfortunately, it is not working.