
Do not allow abort to happen when a close is about to happen #634

Closed

Conversation

domenic (Member) commented Dec 19, 2016

As of #619, writer.abort() does not disturb the stream while an underlying-sink write or close operation is ongoing. However, it was still possible to cause both underlying sink abort and close to happen, with code like writer.close(); writer.abort(). This was because of the asynchronous way in which the stream's state transitions from "closing" to actually having [[inClose]] set to true.

This addresses that problem by peeking at the queue when abort is called. If the state is [[closing]], and the queue contains no writes, we will not perform the underlying abort, and instead just let the underlying close happen. This counts as a success, so we immediately return a promise fulfilled with undefined.
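For concreteness, here is a rough sketch of that check. The internal field and helper names are illustrative, not the spec's abstract-operation text, and it assumes a requested close is represented by a "close" sentinel record in [[queue]].

```js
// Rough sketch of the peek-at-the-queue check (names are illustrative).
function WritableStreamAbort(stream, reason) {
  if (stream._state === 'closed') {
    return Promise.resolve(undefined);
  }
  if (stream._state === 'errored') {
    return Promise.reject(stream._storedError);
  }
  // New behaviour: if the stream is closing and the queue holds no writes, the
  // underlying sink close is about to happen anyway, so skip the underlying
  // abort and count this as success.
  if (stream._state === 'closing' && !queueContainsWrites(stream._queue)) {
    return Promise.resolve(undefined);
  }
  return regularAbortPath(stream, reason);
}

// Assumption: a requested close appears as a "close" sentinel record in the
// queue; anything else is a pending write.
function queueContainsWrites(queue) {
  return queue.some(entry => entry.chunk !== 'close');
}

// Placeholder for the pre-existing abort path (error the stream, call
// underlyingSink.abort(reason), and so on).
function regularAbortPath(stream, reason) {
  return Promise.resolve(undefined);
}
```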

Fixes #632.


Careful review appreciated. A couple of points worth questioning:

  • This mechanism of peeking at the queue seems a bit suspect. I wonder if [[inClose]] is redundant with this, and we could consolidate somehow?
  • I'm not sure immediately returning a promise fulfilled with undefined is correct. What if closing fails? It would be better to return the appropriate underlying sink close promise, but that doesn't exist yet... hmm. Maybe we should just call WritableStreamDefaultControllerClose?
  • This behaves differently if the abort() happened during a write, as shown by the test. That is: write(); close(); abort(); will call the underlying abort, whereas just close(); abort(); will cause the stream to become closed. I think this seems correct, but I'm not sure. (See the sketch after this list.)
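To make that last point concrete, here is a hedged comparison of the two call sequences using a logging sink; the timing of the underlying calls is simplified and the expected output reflects the behaviour described above.

```js
// Illustrative comparison of the two sequences discussed above.
const makeSink = () => ({
  write(chunk)  { console.log('sink.write', chunk); },
  close()       { console.log('sink.close'); },
  abort(reason) { console.log('sink.abort', reason); }
});

// close(); abort(): no write is queued, so with this change the underlying
// abort is skipped and only sink.close() runs.
const writer1 = new WritableStream(makeSink()).getWriter();
writer1.close();
writer1.abort();

// write(); close(); abort(): the write is still in flight when abort() is
// called, so the underlying sink.abort() is still invoked.
const writer2 = new WritableStream(makeSink()).getWriter();
writer2.write('a');
writer2.close();
writer2.abort();
```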

domenic (Member, Author) commented Dec 21, 2016

I think there might be an alternate way of implementing this, and of implementing the non-interruptibility of close/write, which is to leverage the [[queue]] more. In that formulation, abort() would clear the queue and then put an "abort" item in the queue, instead of its current behavior of immediately transitioning to the error state, which causes a number of strange things to happen.
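A minimal sketch of what that queue-based formulation could look like (all names here are illustrative, not the reference implementation):

```js
// Sketch of the queue-based alternative: instead of erroring the stream right
// away, abort() drops any queued writes and enqueues an "abort" record, which
// the normal queue-processing loop handles once the in-flight sink.write() or
// sink.close() settles.
function WritableStreamAbortViaQueue(stream, reason) {
  const promise = new Promise((resolve, reject) => {
    stream._pendingAbortRequest = { resolve, reject, reason };
  });
  stream._queue = [{ chunk: 'abort', size: 0 }]; // replaces any queued writes
  return promise;
}
```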

ricea (Collaborator) commented Jan 5, 2017

This looks okay to my brief review, but I'd like to think a bit more about whether the behavioural changes make sense.

write(); close(); abort() behaving differently from close(); abort() is working as intended. My view of the constraints is:

  1. Underlying-sink write() and close() should be treated as atomic
  2. abort() should happen ASAP but without violating 1.
  3. Underlying-sink abort() should never be called after underlying-sink close(). ← New with this change

I am optimistic about the potential of leveraging the [[queue]] to get the right behaviour. That might be the simpler implementation I was grasping for but couldn't quite reach.

ricea (Collaborator) left a comment

I've thought about the behaviour changes now and I like them, except for the last one I noted.

Sorry for the delay.

ricea commented on the diff:

        resolveClose();
    -   return abortPromise.then(() => assert_false(abort_called, 'abort should not be called after close completes'));
    +   return abortPromise.then(() => {
    +     assert_array_equals(ws.events, ['close'], 'abort should not be called while close is pending');

close is no longer pending here. Perhaps go back to the 'after close completes' wording?

ricea commented on the diff:

    +   });
    -   }, 'writer close() promise should resolve before abort() promise');
    +   }, 'writer abort() promise should fulfill before close() promise (since it bails early)');

I'm not keen on this behaviour. Usually we can rely on abort() to tell us whether the stream was cleaned up or not, but in this case we can't.

domenic (Member, Author) commented Jan 11, 2017

I don't care much about the ordering, so happy to change that. I'm still pretty unhappy with how this changes the spec, though, and would prefer to mess with [[queue]]. Maybe it is better to just get the behavior/test changes landed and fix the internals to be more elegant later. I'll see what I can do.

ricea (Collaborator) commented Jan 12, 2017

Yeah, doing the behaviour changes here and then messing with [[queue]] in a separate CL seems like a good way to unblock this.

domenic (Member, Author) commented Jan 19, 2017

I'm going to put this on hold until @tyoshino's PR #655 is merged since that is a bigger refactoring and might provide a better basis for this work.

domenic added the "do not merge yet" label (pull request must not be merged per rationale in comment) and removed the "in progress" label on Jan 19, 2017
tyoshino (Member) commented

Sorry for the belated reply. #655 has removed the WritableStreamError() invocation from WritableStreamAbort(). Now, when there is an ongoing sink.write() or sink.close(), WritableStreamAbort() only:

  • rejects writer.ready
  • schedules the abort by setting [[pendingAbortRequest]]

So, all of the constraints in ricea's list in #634 (comment) are satisfied.
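A rough sketch of the in-flight case described above (all internal names are illustrative, not the actual reference implementation):

```js
// Sketch of WritableStreamAbort() while a sink.write() or sink.close() is in
// flight: reject writer.ready and record a pending abort request.
function WritableStreamAbort(stream, reason) {
  if (!operationInFlight(stream)) {
    return abortImmediately(stream, reason); // not the case discussed here
  }
  const promise = new Promise((resolve, reject) => {
    // [[pendingAbortRequest]]: the abort runs once the in-flight op settles.
    stream._pendingAbortRequest = { resolve, reject, reason };
  });
  if (stream._writer !== undefined) {
    // Reject writer.ready so producers stop writing right away.
    rejectWriterReadyPromise(stream._writer, reason);
  }
  return promise;
}

// Illustrative stubs so the sketch stands alone.
const operationInFlight = s => s._inFlightOperation !== undefined;
const abortImmediately = (s, reason) => Promise.resolve(undefined);
const rejectWriterReadyPromise = (writer, reason) => writer._readyReject(reason);
```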

tyoshino (Member) commented Jan 24, 2017

  • I'm not sure immediately returning a promise fulfilled with undefined is correct. What if closing fails? It would be better to return the appropriate underlying sink close promise, but that doesn't exist yet... hmm. Maybe we should just call WritableStreamDefaultControllerClose?

I think fulfillment should be delayed if we want to keep the behavior that WritableStreamAbort() returns a promise resolved with undefined. Even before #655, we specified that a sink.close() rejection results in erroring the stream, so it's unknown whether we'll end up with "errored" or "closed". It's not great to return a fulfilled promise immediately without knowing the result; that would be inconsistent with the behavior when the result is known.

Or, we could change WritableStreamAbort() to return a promise rejected with a TypeError indicating that the stream has already been requested to close() (it is about to process, is already processing, or has finished processing the close()), and therefore it doesn't make sense to run WritableStreamAbort() now.
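One reading of these two options, sketched with an assumed handle to the pending close (not an existing spec field):

```js
// Option A: delay the abort() promise until the pending close settles, then
// report the actual outcome. `pendingClosePromise` is an assumed handle to the
// in-flight sink.close().
function abortDuringCloseOptionA(pendingClosePromise) {
  return pendingClosePromise.then(
    () => undefined,          // close succeeded: stream ends up "closed"
    err => { throw err; }     // close failed: stream ends up "errored"
  );
}

// Option B: reject immediately, since a close() has already been requested.
function abortDuringCloseOptionB() {
  return Promise.reject(new TypeError(
    'abort() called while the stream is already being closed'));
}
```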

ricea (Collaborator) commented Jan 24, 2017

My preference is for a fulfilled abort() to mean "the stream has been cleaned up and no more data will be written", which implies that abort() has to wait for sink.close() to complete.

If instead a fulfilled abort() means "sink.abort() was called and didn't reject", then this has to apply in all cases, i.e. close().then(() => abort()) has to reject.
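To illustrate the difference in user code (hypothetical outcomes, depending on which reading is chosen):

```js
// Hypothetical user code contrasting the two readings of a fulfilled abort().
const ws = new WritableStream({
  close() { /* flush and release resources */ },
  abort() { /* tear down */ }
});
const writer = ws.getWriter();

writer.close();
writer.abort().then(
  () => {
    // Reading 1: fulfillment means "cleaned up; no more data will be written",
    // even though sink.close() (not sink.abort()) is what actually ran.
  },
  () => {
    // Reading 2: fulfillment means "sink.abort() was called and didn't reject";
    // since sink.abort() is skipped here, abort() would have to reject instead.
  }
);
```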

MXEBot pushed a commit to mirror/chromium that referenced this pull request Jan 27, 2017
Implement the pipeTo() and pipeThrough() methods on the ReadableStream
class. They are only enabled when V8 experimental extras are, e.g. when the
--enable-experimental-web-platform-features flag is used. The methods are not
visible when running without flags.

To achieve this, the methods are defined in ReadableStream.js but not
added to the global object. ReadableStreamExperimentalPipeTo.js is
executed at renderer startup time when the experimental flag is set
and simply adds the two methods to the global ReadableStream object.

This initial implementation is intentionally unoptimised. While it does
use internal APIs it won't perform any better than a polyfill using the
public APIs would. Additional optimisation work is expected after
shipping.

Also remove the failing expectations for many external/wpt/streams tests
that use piping.

The serviceworker tests still have failing expectations due to using HTTP. When
web-platform-tests/wpt@f36afea
is rolled from upstream the failures will go away.

The upstream version of the streams/piping/multiple-propagation.js test
'Piping from an errored readable stream to a closed writable stream'
doesn't pass in Chromium. This is a known incompatibility with the
reference implementation which will be resolved by
whatwg/streams#634. This CL adds a modified
version of the test in http/tests/streams/piping which verifies the
behaviour of this implementation. I have filed http://crbug.com/684543
to track it.

BUG=668951

Review-Url: https://codereview.chromium.org/2561443004
Cr-Commit-Position: refs/heads/master@{#446358}
ricea added a commit to ricea/streams that referenced this pull request Feb 15, 2017
In whatwg#634 (comment),
@domenic proposed an alternative way of postponing abort() until after
pending sink writes or closes are complete. The abort signal would be
added to the queue in place of its current contents, and naturally
processed after the current operation completed.

This change attempts to implement that.

Unfortunately, it is not working.
domenic (Member, Author) commented Mar 15, 2017

Closing as this was fixed by a25f410. We're tracking porting the tests from this PR in #694.

domenic closed this Mar 15, 2017
domenic deleted the close-then-abort branch on October 31, 2017
Labels: do not merge yet (pull request must not be merged per rationale in comment), writable streams

Successfully merging this pull request may close these issues.

Do not allow abort() to interrupt a closing writable stream?