Creating service worker responses from multiple readable streams, efficiently #59
Just to be clear, are you saying you don't want to just use a pipe for @tyoshino's example? Something like:
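For concreteness, a minimal sketch of that pipe approach, assuming an identity TransformStream as the pipe and a Response that accepts a ReadableStream body (both still proposals at the time of this thread), with a made-up URL:

```js
self.onfetch = event => {
  const pipe = new TransformStream(); // identity transform acting as the "pipe"
  event.respondWith(new Response(pipe.readable));
  event.waitUntil(
    // assumed URL; the fetched body flows into the response through the pipe
    fetch('/data').then(res => res.body.pipeTo(pipe.writable))
  );
};
```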
I guess because it doesn't flow the "final sink" events back the way you want? Personally, I think we need a progress event system separate from a writable stream anyway. I would prefer if we just did that.
My initial thoughts:
I think writable stream is the way to go. You could imagine a design that solves this problem in a one-off way, but the writable stream just seems like the more primitive thing, the one that more closely represents what's going on, especially for the upload case.

It seems to me that what we're trying to expose is some kind of writable destination corresponding to the current service worker fetching cycle. So, maybe there is something like a writable exposed for that cycle. Maybe a different design is to say that the "time-disconnected" Response object is just a convenient package of data you can use to feed to the "real" respondWith, with the actual body supplied at that point. This seems like it might also extend to fetch uploads (fetch vs. "real fetch"), but I don't want to discuss that too much until I see some reactions to the rest of my point of view.
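Neither of these shapes exists as an API; the following is purely a hypothetical sketch of the two designs mentioned just above, with invented names (`event.responseBody`, a two-argument `respondWith`) used only for illustration:

```js
// Hypothetical only: `event.responseBody` and the two-argument respondWith
// are invented names to illustrate the two shapes, not real APIs.

// Shape 1: a writable destination tied to the current fetching cycle.
self.onfetch = event => {
  event.respondWith(new Response(null, { headers: { 'Content-Type': 'text/plain' } }));
  fetch('/data').then(res => res.body.pipeTo(event.responseBody)); // assumed writable
};

// Shape 2: the "time-disconnected" Response is only a package of status/headers;
// the body writer is revealed when it is handed to the "real" respondWith.
self.onfetch = event => {
  const packaged = new Response(null, { status: 200 });
  event.respondWith(packaged, bodyWriter =>
    fetch('/data').then(res => res.body.pipeTo(bodyWriter))
  );
};
```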
Oh, this is pretty nice. It seems pretty optimizable too. Maybe I wrote all this up for nothing :). @tyoshino, @yutakahirano, what do you think? I guess it means that these "pipe" instances (i.e. identity transform streams) would have to be implemented specially, so that the engine knows to skip them. That seems OK though.
This isn't an edge case either. I may push a header from the cache, then see that the body request failed, and push a different stream.
This could be a null transform stream, meaning you could do something like the sketch below.
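For illustration, a sketch of that, also folding in the header-from-cache scenario above; today's TransformStream stands in for the null transform, and the URLs are made up:

```js
self.onfetch = event => {
  const pipe = new TransformStream(); // "null" (identity) transform
  event.respondWith(new Response(pipe.readable, { headers: { 'Content-Type': 'text/html' } }));
  event.waitUntil(
    caches.match('/header.html')
      .then(cached => cached.body.pipeTo(pipe.writable, { preventClose: true }))
      .then(() => fetch('/body.html'))
      .then(res => res.body.pipeTo(pipe.writable))
      .catch(() =>
        // the body request failed after the header was pushed: push a different stream
        caches.match('/fallback.html').then(fb => fb.body.pipeTo(pipe.writable))
      )
  );
};
```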
Could the revealer function be called when there is a direct connection?
Here's the basic problem:

```js
self.onfetch = event => {
  const responses = [
    new Response(bodyWriter1 => ...),
    new Response(bodyWriter2 => ...),
    new Response(bodyWriter3 => ...)
  ];
  event.respondWith(responses[Math.floor(Math.random() * 3)]);
};
```

Which of these bodyWriters represents the actual sink for data sent to the page?
In #30 we discussed very similar problems. One proposal was indeed to defer calling the body writing function until respondWith time or upload time. But that was deemed undesirable for a variety of reasons... I should probably go re-read the thread to re-remember...
Cheers! I should go and read #30 and stop derailing.
I believe it was just insanely complex and getting hard for anyone to understand. Personally, I don't see the advantage of the revealer function over the pipe.
I think you could only do this for zero-buffer pipes. In theory content code should be able to create a pipe with a specific buffer size.
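For example, with the queuing-strategy APIs as they later stabilized in the Streams spec, content code can give a pipe its own buffer, which is the case the optimization would have to account for (a sketch, not tied to any particular engine behavior):

```js
// A pipe with no internal buffer on either side.
const zeroBufferPipe = new TransformStream(
  {},
  new CountQueuingStrategy({ highWaterMark: 0 }), // writable side
  new CountQueuingStrategy({ highWaterMark: 0 })  // readable side
);

// A pipe allowed to buffer up to 64 KiB before applying backpressure.
const bufferedPipe = new TransformStream(
  {},
  new ByteLengthQueuingStrategy({ highWaterMark: 64 * 1024 })
);
```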
Looks good, but IIRC pipe optimizability was less clear than we expected when we discussed #30. Is it clear now?
I think it is not 100% clear, but it should be doable. All the pieces are in place: pipeTo using non-public APIs, plus locking, should allow all three streams in the readable -> transform writable side -> transform readable side chain to have no way to interfere with or observe the inner workings of the pipe process. The only real open question is how we signal the identity transform case; I think it would work to allow that to be signalled explicitly.
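A small sketch of the locking this relies on, using today's stream APIs (how the identity transform would be signalled is the open part and is not shown):

```js
const pipe = new TransformStream(); // identity transform
fetch('/data').then(res => {
  // readable -> transform writable side -> transform readable side
  const out = res.body.pipeThrough(pipe);
  console.log(res.body.locked);      // true: the source readable is locked to the pipe
  console.log(pipe.writable.locked); // true: content code can't write into the middle
  return new Response(out);
});
```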
First, to be clear, I just raised a concern about optimization in the private thread before this issue. I remember #30 and agreed that we should start with ReadableStream-based Request construction, and I'm fine with that also for Response construction. I was going to investigate whether the approach Ben described in #59 (comment) works in whatwg/streams#359, but I haven't started yet.
Sorry for providing a stale example (one still using the revealing constructor).
Here's how it'd look using a transform stream / pipe: https://gist.github.com/jakearchibald/18562306e6cbbf975009
Much cleaner!
In a private thread, @tyoshino, @yutakahirano, @jakearchibald and I were working through how we would enable the use case of composite streams. An example would be a service worker that streams the header from cache, the body from a fetch, and the footer from cache. We wanted to bring the discussion into the open before it got too far along.
The current thinking on this design is that you would do `new Response(body)`, where `body` is an author-created ReadableStream that you manually compose out of the other streams; @jakearchibald has an example of how to do this (a rough sketch of the same idea follows below).

However, this kind of manual shuffling of bytes through the service worker thread is not optimal. Ideally we would be able to do this stream composition off the service worker thread, without involving JS. One design for that would be something like what Takeshi proposed, which I added as a comment on Jake's gist.
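A rough sketch of that manual composition, with assumed URLs and a simplified concatenation helper (no backpressure handling), just to show where the per-chunk work on the service worker thread comes from:

```js
// Concatenate the bodies of several Responses (given as promises) into one stream.
function concatBodies(...sources) {
  return new ReadableStream({
    async start(controller) {
      for (const source of sources) {
        const res = await source;
        const reader = res.body.getReader();
        for (;;) {
          const { value, done } = await reader.read();
          if (done) break;
          controller.enqueue(value); // every chunk is shuffled through SW JS
        }
      }
      controller.close();
    }
  });
}

self.onfetch = event => {
  const body = concatBodies(
    caches.match('/header.html'),
    fetch('/article-body'),
    caches.match('/footer.html')
  );
  event.respondWith(new Response(body, { headers: { 'Content-Type': 'text/html' } }));
};
```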
A very similar, but not identical, topic has previously been discussed: #30. There we were talking not about service worker-side response creation, but about "client side" request creation. It is a long thread, but @wanderview convinced us not to use the writable stream design. His main argument is that Requests are "time disconnected", i.e., you can create them at any time, and so it is not feasible to supply a bodyWriter writable stream that represents the upload, since at Request creation time the upload hasn't started. @yutakahirano sums this up by saying "Just exposing a writable stream via a revealer function doesn't automatically provide a direct connection between a user and the final sink."
And this criticism is still true, even for the service worker-side response. I can construct Response objects all day long. It is only when they get connected to the main thread via `event.respondWith` that there is some connection to the final sink. So, @tyoshino's example (i.e. my gist comment) is not really conceptually correct.

For this thread I would like us to investigate a few potential questions: