Allow a ReadableByteStream in fetch({body: <body>}) #577

Closed · ConradIrwin opened this issue Aug 11, 2017 · 17 comments
@ConradIrwin

I'd like progress events on uploads. I think this should work according to the current spec, but it doesn't seem to work in practice, as I'm not sure how to tell the fetch "how much stream" it should read.

  function upload (url, blob, onUploadProgress) {
    let progress = 0
    let readableByteStream = new ReadableStream({
      type: 'bytes',
      pull: async function (controller) {
        onUploadProgress({todo: blob.size, done: progress})
        // desiredSize can be 0 for byte streams, so fall back to a fixed chunk size
        let chunkSize = Math.max(controller.desiredSize || 0, 65536)
        let nextProgress = Math.min(progress + chunkSize, blob.size)
        // Byte streams expect ArrayBufferView chunks, not Blobs
        let buffer = await blob.slice(progress, nextProgress).arrayBuffer()
        controller.enqueue(new Uint8Array(buffer))
        progress = nextProgress
        if (progress >= blob.size) controller.close()
      }
    })

    return fetch(url, {
      mode: 'cors',
      method: 'POST',
      headers: {'Content-Type': blob.type},
      body: readableByteStream
    })
  }
@annevk (Member) commented Aug 12, 2017

The reason it doesn't work in practice is that it isn't implemented yet.

@ConradIrwin (Author)

Nice! But is this how it will work?

Looking forward to it — had to roll out an XmlHttpRequest to handle uploads last week :(.

@wanderview (Member)

You should be able to use fixed-size bodies (string, blob, etc.) for uploads today. So if you did it with XHR, I think you should be able to do the same with fetch().

Note, there is a limitation on which servers will support streamed uploads. It requires chunked encoding, which pretty much limits it to h2. (You can't negotiate chunked encoding in h1.1 until after the request has been sent, which is too late.)

@annevk (Member) commented Aug 14, 2017

@wanderview that shouldn't be too late really (and H/2 doesn't do chunked encoding). You can just send the request and, if the server affirms, start sending the request body. Though it also seems fine that if a ReadableStream is passed you just assume support, since it won't work otherwise anyway.

@wanderview (Member)

I guess I will refer you to @mcmanus. I believe this is what he told me.

@ConradIrwin (Author)

How does the XmlHttpRequest upload progress event work? I naively assumed it was just periodically pinging me with the number of bytes that had been ACKed by the upstream server. (I'm uploading to Google Cloud Storage, and it seemed to work automatically).

I was hoping the same would happen for fetch() where I could see at a low level what bytes were written to the network, and when.

@wanderview (Member)

XHR progress is based on the browser writing bytes to the OS socket, not on server acks.

The Fetch API needs a progress interface added to get equivalent behavior. The stream reading won't be quite the same since buffering may take place.
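
For context, a rough sketch of what a stream-based progress signal could look like once streaming request bodies are implemented (they aren't yet, as noted above). uploadWithProgress is an illustrative name, blob.stream() is assumed to be available, and the numbers reflect how far the browser has pulled from the stream, which buffering can push ahead of what has actually hit the network:

// Illustrative only: assumes the browser accepts a ReadableStream as a
// request body, which is exactly what this issue is asking for.
function uploadWithProgress (url, blob, onProgress) {
  let sent = 0
  const reader = blob.stream().getReader()
  const body = new ReadableStream({
    async pull (controller) {
      const {done, value} = await reader.read()
      if (done) {
        controller.close()
        return
      }
      sent += value.byteLength
      // "loaded" here means bytes the browser has read from the stream,
      // not bytes acknowledged (or even sent) over the network.
      onProgress({loaded: sent, total: blob.size})
      controller.enqueue(value)
    }
  })
  return fetch(url, {method: 'POST', headers: {'Content-Type': blob.type}, body})
}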

@ConradIrwin (Author)

It depends on how much buffering, I guess. It looks like the XHR equivalent is pretty low fidelity (and if it's just bytes written, not bytes acked, it's also subject to queueing and over-promising).

For what it's worth, we have an internal fetch() wrapper that takes an onProgress callback, and falls back to XHR if it's specified. It seemed nicer than trying to figure out an evented way of doing it given that fetch() doesn't have an instance that you can attach events to.

fetch(url, {
  body: blob,
  method: 'PUT',
  onProgress: ({status, loaded, total}) => {
    // status is one of "loading" (before the returned promise resolves or rejects),
    // "success" (after the returned promise resolves), or "error" (after it rejects)
    // loaded is the bytes transferred as reported by XmlHttpRequest's ProgressEvent
    // total is the total bytes as reported by XmlHttpRequest
  }
})
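
For reference, a minimal sketch of what that XHR fallback path might look like, using xhr.upload.onprogress (which, per the above, fires as the browser writes bytes, not as the server acks them). fetchWithProgress and the exact option shape are illustrative, not the actual internal wrapper:

function fetchWithProgress (url, {method = 'GET', headers = {}, body, onProgress} = {}) {
  // Without a callback there's nothing to report, so defer to fetch()
  if (!onProgress) return fetch(url, {method, headers, body})

  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest()
    xhr.open(method, url)
    for (const [name, value] of Object.entries(headers)) {
      xhr.setRequestHeader(name, value)
    }
    // Fires as the browser writes bytes to the socket
    xhr.upload.onprogress = (event) => {
      onProgress({status: 'loading', loaded: event.loaded, total: event.total})
    }
    xhr.onload = () => {
      resolve(new Response(xhr.responseText || null, {status: xhr.status}))
      onProgress({status: 'success', loaded: (body && body.size) || 0, total: (body && body.size) || 0})
    }
    xhr.onerror = () => {
      reject(new TypeError('Network request failed'))
      onProgress({status: 'error', loaded: 0, total: 0})
    }
    xhr.send(body)
  })
}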

I also considered having fetch() return a thenable event emitter instead of a promise; but because we're using a mix of promise libraries already, that seemed like a recipe for confusion.

Would love to see an official solution to this problem, either by specifying how much buffering is allowed in the ReadableStream case, or just adding a callback.

@annevk (Member) commented Aug 15, 2017

@wanderview I don't see anything about that in https://tools.ietf.org/html/rfc7230#section-4 and haven't heard this concern until now. Why did nobody raise an issue? @mcmanus?

@mcmanus commented Aug 15, 2017

Due to the lack of threading here I can't figure out the question I'm being asked. Can you rephrase?

@wanderview (Member)

> Due to the lack of threading here I can't figure out the question I'm being asked. Can you rephrase?

Patrick, the question is can we support chunked encoding uploads without a known fixed size? I thought you had told me this was only possible in h2.

> @wanderview I don't see anything about that in https://tools.ietf.org/html/rfc7230#section-4 and haven't heard this concern until now. Why did nobody raise an issue?

I am pretty sure we've had this conversation before and I did bring it up earlier both with you and @domenic. I can't find the issue now, though. It may be buried in some tangential streams spec issue.

@wanderview (Member)

Found it. It was in the fetch-with-streams repo. See yutakahirano/fetch-with-streams#30 (comment), for example.

@annevk (Member) commented Aug 15, 2017

@wanderview that became yutakahirano/fetch-with-streams#57 and the conclusion there was that this wasn't a problem.

@mcmanus commented Aug 15, 2017

> Patrick, the question is can we support chunked encoding uploads without a known fixed size? I thought you had told me this was only possible in h2.

h2 only supports streamed bodies (true of both directions) - it doesn't call them chunked, but that's effectively what they are.

h1 only reliably supports chunked transfers downstream on the Internet, though 1.1 specifies it in both directions, so you could try it as long as you're willing to tolerate the errors, which will come in a variety of forms (early, late, hangs, etc.). This is really an e2e problem, so you'll still get some errors even if you know a priori that the server deals with chunked uploads, or that you are speaking h2 (because those properties might not be true e2e).

@wanderview (Member)

Yea, we basically punted the problem to the web developer. It may break with particular h1.x servers, but we're not going to protect you. The underlying problem hasn't really gone away. (I'm still not happy with that resolution.)

I think my advice above that you might want to plan to only use this with h2 is still reasonable.
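
As a loose sketch of that advice (not something discussed in this thread): you could probe the endpoint once and read the negotiated protocol from the Resource Timing API before attempting a streamed upload. supportsH2 is an illustrative name, and nextHopProtocol may come back empty for cross-origin responses that don't send Timing-Allow-Origin, so treat this as best-effort:

// Best-effort protocol check via Resource Timing; treats a missing or
// withheld timing entry as "no h2".
async function supportsH2 (probeUrl) {
  const absolute = new URL(probeUrl, location.href).href
  await fetch(absolute, {method: 'HEAD'})
  const entries = performance.getEntriesByName(absolute)
  const last = entries[entries.length - 1]
  return !!last && last.nextHopProtocol === 'h2'
}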

@jakearchibald (Collaborator)

#95 (comment) – discussing how to enable developers to set Content-Length of streamed bodies so chunked encoding isn't needed.

@annevk (Member) commented Sep 4, 2017

Closing this due to lack of a problem with Fetch.

annevk closed this as completed on Sep 4, 2017