Provide opt-in for Expect = "100-continue" #41
@tyoshino @mnot @wanderview @nikhilm do we still want this given the conclusion in #42?
Also, @mnot, it does not seem like this header guarantees there will be no redirects.
No, it doesn't. Expect/Continue is a mechanism for avoiding unnecessary transfer of the request body; it won't help you avoid buffering it. I think what you want to do is assure that developers can use expect/continue. It might be useful to have fetch do it natively too, but I think it's a very implementation-specific decision. Question: How does Fetch handle these sorts of genuinely optional, implementation-specific behaviours? In other places I see hand-waving about "do whatever HTTP stuff you want to do", but here it's pretty intrusive into the request handling model. I suppose you might do that by putting a note in to the effect "You can make an implementation choice to support Expect/Continue HERE, but we leave the details as an exercise for the reader."
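For readers unfamiliar with the wire-level mechanism being discussed: the handshake from RFC 7231 can be sketched with raw sockets. This is a minimal illustration, not anything the Fetch spec defines; the toy one-shot server, addresses, and message bodies are all assumptions made for the example.

```python
# Sketch of the Expect: 100-continue handshake (RFC 7231 section 5.1.1)
# against a toy in-process server. Illustrative only.
import socket
import threading

def toy_server(listener):
    """Accept one connection; answer Expect with 100, then a final 200."""
    conn, _ = listener.accept()
    with conn:
        head = b""
        while b"\r\n\r\n" not in head:   # read the request head
            head += conn.recv(1024)
        if b"expect: 100-continue" in head.lower():
            conn.sendall(b"HTTP/1.1 100 Continue\r\n\r\n")
        conn.recv(1024)                  # the body arrives only now
        conn.sendall(
            b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n"
            b"Connection: close\r\n\r\nok"
        )

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
threading.Thread(target=toy_server, args=(listener,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
# Send only the head; hold the body back until the interim response.
client.sendall(
    b"POST /upload HTTP/1.1\r\nHost: localhost\r\n"
    b"Content-Length: 5\r\nExpect: 100-continue\r\n\r\n"
)
interim = client.recv(1024)              # expect "HTTP/1.1 100 Continue"
client.sendall(b"hello")                 # only now transmit the body
final = client.recv(1024)
print(final.split(b"\r\n")[0].decode())
client.close()
```

The point of contention in the thread is precisely the "hold the body back" step: whether Fetch should perform it natively, or leave it to the underlying HTTP implementation.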
Ideally, if it's not specified, it's not allowed. But there's still quite a bit of handwaving the lower down the stack you go. Servers seem to rely on clients behaving in a certain way. I'm sure they're not going to be happy if we start sending them these headers without opt-in.
Servers already get
Even without the teeing optimization, supporting Expect: 100-continue could be useful for large requests, in particular file uploads. The server may be able to respond immediately with an error (like a 401 or 403), meaning that clients for whom network usage is important (mobile in particular) can avoid wasting bandwidth on a request whose body will be ignored.
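The early-rejection path described above can be sketched the same way: the server answers the Expect with a final 403 instead of 100 Continue, and the client skips transmitting the large body entirely. The toy server and the "missing authorization" scenario are assumptions for illustration.

```python
# Sketch of a server rejecting a request before its body is sent,
# saving the client from uploading a body that would be ignored.
import socket
import threading

def rejecting_server(listener):
    conn, _ = listener.accept()
    with conn:
        head = b""
        while b"\r\n\r\n" not in head:
            head += conn.recv(1024)
        # Reject immediately, e.g. for missing authorization: a final
        # status instead of 100 Continue tells the client not to send
        # the body.
        conn.sendall(
            b"HTTP/1.1 403 Forbidden\r\nContent-Length: 0\r\n"
            b"Connection: close\r\n\r\n"
        )

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
threading.Thread(target=rejecting_server, args=(listener,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(
    b"POST /upload HTTP/1.1\r\nHost: localhost\r\n"
    b"Content-Length: 1000000\r\nExpect: 100-continue\r\n\r\n"
)
reply = client.recv(1024)
status = reply.split(b"\r\n")[0]
if not status.startswith(b"HTTP/1.1 100"):
    print("upload skipped:", status.decode())  # the 1 MB body is never sent
client.close()
```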
@mnot
Could Fetch handle the 100-continue transparently? Or would it be better not to, and let the client handle it?
@scshunt we could let the client handle them, including deciding when to transmit the request body (if the 100 response never shows). I thought we maybe wanted to expose them. Letting the client handle them certainly makes things easier.
I think that's what you'd want to do. Note that Expect/Continue has lots of problems; defining its behaviour in fetch might help some of them, if it's done carefully.
First step here is probably fixing #366. |
Given the discussion in yutakahirano/fetch-with-streams#66 I'm closing this. There's no real benefit. Note that #366 clarified that most 1xx responses (apart from 101) are to be ignored by the client.
As @scshunt said, there's still benefit for avoiding large transfers. Not that I'm strongly arguing for it, just noting that the issue discussed over there is orthogonal to the one discussed here.
Fair, though I'm not sure the added complexity is worth it just for that. The client is much more informed about whether it wants to initiate a large transfer or not. Also, with H2 and with early responses you can somewhat mitigate the "damage". And with streaming uploads, which are coming soon, you should have all the control you need.
I think one benefit not mentioned here is that a POST with Expect: 100-continue that runs into an HTTP/1.1 keepalive closure race can be detected and safely retried.
Inspired by Server-sent Events, I'm trying to implement Client-sent Events on top of fetch upload streaming, but the Fetch API supports neither. So I'm quitting for now; I'll either be patient or forget it.
We could avoid teeing the request body with this protocol feature: http://httpwg.github.io/specs/rfc7231.html#header.expect
We would need to decide how much of the protocol is implemented in Fetch and how much we leave up to developers.