[THREESCALE-9542] Part 2: Add support to proxy request with Transfer-Encoding: chunked #1403
Conversation
We need to add unit tests and integration tests for all the scenarios that reproduce the reported issue:
- http proxy env vars (over TLS)
- http_proxy policy (over TLS)
- camel_proxy policy (over TLS)
}
--- backend env
server_name test-backend.lvh.me;
listen $TEST_NGINX_RANDOM_PORT ssl;
I wonder why the 3scale backend is configured with a TLS connection.
When a request with the HTTP "Transfer-Encoding: chunked" header is sent, APIcast buffers the entire request because, by default, it does not support sending chunked requests. However, when sending via a proxy, APIcast does not remove the header from the initial request, which tells the server that the client is sending a chunked request. This then causes a Bad Request error because the upstream will not be able to determine the end of the chunked body. This commit removes the "Transfer-Encoding: chunked" header from the request when sending through a proxy.
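For context on why the stale header breaks upstream parsing: chunked framing delimits the body with per-chunk hexadecimal sizes and a terminating zero-size chunk. If the header survives but the forwarded body no longer carries that framing, upstream keeps waiting for a terminator that never arrives. A minimal decoder sketch (illustrative Python, not APIcast code):

```python
def decode_chunked(data: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked transfer-encoded body (illustrative only)."""
    body = b""
    pos = 0
    while True:
        crlf = data.index(b"\r\n", pos)
        size = int(data[pos:crlf], 16)   # chunk-size line is hexadecimal
        pos = crlf + 2
        if size == 0:                    # zero-size chunk terminates the body
            break
        body += data[pos:pos + size]
        pos += size + 2                  # skip chunk data plus its trailing CRLF
    return body

print(decode_chunked(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"))  # b'Wikipedia'
```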
Looking good.
However, this is complex and also hard to maintain. I want to try some other approach relying on lua-resty-http 0.17.1 or some other library. If we cannot find a simpler one (from the APIcast code base perspective), we can always use this code.
When using the python client, the response body is not shown. I wonder if APIcast is not handling the response correctly.
I am going to try https://github.com/ledgetech/lua-resty-http#set_proxy_options, it looks promising.
gateway/src/apicast/http_proxy.lua
if http_methods_with_body[req_method] then
if opts.request_unbuffered and ngx_http_version() == 1.1 then
local _, err = handle_expect()
I wonder if `Expect` needs to be handled when buffering is enabled.
Actually, I do not think we should be doing this. The lua-resty-http lib is doing that for us. WDYT?
lua-resty-http is a client library: it handles the Expect returned from the server, while we are acting as a server here and need to process the Expect header from the client.
When I sent a large payload using cURL, the request hung; I later found out it was due to the Expect header.
I will run some more tests to see whether we really need it here.
Ok I think I understand now.
I think that when buffered mode is on, APIcast should protect upstream and should handle the Expect: 100-Continue. That is, it is APIcast who returns the HTTP response 100 Continue and then consumes the body before opening the connection to upstream. I think this is how it works right now in master. The request Expect: 100-Continue and the response 100 Continue happen twice: first between downstream and APIcast, then between APIcast and upstream (done by the lua-resty-http lib because the Expect header is still there). We might consider removing the Expect header in "buffered" mode. Unless we want to keep the Expect protocol with upstream to avoid sending the body if upstream does not want it, which also makes sense to me. It is actually a requirement from rfc2616#section-8.2.3 to be like this. Check the "Requirements for HTTP/1.1 proxies" section.
When unbuffered is on, APIcast does not read the body with ngx.req.read_body(), thus it does not send 100 Continue to downstream. I think that is the reason you saw the request hang. Ideally, I think that we should let upstream decide whether it wants to continue or not, and propagate the response to downstream. Downstream would start sending the body only when upstream tells it to. I think it is quite hard to implement that, basically because the lua-resty-http lib consumes the 100 Continue response from upstream and then tries to send the body. I do not see a way to do this other than manually sending the 100 Continue response to downstream and creating a body reader that will be consumed by the lua-resty-http library. But I can see some issues there as well. What if upstream says 302 redirect or 400 bad request instead of 100 Continue? The downstream client would have already written the body to the downstream socket, and that socket would be unusable for follow-up HTTP sessions. I do not know how to proceed regarding this.
I have re-written the message above. In case you have read it previously, please re-read it 🙏
I'm a bit confused here. I haven't read the openresty code, but do you mean ngx.req.read_body() will send 100 Continue downstream? Doesn't that also mean that APIcast returns 100 Continue to the downstream application before establishing the upstream connection?
Regarding the 400, please correct me if I'm wrong, but I think the only case where the upstream server returns this error is if there is data in the request body. In my head the flow will be as follows:
client -> Expect: 100-Continue -> upstream -> 100 Continue -> client
client -> start sending body -> upstream read body -> return 400
I haven't read the openresty code but do you mean ngx.req.read_body() will send 100 Continue downstream?
Yes!
Doesn't that also mean that APIcast returns 100 Continue to the downstream application before establishing the upstream connection?
Exactly (when buffered mode is on)
the only case where the upstream server returns this error is if there is data in the request body
400 Bad Request is just an example. It could be 5XX error as well. In unbuffered mode, the workflow would be as follows (in my head)
client -> Expect: 100-Continue -> apicast
client <- 100 Continue <- apicast
client -> write body to socket -> apicast
# Apicast did not read the body yet, it just created a body reader from the socket
apicast -> create connection via proxy -> TLS upstream
apicast (lua resty http) -> Expect: 100-Continue -> TLS upstream
apicast (lua resty http) <- 100 Continue <- TLS upstream
apicast (lua resty http) -> send body from the body reader -> TLS upstream
So let's say that upstream does not want the client to start the upload:
client -> Expect: 100-Continue -> apicast
client <- 100 Continue <- apicast
client -> write body to socket -> apicast
# Apicast did not read the body yet, it just created a body reader from the socket
apicast -> create connection via proxy -> TLS upstream
apicast (lua resty http) -> Expect: 100-Continue -> TLS upstream
apicast (lua resty http) <- 5XX Error <- TLS upstream
client <- 5XX Error <- apicast
My issue with this is that the client has sent the body and nobody has consumed it. I need to try this scenario to see what we can do.
From this nginx thread, https://mailman.nginx.org/pipermail/nginx/2021-May/060643.html, I think nginx does not handle this well either.
How about we send back an error response, discard the body and close the socket?
How about we send back an error response, discard the body and close the socket?
It's aggressive, but it can be a way out.
gateway/src/apicast/http_proxy.lua
if is_chunked then
-- If the body is smaller than "client_body_buffer_size" the Content-Length header is
-- set based on the size of the buffer. However, when the body is rendered to a file,
If the body is smaller than "client_body_buffer_size" the Content-Length header is set based on the size of the buffer
Who is doing that? In other words, when all of these conditions are met:
- the request is chunked,
- buffering is enabled,
- the request body is small,
who sets the Content-Length header?
lua-resty-http will set the Content-Length based on the body that we passed in. But good catch, I should have put more details in the comment.
Ok, I see. It's because it is a string and resty-http gets the length out of it. It happens here. I would make it explicit, but it is good enough.
Yeah, I agree, because this is something that will come up again in the future when troubleshooting, but it doesn't need to be done in this PR. It can be added at a later date, e.g. if headers["Content-Length"] == nil then headers["Content-Length"] = #body
(this will at least be a useful reference for now)
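In Python terms (a sketch only; the actual gateway code is Lua, and the header-table shape here is an assumption), the suggested fallback amounts to:

```python
def ensure_content_length(headers: dict, body: bytes) -> dict:
    """Set Content-Length from the body only when nothing set it earlier
    (mirrors the suggested Lua fallback headers["Content-Length"] = #body)."""
    if headers.get("Content-Length") is None:
        headers["Content-Length"] = str(len(body))
    return headers

print(ensure_content_length({}, b"hello"))  # {'Content-Length': '5'}
```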
I tried it in #1434 in /http/proxy.lua
I am ok going with these changes for now. First #1403 (comment) needs to be fixed. However, in follow-up PRs, this (spaghetti) code needs some simplification. It is hard to maintain and understand. The
With a regular socket, openresty will process the Expect header on socket read, so we only need to send back "100 Continue" when using a raw socket.
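The raw-socket case can be sketched like this (illustrative Python, not the APIcast implementation; the socket wrapper and header table are assumptions): when we read from the raw socket ourselves, nothing else answers the client's Expect header, so we emit the interim response before reading the body.

```python
def maybe_send_continue(headers: dict, sock) -> bool:
    """Reply '100 Continue' ourselves when acting as the server on a raw socket."""
    if headers.get("expect", "").lower() == "100-continue":
        sock.send(b"HTTP/1.1 100 Continue\r\n\r\n")
        return True
    return False

class FakeSock:
    """Stand-in for a raw TCP socket, recording whatever is sent."""
    def __init__(self):
        self.sent = b""
    def send(self, data):
        self.sent += data

s = FakeSock()
print(maybe_send_continue({"expect": "100-Continue"}, s))  # True
```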
Fixed #1403 (comment). Also added 2 commits on top to handle
This was a hard one, wasn't it!
Great job 🎖️
I think I might need a summary of the expected behaviour with the different combinations here to understand how this works. I am not sure I get this, to be honest, which is probably expected, but then I would expect the README to describe it in more detail so customers and Support can understand it better.
If the tests are indeed correct then only an update to the README is needed.
One note though; why are there no unit tests added/modified?
Final comment: I think we are introducing I/O blocking operations via the file_size() function, and this is going to be executed on every request that meets the conditions. Executing things like os.execute, io.open, etc. would generally be safe in the init and init_by_worker phases because it's a one-time execution, but in this scenario we are doing it for every request. Have we considered this already? How much is it harming performance as a result?
=== TEST 15: https_proxy with request_unbuffered policy, only upstream and proxy_pass will buffer the request
What do we mean "only upstream & proxy_pass will buffer the request"? Are we saying that the initial request to the camel proxy will be unbuffered? Why would that be given that this is over TLS so the tunnel should be established directly between APIcast & upstream. Is there another part of the code you are referring to that would be unbuffered?
It works like this:
Without the request_unbuffered policy (default behavior):
HTTPS request ---> APIcast ---> [reach proxy code] ---> [call ngx.req.get_body_data()] ---> [request is buffered to a file] ---> [construct a new request and a body reader from the buffered file] ---> [use lua-resty-http to perform the handshake and send the new request to the camel server] ---> Camel
With the request_unbuffered policy:
HTTPS request ---> APIcast ---> [reach proxy code] ---> [construct a new request and set up a body reader from the downstream socket (without reading the body first)] ---> [use lua-resty-http to perform the handshake and send the new request to the camel server] ---> Camel
"Upstream" here is the upstream block in the test. It was a bit tricky to set up a test for this, so I relied on the client request body is buffered to a temporary file message in the log.
- proxy_pass will buffer the request body by default
- echo_read_request_body will also buffer the request
- echo_request_body is used to echo the request body so we can check whether the request was sent to the upstream server

But I will update the README file with more details.
gateway/src/apicast/http_proxy.lua
-- set by openresty based on the size of the buffer. However, when the body is rendered
-- to a file, we will need to calculate and manually set the Content-Length header based
-- on the file size
local contentLength, err = file_size(temp_file_path)
Is this safe to do for ALL requests which meet these conditions? I see that the calls in the file_size function are I/O blocking calls, so I am wondering how harmful to performance those could be given they are not executed within a coroutine. If a coroutine cannot be used, then we should consider using the lua-io-nginx module, for example.
Is it enough to wrap that functionality in a coroutine? I don't know how useful that would be since it would yield on the first call anyway. Also, the file_reader calls io.open on every request that has its body buffered to a file, so I guess we pay the price of calling io.open one more time?
But I totally agree with you that it is an I/O blocking function and should be avoided.
Checking the lua-io-nginx module, I can see that it is currently considered experimental. It also seems to run the task on another thread; however, I'm not so sure about this approach because we have to pay for context switching, threads, locking, etc.
It's worth mentioning that:
the cost time of a single I/O operation won't be reduced, it was just
transferred from the main thread (the one executes the event loop) to another exclusive thread.
Indeed, the overhead might be a little higher, because of the extra tasks transferring, lock waiting,
Lua coroutine resumption (and can only be resumed in the next event loop) and so forth. Nevertheless,
after the offloading, the main thread doesn't block due to the I/O operation, and this is the fundamental
advantage compared with the native Lua I/O library.
Not sure how expensive this is:
function fsize (filename)
  local handle, err = io.open(filename)
  if not handle then return nil, err end
  local current = handle:seek()    -- get current position
  local size = handle:seek("end")  -- get file size
  handle:seek("set", current)      -- restore position
  handle:close()
  return size
end
Theoretically any IO operation can block the thread. We could try coroutines or some other means to make it non-blocking. Reading the lua-nginx-module introduction, it says:
Disk operations with relatively small amount of data can be done using the standard Lua io library but huge file reading and writing should be avoided wherever possible as they may block the Nginx process significantly. Delegating all network and disk I/O operations to Nginx's subrequests (via the [ngx.location.capture](https://github.com/openresty/lua-nginx-module#ngxlocationcapture) method and similar) is strongly recommended for maximum performance.
Not sure if we can follow that recommendation, though. @tkan145 we can try to discuss this in a short call.
Anyway, AFAIK, we have never measured the capacity of APIcast to handle traffic with request bodies big enough to be persisted to disk. All the tests performed were "simple" GET requests.
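As a rough illustration of the cost being discussed (Python, not the gateway's Lua): the seek-based approach opens a file descriptor per request, whereas a single stat call gets the size without opening the file at all. Both still touch the filesystem, so either way it is blocking I/O on the event-loop thread.

```python
import os
import tempfile

def fsize_seek(path: str) -> int:
    # Open the file and seek to the end, like the Lua fsize() sketch above.
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        return f.tell()

def fsize_stat(path: str) -> int:
    # Single stat(2) call; no file descriptor is opened.
    return os.stat(path).st_size

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 4096)
    path = tmp.name

print(fsize_seek(path), fsize_stat(path))  # 4096 4096
os.remove(path)
```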
Fixed and also updated the README file. @kevprice83 can you help review the README file and let me know if I need to add anything else?
No need to say a single word more about this. If you do not understand, nobody else will. We definitely need to update The big value IMO of the new
The behavior described by the nginx doc applies when proxy_request_buffering is off.
Whereas when this policy is not in the chain, the behavior (for all the scenarios above) is the one described for proxy_request_buffering is on. I quote here
Furthermore, the behavior is also consistent regardless of the transfer encoding used: whether the request has a known body length with the "Content-Length" header or is chunked, the request buffering (or not) semantics will apply.
`request_unbuffered` policy is enabled or not.
- For a request with "small" body that fits into [`client_body_buffer_size`](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size) and with header "Transfer-Encoding: chunked", NGINX will always read and know the length of the body.
This is nice to note, but it is not a caveat (a limitation); it is expected and correct. All about unbuffered requests is the fact, as you correctly pointed out, that the request body will be sent to the proxied server immediately as it is received. The transfer encoding is a HOP-BY-HOP encoding and nothing prevents changing it from one hop to the next one.
For example, when the client sends 10GB, NGINX will buffer the entire 10GB to disk before sending anything to the upstream server.

When `proxy_request_buffering` is in the chain, request buffering will be disabled and the request body will be sent to the proxied server immediately as it received. This can help minimize time spent sending data to a service and disk I/O for requests with big body. However, there are caveats and corner cases applied, [**Caveats**](#caveats)
proxy_request_buffering is not the name of the policy, is it?
oops
I kindly requested a review from the @3scale/documentation team.
## Technical details
By default, NGINX reads the entire request body into memory (or buffers large requests into disk) before proxying it to the upstream server. However, reading bodies can become expensive, especially when requests with large payloads are sent. |
By default, NGINX reads the entire request body into memory (or buffers large requests into disk) before proxying it to the upstream server. However, reading bodies can become expensive, especially when requests with large payloads are sent.
By default, NGINX reads the entire request body into memory or buffers large requests to disk before forwarding them to the upstream server. Reading bodies can become expensive, especially when sending requests containing large payloads.
For example, when the client sends 10GB, NGINX will buffer the entire 10GB to disk before sending anything to the upstream server.

When `proxy_request_buffering` is in the chain, request buffering will be disabled and the request body will be sent to the proxied server immediately as it received. This can help minimize time spent sending data to a service and disk I/O for requests with big body. However, there are caveats and corner cases applied, [**Caveats**](#caveats)
When `proxy_request_buffering` is in the chain, request buffering will be disabled and the request body will be sent to the proxied server immediately as it received. This can help minimize time spent sending data to a service and disk I/O for requests with big body. However, there are caveats and corner cases applied, [**Caveats**](#caveats)
When the `proxy_request_buffering` is in the chain, request buffering is disabled, sending the request body to the proxied server immediately upon receiving it. This can help minimize time spent sending data to a service and disk I/O for requests with big body. However, there are caveats and corner cases applied, [**Caveats**](#caveats)
The response buffering is enabled by default in NGINX (the [`proxy_buffering: on`]() directive). It does this to shield the backend against slow clients ([slowloris attack](https://en.wikipedia.org/wiki/Slowloris_(computer_security))).

If the `proxy_buffering` is disabled, the upstream server will be forced to keep the connection open until all data has been received by the client. Thereforce, NGINX [advises](https://www.nginx.com/blog/avoiding-top-10-nginx-configuration-mistakes/#proxy_buffering-off) against disabling `proxy_buffering` as it will potentially waste upstream server resources.
If the `proxy_buffering` is disabled, the upstream server will be forced to keep the connection open until all data has been received by the client. Thereforce, NGINX [advises](https://www.nginx.com/blog/avoiding-top-10-nginx-configuration-mistakes/#proxy_buffering-off) against disabling `proxy_buffering` as it will potentially waste upstream server resources.
If the `proxy_buffering` is disabled, the upstream server keeps the connection open until all data is received by the client. NGINX [advises](https://www.nginx.com/blog/avoiding-top-10-nginx-configuration-mistakes/#proxy_buffering-off) against disabling `proxy_buffering` as it will potentially waste upstream server resources.
## Caveats
- Because APIcast allows defining mapping rules based on request content, ie `POST /some_path?a_param={a_value}` |
- Because APIcast allows defining mapping rules based on request content, ie `POST /some_path?a_param={a_value}`
- APIcast allows defining of mapping rules based on request content. For example, `POST /some_path?a_param={a_value}`
I've added a few suggestions.
After running the verification steps again, the first request with a small body (chunked TE) reaches upstream still with the TE chunked, and I think this is expected: for this use case (TE chunked and TLS upstream through a proxy), APIcast generates a socket reader that decodes the chunked encoding and re-encodes it back to be forwarded. So the request reaching upstream has to be TE chunked.
You might want to update the verification steps.
## Why does upstream receive a "Content-Length" header when the original request is sent with "Transfer-Encoding: chunked"
For a request with "small" body that fits into [`client_body_buffer_size`](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size) and with header "Transfer-Encoding: chunked", NGINX will always read and know the length of the body. Then it will send the request to upstream with the "Content-Length" header. |
This is not true for the use case this PR is taking care of; only when nginx handles the proxying task.
Actually, for a future enhancement, we could do the same: read one buffer from the socket with a configurable size S. If the full body has been read, send it upstream with Content-Length. If the full body has not been read, proxy the request with TE: chunked.
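That enhancement could look roughly like this (an illustrative Python sketch of the idea, not APIcast code; the `reader` interface and buffer size are assumptions):

```python
import io

def choose_framing(reader, buffer_size: int):
    """Peek up to buffer_size bytes: if the whole body fits, frame it with
    Content-Length; otherwise keep Transfer-Encoding: chunked (sketch only)."""
    first = reader.read(buffer_size)
    more = reader.read(1)
    if not more:
        return {"Content-Length": str(len(first))}, first
    # Body is larger than one buffer: keep streaming it as chunked.
    return {"Transfer-Encoding": "chunked"}, first + more

headers, body = choose_framing(io.BytesIO(b"tiny"), 8)
print(headers)  # {'Content-Length': '4'}
```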
Updated the verification steps. I will address this in a future PR.
For me this is mergeable.
I left a couple of comments that are just nitpicks.
This is exactly what we need. Unfortunately, the way the integration tests are written makes that hard to understand. Not sure if they can be worded or written in another way, but honestly the explanation above should be enough for most situations, I think. It's perfect.
This looks really good to me now especially with the new README. I'd still like to see the unit tests in there but it's not urgent.
Another question I had to ensure my understanding:
So when an HTTPS proxy is used and TE: chunked and the body size is greater than the body_buffer_size, APIcast will buffer, decode, then re-encode the request so that the upstream will still receive the originally TE: chunked request format, right?
No need to twist it too much. It is much simpler. When an HTTPS proxy is used and TE;
@dfennessy your approval is required as you requested changes.
LGTM!
What:
Fix https://issues.redhat.com/browse/THREESCALE-9542
This PR adds support to proxy requests with "Transfer-Encoding: chunked" when used with a proxy server.
Note to reviewers
Please just review the last 2 commits. I will rebase once part 1 is merged.
Verification steps:
Checkout this branch
make runtime-image IMAGE_NAME=apicast-test
cd dev-environments/https-proxy-upstream-tlsv1.3
make certs
make gateway IMAGE_NAME=apicast-test
The request should return 200 OK. Note that the upstream echo API reports that the request included the Transfer-Encoding: chunked header and the expected body.
First, get the APIcast IP address.
Replace 127.0.0.1 with the IP of the APIcast gateway above.